As DOD’s urgent needs processes have evolved, there have been several reviews of DOD’s abilities to rapidly respond to and field needed capabilities. For example, according to senior DOD officials, the department has conducted a study to determine lessons learned from several independent urgent needs processes that might be integrated into the department’s main acquisition process. However, two studies by the Defense Science Board in 2009 found that DOD had done little to adopt urgent needs as a critical, ongoing DOD institutional capability essential to addressing future threats. Most recently, the Ike Skelton National Defense Authorization Act for Fiscal Year 2011 requires DOD to review its processes for the fielding of capabilities in response to urgent operational needs and to consider such improvements as providing a streamlined and expedited approach, clearly defining the roles and responsibilities for carrying out all phases of the process, and establishing a formal feedback mechanism.

We reported in April 2010 on several challenges that affected DOD’s responsiveness to urgent needs. Through our field work in Iraq and analysis of 23 case studies, we found that, with the exception of one system, all the solutions in our case studies were fielded within 2 years of being endorsed by a theater command—which was within DOD’s informally established timeline for satisfying joint urgent operational needs. However, we found that challenges with training, funding, and technical maturity and complexity hindered DOD’s ability to rapidly respond to urgent warfighter needs. The following summarizes these key findings and our recommendations; additional information is provided in our April 2010 report.

Training—We found challenges in training the personnel who process urgent needs requests. For example, while the Army required selected officers to attend training on how to address requirements and identify resources for Army forces, officers at the brigade level responsible for drafting and submitting Army and joint urgent needs requests—and those at the division level responsible for reviewing the requests prior to submission for headquarters approval—were not likely to receive such training. As a result, once in theater, Army officers often faced difficulties drafting, submitting, and reviewing the volume of urgent needs requests, which, according to Army officials, could exceed 200 per month. To address this challenge, we recommended that the Army update its training regimen for officers who initiate and review urgent needs requests. DOD partially concurred, stating that these training issues are applicable across the department and that it would develop additional policy.

Funding—We found that funding was not always available when needed to acquire and field solutions to joint urgent needs. This occurred in part because the Office of the Secretary of Defense had not given any one organization primary responsibility for determining when to implement the department’s statutory rapid acquisition authority or for executing timely funding decisions. We recommended that the Secretary of Defense designate an entity with primary responsibility for recommending use of rapid acquisition authority. The department partially concurred and stated it would develop additional DOD policy for using rapid acquisition authority.
In addition, we found that the Office of the Secretary of Defense had the authority, within certain dollar thresholds, to reprogram funds for purposes other than those specified by Congress at the time of the appropriation. However, in the absence of a high-level authority with primary responsibility for executing such reprogramming or transfer decisions, DOD faced challenges in consistently securing timely cooperation from the services or other components. We recommended that DOD establish an executive council to make timely funding decisions on urgent needs requests. DOD partially concurred, stating it would develop additional DOD policy and rely on existing councils to address our recommendation.

Technical maturity and complexity—We found that attempts to meet urgent needs with immature technologies, or with solutions that are technologically complex, could lead to longer time frames for fielding solutions. We also found that DOD guidance was unclear about who is responsible for determining whether technologically complex solutions fall within the scope of DOD’s urgent needs processes. We recommended that DOD issue guidance to clearly define roles and responsibilities for implementation, monitoring, and evaluation of all phases of the urgent needs process—including applying technological-maturity criteria. DOD concurred, stating that it would develop new policy and update existing policy.

We also reported in April 2010 that DOD had not established an effective management framework for its urgent needs processes. Specifically, we reported that DOD’s guidance for its urgent needs processes (1) was dispersed and outdated; (2) did not clearly define roles and responsibilities for implementing, monitoring, and evaluating all phases of those processes; and (3) did not incorporate all of the expedited acquisition authorities available to acquire joint urgent needs solutions. Further, we found that data systems for the urgent needs processes did not have comprehensive, reliable data for tracking overall results and did not have standards for collecting and managing data. In addition, we reported that the joint process did not include a formal method for feedback to inform joint leadership on the performance of solutions. Finally, we noted that in the absence of a management framework for its urgent needs processes, DOD did not have tools to fully assess how well its processes work, manage their performance, ensure efficient use of resources, and make decisions regarding the long-term sustainment of fielded capabilities. We made several recommendations to DOD to address these findings, and DOD generally concurred with them. In June 2010, the Senate Armed Services Committee urged DOD to address the shortcomings we identified “as quickly as possible.”

In our report being released today, we identified cases of fragmentation, overlap, and potential duplication of effort among DOD’s urgent needs processes and entities. However, the department is hindered in its ability to identify key improvements to its urgent needs processes because it does not have a comprehensive approach to manage and oversee the breadth of its efforts. Further, DOD has not comprehensively evaluated opportunities for consolidation of urgent needs entities and processes across the department. In this new report, we made several recommendations to DOD for improving its management and oversight of urgent needs, and DOD fully concurred with those recommendations.
The following summarizes our key findings and recommendations, which are provided in more detail in the report we publicly release today.

Over the past two decades, the department has established many entities that develop, equip, and field solutions and critical capabilities in response to the large number of urgent needs requests submitted by the combatant commands and military services. Many of these entities were created, in part, because the department had not anticipated the accelerated pace of change in enemy tactics and techniques, which ultimately heightened the need for a rapid response to those requests. While many entities started as ad hoc organizations, several have been permanently established. On the basis of DOD’s and our analysis, we identified at least 31 entities that play a significant role in the various urgent needs processes. Table 1 below shows the 31 entities we identified.

We found that fragmentation and overlap exist among urgent needs entities and processes. For example, there are at least eight processes and related points of entry through which the warfighter can submit a request for an urgently needed capability, including through the Joint Staff and each military service. Entities within these processes then validate the submitted urgent needs request and thus allow it to proceed through their specific process. Moreover, our analysis showed that overlap exists among urgent needs entities in the roles they play as well as in the capabilities for which they are responsible. For example, at the joint level we found six entities involved in facilitating urgent needs requests and five entities involved in providing sourcing support for urgent needs requests. Additionally, several entities have focused on developing solutions for the same subject areas, such as counter-IED and ISR capabilities, potentially resulting in duplication of effort. For example, both the Army and the Marine Corps had their own separate efforts to develop counter-IED mine rollers.

DOD has taken some steps to improve its fulfillment of urgent needs. These steps include developing policy to guide joint urgent needs efforts, establishing a Rapid Fielding Directorate to rapidly transition innovative concepts into critical capabilities, and working to establish a senior oversight council to help synchronize DOD’s efforts. Despite these actions, the department does not have a comprehensive approach to manage and oversee the breadth of its activities to address capability gaps identified by warfighters in theater. Federal internal control standards require detailed policies, procedures, and practices to help program managers achieve desired results through effective stewardship of public resources. However, DOD does not have a comprehensive, DOD-wide policy that establishes a baseline and provides a common approach for how all joint and military service urgent needs are to be addressed—including key activities of the process such as validation, execution, or tracking. Additionally, we found that DOD has a fragmented approach to managing all of its urgent needs submissions and validated requirements. For example, the Joint Staff, the Joint Improvised Explosive Device Defeat Organization (JIEDDO), the military services, and the Special Operations Command have each issued their own, varying guidance outlining the activities involved in processing and meeting their specific urgent needs.
DOD also lacks visibility over the full range of urgent needs efforts—from funding to measuring results. Specifically, we found that DOD does not have (1) visibility over the total costs of its urgent needs efforts, (2) a comprehensive tracking system, (3) a universal set of metrics, or (4) a senior-level focal point. The following summarizes these key findings.

DOD does not have visibility over total costs. DOD cannot readily identify the cost of its departmentwide urgent needs efforts. Based on the information submitted to us in response to our data request, total funding for the fulfillment of urgent needs was at least $76.9 billion from fiscal year 2005 through fiscal year 2010. Because DOD does not have visibility over all urgent needs efforts and costs, it is not fully able to identify the need for key process improvements and adjust program and budgetary priorities accordingly.

DOD does not have a comprehensive tracking system. DOD cannot readily identify the totality of its urgent needs efforts, or the cost of those efforts, because it has limited visibility over all urgent needs submitted by warfighters—both from joint and service-specific sources. Specifically, DOD and the services have disparate ways of tracking urgent needs; some have formal databases for entering information, while others use more informal methods such as e-mail to solicit feedback. For example, the Joint Chiefs of Staff and each of the military services use electronic databases to track capability solutions as they move through the urgent needs process. However, more than a third of the entities involved in the process did not collect or provide the information necessary for the joint or service-based systems to track those solutions. Moreover, there was confusion over whose role it was to collect and input data into these tracking systems.

DOD does not have a universal set of metrics. Our analysis found that the feedback mechanisms across DOD, the Joint Staff, the military services, JIEDDO, and the Special Operations Command are varied and fragmented. In April 2010, we recommended that DOD develop an established, formal feedback mechanism or channel for the military services to provide feedback to the Joint Chiefs of Staff and Joint Rapid Acquisition Cell on how well fielded solutions met urgent needs. The department concurred with the recommendation and stated that it would develop new DOD policy and that the Joint Chiefs of Staff would update the Chairman’s instruction to establish requirements for oversight and management of the fulfillment of urgent needs. However, the majority of DOD urgent needs entities we surveyed for our March 2011 report said that they do not collect all the data needed to determine how well these solutions are performing. Additionally, in April 2010 we recommended that DOD develop and implement standards for accurately tracking and documenting key process milestones such as funding, acquisition, fielding, and assessment, and for updating data-management systems to create activity reports that facilitate management review and external oversight of the process. DOD agreed with these recommendations and noted actions it planned to take to address them. However, our current analysis found that the department lacked a method or metric to track the status of a validated urgent requirement across the services and DOD components, such as whether a requirement currently in development could be applicable to another service.

DOD does not have a senior-level focal point.
DOD’s lack of visibility over all urgent needs requests is due in part to the lack of a senior-level focal point (i.e., gatekeeper) with responsibility for managing, overseeing, and maintaining full visibility over all emerging capability gaps identified by warfighters in theater. At present, the department has not established a senior-level focal point to (1) lead the department’s efforts to fulfill validated urgent needs requirements, (2) develop and implement DOD-wide policy on the processing of urgent needs or rapid acquisition, or (3) maintain full visibility over its urgent needs efforts and the costs of those efforts. We have previously testified and reported on the benefits of establishing a single point of focus at a sufficiently senior level to coordinate and integrate various DOD efforts in areas such as counterterrorism and the transformation of military capabilities.

In addition to not having a comprehensive approach for managing and overseeing its urgent needs efforts, DOD has not conducted a comprehensive evaluation of its urgent needs processes and entities to identify opportunities for consolidation. Given the overlap and potential for duplication we identified, coupled with similar concerns raised by other studies, there may be opportunities for DOD to further improve its urgent needs processes through consolidation. GAO’s Business Process Reengineering Assessment Guide establishes that such a comprehensive analysis of alternative processes should include a performance-based, risk-adjusted analysis of benefits and costs for each alternative. In our current report, we identified and analyzed several options, aimed at potential consolidations and increased efficiencies, to provide ideas for the department to consider in streamlining its urgent needs entities and processes. These options include the following:

- Consolidate into one Office of the Secretary of Defense-level entity all the urgent needs processes of the services and DOD, while allowing the services’ program offices to maintain responsibility for developing solutions.
- Consolidate entities that have overlapping mission or capability portfolios related to urgent needs, such as entities involved in the development of solutions for biometrics.
- Establish a gatekeeper within each service to oversee all key activities to fulfill a validated urgent needs requirement.
- Consolidate within each service any overlapping activities in the urgent needs process, such as the multiple entry and validation points that exist in the Army.

The options we identified were not meant to be exhaustive or mutually exclusive. DOD would need to perform its own analysis, carefully weighing the advantages and disadvantages of the options it identifies to determine the optimal course of action. Additionally, it must be recognized that many entities involved in the fulfillment of urgent needs have other roles as well. However, until DOD performs such an evaluation, it will remain unaware of opportunities for consolidation and increased efficiencies in the fulfillment of urgent needs. In the report we publicly release today, we make several recommendations to promote a more comprehensive approach to planning, management, and oversight of DOD’s fulfillment of urgent needs.
In summary, we are recommending that (1) DOD develop and promulgate DOD-wide guidance across all urgent needs processes that establishes baseline policy for the fulfillment of urgent needs; clearly defines common terms, roles, responsibilities, and authorities; designates a focal point to lead DOD’s urgent needs efforts; and directs the DOD components to establish minimum urgent needs processes and requirements; and (2) DOD’s Chief Management Officer evaluate potential options for consolidation to reduce overlap, duplication, and fragmentation, and take appropriate action. DOD concurred with all of our recommendations and stated that the specific actions it will take to address them will be identified in a report on its urgent needs processes that is required by the Ike Skelton National Defense Authorization Act for Fiscal Year 2011 and due to Congress in January 2012. DOD also stated that the Deputy Chief Management Officer, supported by the military services’ Chief Management Officers, will participate in this end-to-end review and provide oversight and assistance in utilizing process improvement techniques and tools.

Over the past several years we have identified significant challenges affecting DOD’s ability to rapidly respond to urgent needs of the warfighter and to effectively manage and oversee the breadth of its urgent needs processes. It is noteworthy that DOD has recognized these challenges and continues to take steps toward improving its programs. However, until the department holistically examines the entirety of its various urgent needs processes and entities, including evaluating the need for consolidation, and establishes clear and comprehensive policy, it will not be in a position to assure the warfighter, Congress, or the public that its processes are addressing the critical needs of U.S. forces in the most timely, efficient, and effective manner. Given the magnitude of the financial resources at stake, coupled with the need to field urgent needs solutions as rapidly as possible to prevent loss of life or mission failure, it is imperative that DOD’s senior leadership make reforming its urgent needs processes a top priority.

Mr. Chairman, this concludes my statement. I would be happy to answer any questions you or other members of the subcommittee may have at this time. For further information regarding this testimony, please contact William Solis at (202) 512-8365 or solisw@gao.gov. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony are Cary Russell, Assistant Director; Usman Ahmad, Laura Czohara, Lonnie McAllister, John Ortiz, Richard Powelson, Steve Pruitt, Ryan Stott, Elizabeth Wood, Delia Zee, and Karen Zuckerstein. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

This testimony discusses the challenges that the Department of Defense (DOD) faces in fulfilling urgent operational needs identified by our warfighters. Over the course of the wars in Iraq and Afghanistan, U.S.
forces have encountered changing adversarial tactics, techniques, and procedures, which challenged DOD to quickly develop and provide new equipment and new capabilities to address evolving threats. Further, U.S. troops faced shortages of critical items, including body armor, tires, and batteries. DOD's goal is to provide solutions as quickly as possible to meet urgent warfighter needs to prevent mission failure or loss of life. To meet its urgent needs, DOD had to look beyond traditional acquisition procedures, expand the use of existing processes, and develop new processes and entities designed to be as responsive as possible to urgent warfighter requests. In addition to requests for equipment from DOD's existing stocks, warfighters have requested new capabilities, such as: technology to counter improvised explosive devices (IED); technology related to intelligence, surveillance, and reconnaissance (ISR) to provide increased situational awareness; and equipment related to command and control to enhance operations on the battlefield. In meeting urgent needs, it is important for DOD to efficiently use the department's financial resources. DOD has spent billions of dollars over the past several years to address urgent warfighter needs. Our past work on weapons acquisition has shown that the department has often pursued more programs than its resources can support. Additionally, our past work has shown that DOD has had difficulty translating needs into programs, which often has led to cost growth and delayed delivery of needed capabilities to the warfighter.

Today, we are publicly releasing a report that addresses (1) what entities exist within DOD for responding to urgent operational needs, and the extent to which there is fragmentation, overlap, or duplication; (2) the extent to which DOD has a comprehensive approach for managing and overseeing its urgent needs activities; and (3) the extent to which DOD has evaluated the potential for consolidations of its various activities and entities. This statement will first briefly discuss challenges we reported in April 2010 that affected the overall responsiveness of DOD's urgent needs processes and then highlight the key findings and recommendations of today's report. Today's report contributed to our findings in another report being released today that addresses opportunities to reduce potential duplication in government programs.

We reported in April 2010 on several challenges that affected DOD's responsiveness to urgent needs: (1) Training: We found challenges in training the personnel who process urgent needs requests. For example, we found that while the Army required selected officers to attend training on how to address requirements and identify resources for Army forces, officers at the brigade level responsible for drafting and submitting Army and joint urgent needs requests--and those at the division level responsible for reviewing the requests prior to submission for headquarters approval--were not likely to receive such training. (2) Funding: We found that funding was not always available when needed to acquire and field solutions to joint urgent needs. This result occurred in part because the Office of the Secretary of Defense had not given any one organization primary responsibility for determining when to implement the department's statutory rapid acquisition authority or to execute timely funding decisions.
(3) Technical maturity and complexity: We found that attempts to meet urgent needs with immature technologies or with solutions that are technologically complex could lead to longer time frames for fielding solutions to urgent needs. Also, we found that DOD guidance was unclear about who is responsible for determining whether technologically complex solutions fall within the scope of DOD's urgent needs processes.

In our report being released today, we identified cases of fragmentation, overlap, and potential duplication of effort among DOD's urgent needs processes and entities. However, the department is hindered in its ability to identify key improvements to its urgent needs processes because it does not have a comprehensive approach to manage and oversee the breadth of its efforts. Many of these entities were created, in part, because the department had not anticipated the accelerated pace of change in enemy tactics and techniques that ultimately heightened the need for a rapid response to the large number of urgent needs requests submitted by the combatant commands and military services. While many entities started as ad hoc organizations, several have been permanently established. DOD has taken some steps to improve its fulfillment of urgent needs. These steps include developing policy to guide joint urgent needs efforts, establishing a Rapid Fielding Directorate to rapidly transition innovative concepts into critical capabilities, and working to establish a senior oversight council to help synchronize DOD's efforts. Despite these actions, the department does not have a comprehensive approach to manage and oversee the breadth of its activities to address capability gaps identified by warfighters in theater. In addition to not having a comprehensive approach for managing and overseeing its urgent needs efforts, DOD has not conducted a comprehensive evaluation of its urgent needs processes and entities to identify opportunities for consolidation. Given the overlap and potential for duplication we identified, coupled with similar concerns raised by other studies, there may be opportunities for DOD to further improve its urgent needs processes through consolidation.

In the report we publicly release today, we make several recommendations to promote a more comprehensive approach to planning, management, and oversight of DOD's fulfillment of urgent needs. In summary, we are recommending that (1) DOD develop and promulgate DOD-wide guidance across all urgent needs processes that establishes baseline policy for the fulfillment of urgent needs; clearly defines common terms, roles, responsibilities, and authorities; designates a focal point to lead DOD's urgent needs efforts; and directs the DOD components to establish minimum urgent needs processes and requirements; and (2) DOD's Chief Management Officer evaluate potential options for consolidation to reduce overlap, duplication, and fragmentation, and take appropriate action.
More than 2.7 million miles of pipeline transport roughly two-thirds of our nation’s domestic energy supply. These pipelines carry gas and hazardous liquids from producing wells to processing plants and eventually to end users, such as businesses and homes. (See fig. 1.) Within this nationwide system, there are three main types of pipelines—gathering, transmission, and distribution. Based on annual reports submitted to PHMSA by pipeline operators at the end of 2015, there were about 18,000 miles of gas gathering pipelines, 301,000 miles of gas transmission pipelines, and 2.2 million miles of gas distribution pipelines regulated by PHMSA. In addition, in 2015 there were about 4,000 miles of liquid gathering pipelines and 205,000 miles of hazardous liquid transmission pipelines regulated by PHMSA.

Gathering pipelines: Gas gathering pipelines collect natural gas and other gases from production areas, while hazardous liquid gathering pipelines collect oil and other petroleum products from oil well heads. Gathering pipelines operate at pressures ranging from about 5 to 800 pounds per square inch (psi). These pipelines then typically transport the products to processing facilities, which in turn refine the products and send them to transmission pipelines.

Transmission pipelines: Transmission pipelines carry gas or hazardous liquids, sometimes over hundreds of miles, to communities and large-volume users (e.g., factories). Transmission pipelines tend to have the highest pressures of the three types of pipelines, generally operating at pressures ranging from 400 to 1,440 psi.

Gas distribution pipelines: Gas distribution pipelines transport natural and other gas products to residential, commercial, and industrial customers. These pipelines tend to operate at lower pressures—0.25 to 100 psi.

As noted earlier, pipeline material and weld failures and corrosion together are among the leading causes of significant incidents from 2010 through 2015, as reported to PHMSA by pipeline operators. (See fig. 2.) Material failures can result from impurities introduced when the steel is manufactured, from defects introduced when the steel is converted into pipe, or from failures in the welding or joining of pipeline segments, among other causes. Corrosion can occur on the exterior or interior of a metallic pipeline when electrons from the metal take part in electrochemical reactions, often involving water or oxygen, that degrade the pipeline. External corrosion may result when the metal surface of the pipe is exposed to groundwater or soil environments that increase the electrical conductivity of the pipeline and accelerate the corrosion process. External corrosion is also a factor in stress corrosion cracking, in which stress on the pipeline from high or fluctuating operating pressures, combined with corrosive environmental conditions, causes cracks to form in the pipeline material. Internal corrosion occurs inside the pipeline and may be caused by the presence of water, corrosive materials, or bacteria.

PHMSA has established regulations that identify requirements for pipeline materials and corrosion prevention technologies in the gas and hazardous liquid pipeline network. PHMSA’s regulations identify design standards for pipelines and regulate what materials can be used under different operating conditions and pressures. For corrosion prevention, external coatings and a technology known as cathodic protection are required for metallic pipes installed beginning in 1971.
External coatings are a protective layer of plastic material or other chemical compounds applied and bonded across the metallic surface of a pipe. Coatings are applied prior to or during installation, and coat both the pipe and the welds that join pipeline segments together. However, external coatings can be damaged by construction or degrade over time. Therefore, after the external coatings are applied, cathodic protection is added. Cathodic protection involves applying an electrical current onto the pipeline to control external corrosion. External coatings and cathodic protection thus work together to protect the pipeline by disrupting the chemical process that leads to corrosion. (See fig. 3.)

Under PHMSA’s pipeline safety program, pipeline operators take primary responsibility for the integrity of their pipelines, and PHMSA conducts inspections to ensure operator compliance with federal safety regulations. For example, the Pipeline Safety Improvement Act of 2002 required PHMSA to implement a risk-based approach to gas and hazardous liquid transmission pipeline safety, an approach known as integrity management. The integrity management program requires operators to, among other things, systematically identify threats and mitigate risks to pipeline segments located in high consequence areas, which include highly populated or environmentally sensitive areas. PHMSA and state pipeline safety offices conduct inspections to oversee operators’ compliance with this and other federal requirements.

PHMSA has also established regulations requiring operators to ensure that personnel are qualified to perform certain tasks, including corrosion control activities such as monitoring cathodic protection. In its operator qualification regulations, PHMSA has stated its objective is to reduce the risk of accidents on pipelines attributable to human error. These regulations require that operators develop a written qualification plan that identifies a list of covered tasks for personnel as well as an approach to evaluate whether individuals are qualified to perform those tasks. (See table 1.) Operator qualification plans may include provisions to provide training, as appropriate, to ensure that individuals performing covered tasks have the necessary knowledge and skills to perform the tasks in a manner that ensures safe operation. The regulations do not prescribe how operators must evaluate personnel to ensure they are qualified, though they state the evaluation may take the form of a written or oral exam, on-the-job performance assessment, or a simulation, among other methods.

PHMSA and state pipeline safety offices work together to oversee and inspect federally regulated gas and hazardous liquid pipelines. In general, PHMSA has primary authority to regulate and enforce interstate pipeline safety, including the design, construction, operation and maintenance of pipelines certified by the Federal Energy Regulatory Commission or crossing state lines. In the nine states designated as interstate agents, state pipeline inspection staff supplements PHMSA inspections, but PHMSA maintains enforcement authority over these pipelines. Regarding intrastate pipelines, state pipeline safety offices may assume inspection and enforcement responsibility for intrastate pipelines in their states after annually certifying to PHMSA that they are complying with applicable federal standards for their oversight.
PHMSA currently has certifications with the 48 contiguous states, the District of Columbia, and Puerto Rico for intrastate gas pipelines within their boundaries, and with 15 states for hazardous liquid intrastate pipelines. If a state authority does not apply for annual certification, inspection and enforcement activities for all intrastate facilities in that state remain the responsibility of PHMSA.

PHMSA’s pipeline inspectors and the nine interstate agents conduct periodic integrated inspections of interstate pipelines. These inspections look at the entirety of an operator’s pipeline safety approach, including ensuring operators meet operator qualification requirements. PHMSA conducts its integrated inspections on individual pipeline segments, known as inspection systems. These inspection systems comprise one or more smaller pipeline units. PHMSA’s Office of Pipeline Safety employs over 200 staff across headquarters and 5 regional offices, with about 130 of those staff involved in inspections and enforcement of interstate pipelines.

As part of its oversight activities, PHMSA also collects a range of data on pipeline materials and corrosion prevention through annual operator reporting and incident reports, and during its integrated inspections. The data describe various characteristics of the pipeline, including the type of material (e.g., steel, plastic, or composite), the diameter of the pipe, and when it was installed. The corrosion prevention data include information on whether the pipeline is coated and cathodically protected, and other characteristics associated with corrosion.

The vast majority (over 95 percent) of U.S. gas and hazardous liquid pipeline miles that PHMSA regulates are constructed of either steel or plastic, with relatively minor use (less than 5 percent) of other materials, including composites and iron, according to our analysis of PHMSA data from 2015. (See table 2.) The extent of steel and plastic use varies in different parts of the pipeline network (gathering, transmission, and distribution) due to operating conditions and other factors, as discussed below. For example, nearly all pipeline miles in the transmission network consist of steel pipelines, while plastic pipelines represent over half of pipeline miles in the gas distribution network. In addition, the ratio of these materials within the network has changed over time. PHMSA’s data indicate that, from 2010 through 2015, the percentage of plastic pipeline miles in the gas distribution network increased from 58 to 62 percent. According to industry stakeholders we interviewed, the vast majority of new and replacement distribution pipes are made of plastics. The composition and use of these materials can vary widely:

Steel: Steel is widely used in the gathering, transmission, and distribution segments of the pipeline network and can be manufactured in various grades (strengths). Each grade refers to a specific strength range and chemical composition of iron and a small percentage of various elements, including carbon and manganese. According to operators and expert stakeholders we interviewed, the grade of steel used in a pipeline depends on a variety of factors, including the required operating pressure to propel the product through the pipeline, the operating environment, and cost. PHMSA regulations establish a design formula to determine the maximum allowable operating pressures for pipelines constructed from various grades of steel; a simple illustration of this type of calculation appears after the materials rundown below.
In practice, steel’s strength to withstand high operating pressures and other design characteristics generally facilitate its use across all portions of the pipeline network. Corrosion-resistant steel alloys, such as stainless steel, are used in limited circumstances due to their high costs, according to a few operators and expert stakeholders. Such alloys are used primarily in offshore applications, in limited circumstances to gather particularly corrosive oil or gas products, or in high consequence areas.

Plastic: Plastics are used primarily in gas distribution pipelines, along with a smaller percentage of gathering pipelines. Specifically, operators and expert stakeholders identified polyethylene, a plastic used to make many common household products such as bottles and food wrap, as the most commonly used plastic, particularly within the gas distribution network. According to PHMSA data, 99 percent of plastic pipeline miles in the distribution network are composed of polyethylene. An operator and an industry stakeholder also identified polyamide, a nylon-based plastic, as an emerging pipeline material due to its increased strength, although it is currently used in less than 1 percent of distribution pipeline miles according to PHMSA data. Current PHMSA regulations permit plastic pipelines to be used at pressures up to 100 psi, with exceptions for certain polyethylene pipelines that can be used up to 125 psi and certain polyamides up to 200 psi. In 2015, PHMSA proposed changes to these regulations that would allow use of polyamide plastic pipelines at even higher operating pressures.

Composites: Although composites, such as fiberglass, fiber-reinforced plastic, and other materials, represent a very small portion of the nation’s pipeline network miles, two expert stakeholders reported increasing use of these materials, primarily in gathering. For example, one expert stakeholder said that fiber pipe was starting to be adopted in place of steel for gathering in certain situations because it can be used at higher pressures than polyethylene. Operators and expert stakeholders also told us that composite pipes are generally corrosion-resistant and are easier to transport and install, as they may come in spoolable reels and do not require welding. However, PHMSA officials noted that the design and materials for composite pipelines can vary substantially, and there are few applicable standards or requirements for composite materials. Consequently, composite materials need to be vetted individually for each specific use, according to PHMSA. As a result, PHMSA requires operators to obtain special permits to use composite materials, and the maximum allowable operating pressure can vary depending on the type of material proposed. From 2010 through March 2017, PHMSA had approved 8 of 14 special permit applications that proposed the use of composite materials in the pipeline network. PHMSA officials stated that the pipeline industry is working to develop standards for these materials, and the industry has petitioned PHMSA to incorporate any such standards into its regulations.
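To make the design-formula point above concrete, the following is a minimal sketch of a Barlow-type design pressure calculation of the general form used in PHMSA's steel pipeline design regulations (49 CFR 192.105), in which design pressure is derived from yield strength, wall thickness, outside diameter, and design, joint, and temperature factors. The steel grades, dimensions, and factor values below are illustrative assumptions, not figures taken from this report.

```python
# Illustrative sketch of a Barlow-type design pressure calculation, similar in
# form to the steel pipeline design formula in 49 CFR 192.105:
#   P = (2 * S * t / D) * F * E * T
# All input values below are hypothetical examples, not regulatory guidance.

def design_pressure_psi(smys_psi, wall_in, diameter_in,
                        design_factor, joint_factor=1.0, temp_factor=1.0):
    """Design pressure (psi) from yield strength S, wall thickness t, outside
    diameter D, and the design (F), joint (E), and temperature (T) factors."""
    return (2 * smys_psi * wall_in / diameter_in) * design_factor * joint_factor * temp_factor

# Hypothetical comparison of two steel grades at the same 30-inch diameter.
x52 = design_pressure_psi(smys_psi=52_000, wall_in=0.500, diameter_in=30, design_factor=0.72)
x70 = design_pressure_psi(smys_psi=70_000, wall_in=0.375, diameter_in=30, design_factor=0.72)
print(f"X52, 0.500-in wall: ~{x52:.0f} psi")  # ~1,248 psi
print(f"X70, 0.375-in wall: ~{x70:.0f} psi")  # ~1,260 psi
```

Under these assumed inputs, a higher grade (X70) steel supports roughly the same transmission-level pressure as a lower grade (X52) steel with a thinner wall, which is consistent with the industry practice, discussed below, of using higher grade, thinner-wall steel to reduce material costs.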
Operators and expert stakeholders identified a variety of benefits and limitations associated with commonly used pipeline materials, such as the ability or inability to accommodate high pressures and resistance or susceptibility to corrosion. More specifically, for steel, plastic, and composite pipelines, they identified trade-offs among these materials, as detailed in table 3. For example, while steel provides strength and can accommodate higher operating pressures compared to plastic and composites, it is susceptible to corrosion and requires the use of corrosion protection technologies. In contrast, plastic and composite materials are generally corrosion-resistant, except when metallic components are used in some composite pipes that are reinforced with steel. However, NTSB officials noted that assessing the integrity management of plastic pipelines can be challenging because there are limitations in the established technologies currently available to assess flaws in plastic pipe or certain joints, and the industry has limited data regarding the long-term reliability of plastic pipelines and associated components.

Although several operators and expert stakeholders told us that steel generally costs more per unit than plastic, the relative costs of pipelines made of these materials depend on an interplay of factors, including pipeline design, installation, and maintenance.

Design: According to almost all the operators and expert stakeholders we interviewed, the design of a pipeline, including the intended operating pressure, is a significant factor in the selection of a pipeline material, and a majority of operators and expert stakeholders we interviewed said that pipeline diameter and wall thickness can affect the cost of pipeline materials. Specifically, at lower diameters and pressures, such as in distribution and some gathering pipelines, plastic often has a cost advantage, while at the larger diameters and pressures of transmission pipelines, steel is the only cost-effective material. Higher diameters and pressures necessitate increasingly thicker walls, which makes plastic cost prohibitive, according to operators and expert stakeholders. Steel, in contrast, can be manufactured at higher grades, allowing thinner but stronger walls, or at lower grades, producing thicker but lower-strength walls. Operators and experts told us that because pipeline steel is purchased by weight, the pipeline industry has increased its use of higher grade, thinner-wall steel in recent years to reduce material costs while maintaining higher strengths.

Installation: Operators and expert stakeholders we interviewed told us that installation is a major cost component for pipelines and that these costs are generally higher for steel than for other materials. Generally, in circumstances where either steel or plastic could be used, operators and expert stakeholders told us that installation of steel is more expensive than plastic. For example, a steel pipeline requires that a trench be dug and prepared before installation, while plastic can often be plowed into the ground without preparation, reducing time and expense. Joining of pipe sections, by either welding (steel) or fusing (plastic), is an important component of installation and can also add to the cost. Operators and expert stakeholders also told us that welding steel is more difficult and time consuming than fusing plastic, adding to the cost. They also noted that composite pipeline segments can be challenging to join with other segments in the pipeline network.

Maintenance: For steel pipelines, over half of the operators and expert stakeholders we interviewed stated that material-specific maintenance costs to prevent corrosion can affect the overall life-cycle cost of the pipeline.
For example, several operators and expert stakeholders said that while using higher grade steel allows operators to reduce overall steel material expense, higher grade steel pipelines have thinner walls and may have less corrosion allowance—that is, the amount of material that may corrode without affecting the integrity of the pipeline. As such, higher grade steels can result in higher maintenance costs associated with monitoring and corrosion prevention, according to one expert stakeholder. Operators and expert stakeholders noted that while plastic pipelines do not have corrosion prevention maintenance costs, they are more susceptible than steel to third-party damage that requires repair.

Operators and expert stakeholders we interviewed stated that the primary technologies to prevent external corrosion are coatings and cathodic protection, and these tools are widely used across the pipeline network. As previously noted, PHMSA regulations require external coatings and cathodic protection for all metallic pipes installed beginning in 1971. According to our analysis of PHMSA operator-submitted data, operators have externally coated and cathodically protected over 96 percent of steel gathering and transmission pipelines and 85 percent of steel distribution pipelines across the federally regulated pipeline network. A lower percentage of steel distribution pipelines are externally coated and cathodically protected because distribution networks in many areas were installed before 1971. According to PHMSA officials, these older, unprotected steel distribution pipelines are often replaced with plastic, which reduces the total mileage of unprotected steel distribution pipelines.

Coatings and cathodic protection offer important safety benefits to protect steel pipelines from external corrosion, but these technologies also have limitations in their effectiveness. Operators and expert stakeholders generally agreed that coatings and cathodic protection are complementary technologies and function most effectively when used together. Specifically, coatings provide a protective barrier to the pipeline surface, and if this barrier is compromised, cathodic protection delivers an electric current to the exposed area to inhibit corrosion. In addition, over half of the operators and expert stakeholders we interviewed stated that these technologies are also used to prevent stress corrosion cracking in steel pipelines. However, these technologies have some limitations. For example, according to operators and expert stakeholders, some coatings can be difficult to install and apply in the field, and all coatings can deteriorate over time. They also said that a variety of different coatings exist and that their effectiveness can vary based on operating factors, particularly extreme temperatures, which can disbond coatings from the pipe surface. Operators and expert stakeholders also told us that the effectiveness of cathodic protection can be limited by “shielding,” which occurs when the electrical current is obstructed from reaching the pipeline by obstacles such as rocks, failed coatings, or interference from nearby electric power cables. (See table 4.)

Operators and expert stakeholders we interviewed identified a variety of factors that can affect the cost of these technologies.
According to operators and expert stakeholders, coatings and cathodic protection are generally a cost-effective way to protect steel pipelines against external corrosion and stress corrosion cracking, and they said that these technologies represent a relatively small portion of total pipeline cost. Factors that can affect the overall cost of coatings include the type of coating; application costs, including application of coating to pipeline joints in the field after welding; and maintenance of the coating (which requires excavation, inspection, and repair). Factors that can affect the overall cost of cathodic protection include initial installation of equipment; the cost of providing power, including in remote locations where power is not readily available; the need to increase electrical power over time to protect the pipeline as coatings degrade; and ongoing monitoring and maintenance.

Operators and expert stakeholders also identified internal corrosion prevention technologies, along with their benefits and limitations. (See table 5.) According to PHMSA, many interrelated technical factors can affect the likelihood, aggressiveness, and location of internal corrosion. For example, certain types of internal corrosion are caused by chemical reactions between the material being transported and the wall of the pipeline. In these cases, pipeline operators stated that they typically inject “inhibitors,” chemical compounds that inhibit these chemical reactions. The type of chemical compound injected will depend on the type of product, cost, availability, and environmental effect. In other cases, pipeline operators stated that they can use devices known as “cleaning pigs.” Cleaning pigs are electronic devices with cleaning brushes attached to them that run through the inside of the pipeline to scrub it and remove water and other contaminants from the pipeline. Operators and expert stakeholders also emphasized the importance of controlling pipeline operating conditions to prevent internal corrosion, including maintaining sufficient flow and velocity of products in the pipeline to reduce the accumulation of water and contaminants.

A variety of factors affect the cost of internal corrosion prevention technologies. First, operators and expert stakeholders noted that labor and equipment, such as the infrastructure needed to launch and receive cleaning pigs, can be expensive. Second, they noted that the use of cleaning pigs can temporarily reduce the flow rate of the product, so, while not necessarily affecting costs, the operator’s revenue could be affected by the use of the technology. Third, the greater the internal corrosion threat, the greater the use of inhibitors and cleaning pigs required, thereby increasing costs for the pipeline operator. Similar to coatings and cathodic protection, only two of the operators and expert stakeholders we interviewed provided specific information on the costs of these technologies, and they generally noted that the costs vary with the type of technology used.

Gas and hazardous liquid pipeline operators use several sources to train personnel on pipeline corrosion, including in-house training and third-party programs, according to our interviews with eight operators and nine other stakeholders, including unions, training providers, and industry associations.
According to operator qualification regulations from PHMSA, operators have discretion to determine the training approaches they provide to ensure personnel are qualified. Operators are also responsible under the regulations for ensuring any contractor personnel they hire for operations and maintenance tasks are qualified, even if the contractors are already trained for those tasks. The operators we interviewed told us that PHMSA’s operator qualification regulations provide flexibility to tailor their operator qualification program and any corresponding training program to their operational needs. In practice, operators and stakeholders told us this flexibility allows operators to use several training sources and approaches to supplement their operator qualification plans.

Internal training programs: Operators and stakeholders noted internal training programs vary across companies, depending on factors such as the type of pipeline, environment, and staff resources. All of the eight operators we spoke with provided in-house training programs depending on the needs of the company. Six of the eight operators said their in-house training included methods such as on-the-job training, mentoring programs, apprenticeships, and online training. Such training programs teach skills related to corrosion prevention such as applying pipe coating, conducting cathodic protection surveys, and examining the soil surrounding the pipe. Operators we interviewed also identified different approaches for retraining staff to maintain qualification. For example, one hazardous liquid operator requires all corrosion technicians and contractors to take training and assessments every 3 years, while another hazardous liquid operator stated that it assigns retraining intervals from 2 to 6 years based on the risk associated with each task.

Third-party training providers: In addition to internal training, operators also frequently use third-party training providers, according to operators and stakeholders we interviewed. All eight operators and six stakeholders said operators typically use third-party providers, such as industry associations and colleges, for general corrosion training programs. For example, NACE International, formerly the National Association of Corrosion Engineers (NACE), offers two training programs on corrosion prevention technologies: a series of courses on pipeline coatings and a series of courses on cathodic protection. All eight operators said they use NACE training, and six operators said they consider NACE certifications to be the industry standard for hiring corrosion personnel. Several operators also reported that their personnel attended the Appalachian Underground Short Course, which offers a variety of corrosion-related courses at four levels of difficulty, along with a separate course on pipeline coatings. Operators also cited Purdue University’s Corrosion College and the Midwest Energy Association’s EnergyU as frequently used sources of corrosion training.

Industry association guidance: Operators also make use of industry associations’ guidance for training, according to one operator and three stakeholders. For example, operators may consult practices from the American Society of Mechanical Engineers, which offers guidance on training programs and identifies 170 tasks personnel should be able to perform, including corrosion-related tasks.
Operators also use recommended practices from the American Petroleum Institute, including guidance that identifies 99 pipeline operational safety tasks, such as corrosion-related tasks.

Union training for members: Unions are another source of pipeline corrosion training. Two of the three unions we spoke with provide pipeline training to their members related to corrosion prevention. For example, one union representing contractors uses a third party to provide training for its members on pipelines using its national training fund. The course does not specifically cover corrosion but addresses pipe damage and abnormal operating conditions such as corrosion. Staff from a second union that represents contractors said that they train members to their own national standards for specific corrosion-related tasks, such as those related to pipeline coating and cathodic protection. The staff stated that personnel must have completed the union’s pipeline technical course to be dispatched to job sites. However, operators must separately ensure the personnel are qualified to perform the specific tasks they are contracted to complete, as discussed below. A third union, which represents operator employees, said that while the union does not provide formal training, its members receive training from the operators.

Although PHMSA’s operator qualification regulations allow operators flexibility in training approaches, operators and contractors identified several challenges. In particular, operators told us that they rely on contractors for a variety of corrosion-related tasks, in part because of limited resources and a need for specialized expertise. Because approaches to training and operator qualification vary across the industry, operators have difficulty verifying contractor qualifications, and contractor training and qualifications may not transfer among operators.

Operator challenges: Operators and stakeholders identified challenges in ensuring that contractor personnel have the skills and abilities to carry out various corrosion-related tasks associated with PHMSA’s operator qualification regulations, known as covered tasks. Although, according to the regulations, operators are responsible for ensuring their contractors are qualified, seven of the eight operators we spoke with said verifying contractor qualifications was challenging. For example, an operator we interviewed said that even if a contractor has completed a training program from a union or third-party provider, the contractor may not be trained or have the experience to fulfill the operator’s needs, and the operator may need to separately evaluate the contractor’s ability to perform covered tasks. In addition, three operators and two unions said qualification evaluations do not always accurately reflect contractors’ skills and abilities. For example, one operator said contractor personnel may have passed an evaluation to install cathodic protection, but they may not have been trained to complete important parts of the installation, such as the use of specialized tools to measure electrical current.

Contractor challenges: Operators and stakeholders also said contractors’ training and qualifications are not always transferrable, though perspectives on the severity of the challenge and approaches to address it varied. Three operators and one union stated that an evaluation is not always portable to different companies, which one union said can be a barrier to contractors obtaining work.
For example, one operator said it requires contractors to undergo training and evaluations from specific, third-party providers to demonstrate they are qualified to perform 31 specific covered tasks related to corrosion prevention in their operator qualification plan and noted that the operator does not accept contractors with qualifications from other providers. Furthermore, three operators we interviewed noted that even if contractors have been previously evaluated by the operator, they may need to be retested for each covered task by the same operator or by a third-party accepted by the operator. The two unions representing contractors we spoke with had different perspectives on the portability of evaluations and related training. One union representing contractors said lack of portability was a challenge for its members, who may have to take duplicative training and might lose income while completing an evaluation before starting a job. Another union representing contractors said that while each operator usually prefers its own internal training methods, portability was not a significant issue for its members. Operators and other stakeholders we spoke with identified mechanisms under way to address these challenges. Six pipeline operators told us they currently rely on third-party companies to facilitate the verification and portability of qualifications in the pipeline industry. For example, one training vendor stated that it maintains an operator qualification program for over 140 pipeline operators across the country. This vendor said it customizes covered tasks’ lists based on each client’s needs, though some tasks are common, such as those related to coating and cathodic protection. The vendor said operators may hire it to conduct on-site evaluations to qualify personnel and train operator staff to conduct evaluations. Stakeholders and operators also identified broader solutions to overcome these challenges across multiple operators. For example, one industry organization representing hazardous liquid pipeline operators said there is currently an industry initiative examining challenges related to the portability of training. In addition to the above challenges operators and stakeholders cited, PHMSA has taken steps to update its regulations related to corrosion prevention training. In July 2015, PHMSA proposed changes to its regulations through the rulemaking process to provide additional direction on pipeline training and operator qualification. More specifically, PHMSA proposed to clarify topics unaddressed in the initial version of these regulations published in 1999. First, as noted above, PHMSA’s regulations identify training as one option to ensure personnel qualification, but operators have discretion to determine the extent of training to provide. The proposed changes to PHMSA’s existing operator qualification regulations would, among other things, require operators to provide training for personnel who perform operator-defined covered tasks, though operators would have discretion in determining what training to provide for employees and contractors covered by the regulations. Second, PHMSA has noted that since the current regulations are not prescriptive, the resulting flexibility makes it difficult to measure an operator’s compliance with the rule. Moreover, PHMSA officials said that operators do not always review their covered tasks, evaluations, and procedures to ensure they are effective. 
The proposed changes would require pipeline operators to evaluate the effectiveness of their operator qualification program and retain records of these evaluations. Third, the current operator qualification regulations cover activities on pipelines after they are installed, but there are no requirements to cover new construction tasks. As a result, operators are not required to ensure that personnel employed or contracted to construct new pipelines meet specific qualifications to perform construction tasks. The proposed changes would expand the scope of the operator qualification regulations to cover new pipeline construction and other currently uncovered tasks. PHMSA issued a final rule in January 2017 addressing other pipeline safety issues considered in the 2015 proposed changes, but it did not issue a decision on the proposals related to the above topics. Specifically, as part of the final rule, PHMSA noted that it expects to publish an additional final rule on operator qualifications in the near future, after it considers and evaluates comments received from stakeholders.

PHMSA uses data on pipelines and corrosion collected from operators in its Risk Ranking Index Model (referred to as RRIM) to determine the frequency of PHMSA's inspections of operators based on threats to pipeline integrity, such as ineffective coatings. In recent years, PHMSA has taken steps to improve the quality of the data used in RRIM, including reviewing operator-reported data for outlier values. PHMSA officials designed RRIM using their professional judgment, and they did not document the rationale or justification for key decisions, including the selection of threat factors and their associated weights. Moreover, PHMSA has not used data to assess the model's overall effectiveness and lacks a process to do such an evaluation. Without documentation and a data-driven evaluation process, both of which federal management principles call for, PHMSA cannot demonstrate that RRIM effectively targets the agency's limited inspection resources to the greatest pipeline threats.

Since 2011, PHMSA has used data on pipelines and corrosion prevention, along with other data elements, in a risk ranking model to prioritize pipelines for inspection and manage its inspection resources. The purpose of PHMSA's RRIM is to generate a risk score for each federally inspected pipeline and help determine the frequency of inspection. RRIM incorporates data on a variety of pipeline characteristics, which PHMSA calls threat factors, including a few associated with material and corrosion failures. Those threat factors include steel pipe that lacks a protective external coating, also known as "bare steel"; steel pipe that was coated ineffectively, in such a way that the external coating may no longer adhere to the pipe; and steel pipe manufactured using low frequency electric-resistance welding, a technique common from the 1920s until the 1970s that is susceptible to catastrophic failure and certain types of corrosion. PHMSA inspectors collect these data from operators during integrated inspections for each pipeline segment, or unit, they inspect. PHMSA officials said RRIM is designed to incorporate various threats and is not limited to material or corrosion threats. Other threat factors that PHMSA uses in RRIM include commodity type, recent significant incidents, and recent enforcement actions.
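The paragraphs that follow describe how RRIM combines these threat factors into unit and system risk scores that drive inspection frequency: the weights of the threat factors present on a unit are summed, multiplied by a consequence index, and averaged across the units of an inspection system, and the resulting score is mapped to a risk tier and inspection interval. As a rough illustration of that kind of calculation, the following Python sketch walks through those steps. The threat-factor weights (other than the weight of 2 for ineffective coating that PHMSA reports using) and the consequence index values are hypothetical placeholders rather than PHMSA's actual parameters; the tier thresholds and 3-, 5-, and 7-year intervals reflect the values PHMSA officials described.

```python
# Minimal sketch of a RRIM-style threat-factor scoring calculation.
# Most parameter values below are illustrative placeholders, not PHMSA's.

THREAT_WEIGHTS = {
    "bare_steel": 3,                   # placeholder weight
    "ineffective_coating": 2,          # PHMSA reportedly assigns a weight of 2
    "low_frequency_erw_pipe": 3,       # placeholder weight
    "recent_significant_incident": 4,  # placeholder weight
    "recent_enforcement_action": 2,    # placeholder weight
}

def unit_risk_score(threats_present, consequence_index):
    """Sum the weights of the threat factors present on a pipeline unit,
    then multiply by the unit's consequence index."""
    return sum(THREAT_WEIGHTS[t] for t in threats_present) * consequence_index

def system_risk_tier(unit_scores):
    """Average unit scores across an inspection system and map the result
    to a risk tier and minimum inspection interval in years."""
    system_score = sum(unit_scores) / len(unit_scores)
    if system_score >= 30:
        return system_score, "high", 3
    if system_score > 5:
        return system_score, "medium", 5
    return system_score, "low", 7

# Example: a two-unit inspection system with made-up inputs.
units = [
    unit_risk_score({"ineffective_coating", "bare_steel"}, consequence_index=4),
    unit_risk_score({"low_frequency_erw_pipe"}, consequence_index=2),
]
score, tier, interval = system_risk_tier(units)
print(f"System score {score:.1f}: {tier} risk, inspect at least every {interval} years")
```

The sketch is meant only to show how the pieces of a model like RRIM fit together; as discussed below, PHMSA set the actual weights, thresholds, and intervals using professional judgment and did not document the rationale for those choices.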
According to PHMSA officials, integrated inspections are tailored to each operator and include reviews of operator maintenance, repair, and other records and visits to pipeline locations to assess cathodic protection or observe other activities. On an annual basis, PHMSA uses RRIM to calculate a risk score for each pipeline unit to determine the frequency of integrated inspections. PHMSA assigns a weight to each threat factor in RRIM, based on data the agency collects. For example, PHMSA assigns a weight of 2 to pipeline units where operators report that ineffective coating is present, as shown in table 6. The weights are then added together and multiplied by the consequence index of the unit. The resulting number is the unit risk score, which is averaged across the units that comprise each inspection system, the level at which PHMSA conducts integrated inspections. The inspection system's risk score determines whether the system is assigned to the high, medium, or low risk tier, and inspected at least every 3, 5, or 7 years, respectively.

Annually, PHMSA officials use RRIM to identify inspection system priorities for the next year, based on the risk tier and the amount of time since the system's most recent inspection. Each year PHMSA inspects a portion of the total number of inspection systems, which in 2016 totaled 655 systems. For example, in 2016, PHMSA used RRIM to prioritize a list of 79 systems to be inspected in 2017. Of these systems prioritized for inspection, 29 percent were considered high risk, 53 percent were medium risk, and 18 percent were low risk. In addition, based on the criteria PHMSA established for inspecting high, medium, and low risk systems at least once every 3, 5, and 7 years, respectively, each of these 79 systems was due for inspection. PHMSA officials said this approach allows them to allocate inspection resources to pipelines considered higher risk, while ensuring that all inspection systems are inspected at least every 7 years. PHMSA officials noted that a risk-based inspection approach is necessary given the size of the federally regulated pipeline network and the number of its inspection staff.

PHMSA officials said RRIM is the primary tool they use across their regional offices and interstate agents to prioritize and schedule inspections but said they also consider input from regional inspection staff as part of this process. Each year, PHMSA headquarters officials provide the list of inspection priorities generated by RRIM to the regional offices, and inspectors have the opportunity to review the list and provide feedback. Regional inspectors told us that during these reviews they use their knowledge of local operators and pipelines to recommend that certain pipeline units be given higher or lower priority than they are ranked by RRIM. Regional inspectors said RRIM is generally effective in prioritizing inspections, but there are threats that it may not capture, such as the management experience of an operator or whether there has been recent public concern regarding a particular pipeline. For those interstate pipelines inspected by states' pipeline safety offices designated as interstate agents, state officials said they also have the opportunity to review PHMSA's inspection priorities and suggest additional priorities. In recent years, PHMSA has taken steps to improve the quality of the data used in RRIM.
In June 2012, the Department of Transportation's Office of Inspector General identified a number of long-standing data management deficiencies at PHMSA that have limited its ability to conduct meaningful analysis to improve its oversight. Among the concerns, the Inspector General found that shortcomings in PHMSA's data management and quality limited the usefulness of operator incident and annual reports in identifying pipeline safety risks. PHMSA officials told us that in recent years they have implemented a number of procedures to limit errors and improve the accuracy of data submitted by operators. For example, under PHMSA's internal data management procedures, officials stated that they review all incident report submissions from operators on a monthly basis to ensure they are complete and to identify outlier data entries. PHMSA officials also said they review a spreadsheet of all unit data that can be confirmed in operator-submitted annual reports and compare data entries with those submitted by operators in prior years to detect data anomalies.

Although PHMSA has taken some steps to improve data quality, PHMSA officials identified other limitations with their current data collection that hinder their ability to enhance the data used in RRIM. Officials stated that RRIM does not include all the threat factors they would like to use because they have not collected certain data at the unit level, which is the level at which the model calculates risk scores. For example, officials said they would like to use data on maximum allowable operating pressure as a threat factor for RRIM, but PHMSA's current data on operating pressure are collected in aggregate at the state level for each operator and not at the unit level. In addition, officials noted that the data do not allow PHMSA to identify the precise location of threats such as ineffective external coating or certain types of welding associated with corrosion, and therefore PHMSA cannot determine whether threat factors are co-located and potentially correlated. To help address this limitation, PHMSA has developed a form that enables inspectors to systematically collect additional data at the unit level during an inspection. While this approach could improve the quality of data in RRIM, these data will not be immediately available, since the inspection systems are inspected every 3, 5, or 7 years based on their risk score.

To address these limitations, PHMSA has sought to expand its data collection through the National Pipeline Mapping System, a geographic information system managed by PHMSA, which officials said would strengthen the accuracy and precision of the data used in RRIM. In June 2016, PHMSA issued a public notice to expand its information collection authorities to collect pipeline data from operators at a positional accuracy of approximately 100 feet, which is significantly more precise than the pipeline unit, which in 2017 averaged over 200 miles in length. This data collection would include threats associated with corrosion prevention, such as whether the pipe is externally coated and how the pipe was welded, among other pipeline characteristics. However, in its March 2017 decision memo, OMB declined PHMSA's proposal to collect these additional data, but the memo did not provide a reason for this decision.
PHMSA officials said they are evaluating their next steps and plan to propose a revision to their data collection that does not impose excessive burden on stakeholders before PHMSA's current data collection authority expires in 2020. More broadly, as part of its strategic plan, PHMSA is taking steps to better align its organizational structure with the need for a consistent approach to how it collects, manages, and uses data. In 2016, PHMSA established the Office of Planning and Analytics, whose mission is to support a data-driven approach to PHMSA's oversight by leading strategic planning and analytical projects. According to PHMSA, the establishment of the Office of Planning and Analytics will support PHMSA's efforts to become a data-driven and risk-based safety agency.

Although PHMSA has taken steps to improve the quality of data used in RRIM, PHMSA did not document key decisions and the rationale used to design RRIM. Specifically, in designing RRIM, PHMSA did not document its rationale for the selection of threat factors and their associated weights, the thresholds for risk tiers, or the frequency of inspection associated with each risk tier. Standards for Internal Controls in the Federal Government states that documentation is necessary to demonstrate the design, implementation, and operating effectiveness of a program. Additionally, OMB's risk analysis principles state that agency risk analyses should be based upon the best available scientific methodologies, information, data, and weight of the available scientific evidence, and that the rationale for the judgments used in developing a risk assessment should be stated explicitly. PHMSA officials said they used professional judgment to select threat factors, to determine their associated weights, and to establish the risk tiers and inspection frequency, but they did not document the rationale or justification for their decisions, including how, if at all, they used data as part of developing this approach.

Selection of threat factors and weights: PHMSA officials said that certain threat factors, such as mileage of bare pipe and ineffective coating, are generally known in the pipeline industry as problematic, but officials did not document their decisions for how they determined the selected weights or for how, if at all, they used data to develop the values for the weights. The officials said they conducted sensitivity analyses to calibrate the threat factor weights when they designed RRIM in 2012, but did not document these analyses.

Thresholds for risk tiers: Similarly, PHMSA did not document how it established the risk tiers and inspection frequency, or the rationale for those decisions. PHMSA officials consider any inspection system with a score of 30 or more as high risk, more than 5 but less than 30 as medium risk, and less than 5 as low risk. PHMSA officials said the thresholds for the risk tiers were determined based on their professional judgment that 25 percent of inspection systems should be considered high risk, 50 percent medium risk, and 25 percent low risk to ensure a relatively consistent workload across regions. Officials said they determined inspection frequencies of 3, 5, and 7 years based on professional judgment, noting that each inspection system should be inspected at least once every 7 years and that the highest risk systems did not require inspection more than once every 3 years.
However, PHMSA did not document the rationale for the decisions made or how, if at all, data informed their decision-making process. PHMSA officials told us that although they did not document these decisions, which were made based on their professional judgment, they solicit and receive feedback from PHMSA inspectors each year on the list of inspection systems generated by RRIM, a step that serves as a check on RRIM’s effectiveness. However, without documentation, the rationale for key decisions and assumptions made as part of designing and implementing RRIM is unclear. For example, RRIM’s design places a greater relative weight on longer pipeline units, assuming that longer pipeline segments have greater relative risk than shorter units. In 2016, the average length of a high risk inspection system was 1,841 miles; the average length of a medium risk inspection system was 358 miles, and the average length of a low-risk system was 49 miles. Moreover, in 2016 RRIM assigned approximately 1 percent of all pipeline miles inspected by PHMSA as low risk (7-year inspection cycle) and more than 70 percent as high risk (3-year inspection cycle). While this generally results in more frequent inspections for longer pipeline systems—which may be desirable from PHMSA’s perspective—without documentation, the rationale for the chosen mileage weighting and the assumed risk of this factor relative to other factors is unclear. In addition to a lack of documentation, PHMSA lacks a process that uses data to assess the ongoing effectiveness of RRIM and validate that it appropriately prioritizes inspections. Leading management practices and principles have highlighted the importance of periodic review and evaluation of risk management approaches. Specifically, OMB’s risk management principles state that the risk management process must be subjected to regular review to assess potential changes in risks, their likelihood and impact, and deliver assurance that the risk management process remains appropriate and effective. In addition, Standards for Internal Controls in the Federal Government states that management should use quality information to make informed decisions and evaluate program performance in achieving key objectives and addressing risks. Similarly, PHMSA’s strategic plan includes an objective to use data more effectively to improve its risk-based approach to inspection. PHMSA officials said they have not established a data-driven process to assess the effectiveness of RRIM because they believe that it has been an effective tool for prioritizing inspections for its staff. The officials said that they conducted sensitivity analyses to calibrate the weights in RRIM when they first designed it in 2012, but did not document those analyses, and they have made periodic changes to RRIM, such as adding threat factors or adjusting weights, based on their professional judgment. However, it is unclear what impact these changes have had on the effectiveness of RRIM, as PHMSA officials could not provide documentation of any analyses to support why these changes were made and their impact on RRIM’s results. PHMSA officials noted that the Office of Planning and Analytics has begun evaluating potential strategies to improve RRIM’s risk-modeling capabilities, and to examine how the officials could use existing data to validate RRIM. They also noted that they have recently transitioned RRIM to an operating system that will allow PHMSA to make adjustments more frequently as data are collected and updated. 
However, officials further noted that these activities are in their initial stages, and that PHMSA has not yet developed tangible steps to assess RRIM. Without a process that uses data to assess the effectiveness of RRIM, PHMSA is unable to demonstrate the validity of RRIM and whether it is effectively prioritizing pipelines for inspection. For example, Standards for Internal Controls in the Federal Government notes that activities such as comparing actual performance to planned or expected results and analyzing significant differences can help organizations achieve objectives. In the context of RRIM, such an analysis could compare the characteristics of pipeline segments involved in recent incidents to pipeline segments assigned to each risk tier by RRIM. This analysis could provide a basis for PHMSA to assess the validity of the threat factors and weighting and to make adjustments over time that would improve RRIM. However, without a process that includes these types of activities, PHMSA lacks assurance that RRIM prioritizes inspections effectively and that its inspection approach maximizes safety benefits to the public.

While pipelines are a relatively safe mode of transporting inherently dangerous materials, an incident can pose a profound threat to life, property, and the environment. Though individual pipeline operators deploy a variety of materials and corrosion prevention technologies to guard against these threats, PHMSA has an important role in overseeing operator actions to ensure pipeline safety. Moreover, as PHMSA acknowledges, the size and diversity of the nation's 2.7-million-mile pipeline network necessitates a risk-based approach to oversight. However, because PHMSA has not documented the basis for the design and key decisions of RRIM and has not formally evaluated its effectiveness at prioritizing pipelines for inspection, it is unclear how effectively the model has helped PHMSA manage its inspection resources or maximize safety benefits to the public. Federal management practices and principles identify the need to document decisions, to use the best available data and information to drive decisions, and to periodically assess key management activities. In the context of RRIM, these actions are complementary, as documentation of its design could serve as a baseline for a data-driven evaluation of its effectiveness and a review of whether the assumptions and decisions made as part of the design are valid. Such an evaluation could complement the analytical projects planned by PHMSA's Office of Planning and Analytics to support a data-driven approach to PHMSA's oversight. Furthermore, these actions could help PHMSA refine and improve its proposal for more specific data collection through the National Pipeline Mapping System before PHMSA's current data collection authority expires in 2020 and help PHMSA make progress toward its goal of becoming a more data-driven and risk-based safety agency.

To assess and validate the effectiveness of PHMSA's RRIM in prioritizing pipelines for inspection, we recommend that the Secretary of Transportation direct the Administrator of PHMSA to take the following two actions:

Document the decisions and underlying assumptions for the design of RRIM, including what data and information were analyzed as part of determining each component of the model, such as the threat factors, weights, risk tiers, and inspection frequency.
Establish and implement a process that uses data to periodically review and assess the effectiveness of the model in prioritizing pipelines for inspection, and document the results of these analyses.

We provided the Department of Transportation and NTSB with a draft of this report for review and comment. In its comments, reproduced in appendix III, the Department of Transportation concurred with our recommendations. The Department of Transportation and NTSB also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to relevant congressional committees, the Secretary of Transportation, the Chair of NTSB, and other interested parties. In addition, this report will be available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or flemings@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

The Protecting our Infrastructure of Pipelines and Enhancing Safety Act of 2016 included a provision for GAO to consult with stakeholders to gather information on the range of pipeline materials used in the United States and other developed countries and the effectiveness of corrosion control techniques. This appendix provides perspectives obtained from interviews with a nongeneralizable sample of eight pipeline operators and eight additional stakeholders with expertise on pipeline materials and corrosion (expert stakeholders) on (1) the use of pipeline materials and corrosion prevention technologies internationally, and (2) the potential improvements in pipeline materials and corrosion prevention technologies.

Operators and expert stakeholders we interviewed noted few differences between the types of pipeline materials and corrosion technologies used in the United States' gas and hazardous liquid network compared to their counterparts in Canada and European Union countries. Specifically, a majority of operators and expert stakeholders stated that the use of steel, plastic, composites, and other materials in pipelines in the United States is very similar to their use in Canada and the European Union, though these expert stakeholders and operators noted some minor differences in the use of these materials. For example, in comparison with the United States, three expert stakeholders noted that Canada may have wider use of higher grade steel—to provide strength for pipelines to operate at high operating pressures but with a thin pipeline wall—and one expert stakeholder said the European Union uses plastic and composites more widely in its network. A few of these expert stakeholders attributed these differences to less conservative design standards in Canada and the European Union than in the United States. Similarly, most of the operators and expert stakeholders we interviewed stated that similar corrosion prevention technologies used in the United States are commonly used throughout Canada and the European Union. For example, operators and expert stakeholders stated that coatings are commonly used to prevent external corrosion in the United States, Canada, and the European Union, though two operators and expert stakeholders said European Union countries often make more widespread use of a three-layer polyethylene coating than is used in the United States.
In addition, two operators and expert stakeholders said the European Union’s regulatory approach is more flexible at accommodating new technologies than the approach taken in the United States. Operators and expert stakeholders were divided on the extent of future improvements in materials. For example, several operators and expert stakeholders stated that they anticipate pipeline operators will increase the use and development of higher grade steel. These operators and expert stakeholders noted these changes could include minor improvements in steel manufacturing to produce more widespread use of higher strength steel, but noted that pipelines manufactured with higher grade steels often have thinner walls and are less resistant to third-party damage. Additionally, a few operators and expert stakeholders stated that they anticipate further development of plastics and increased use of plastics across the pipeline network. For example, one expert stated that he expects the use of polyamide plastics, which are currently used in less than 1 percent of distribution pipelines, to be more widely used and at higher pressures in the future. In contrast, several operators and expert stakeholders stated that they anticipate few changes in pipeline materials in the next 10 years, noting that current materials (i.e., steel and plastic) are well known by the pipeline industry and have been successful in addressing corrosion challenges. Similarly, operators and expert stakeholders had mixed opinions on whether there would be significant improvements in corrosion prevention technologies over the coming years. For example, several operators and expert stakeholders characterized coatings and cathodic protection as mature technologies and did not foresee significant further development in the next 10 years. In contrast, other operators and expert stakeholders stated that they expected that improvements in monitoring, data collection and analysis might help operators improve efforts to combat internal corrosion. For example, these operators and expert stakeholders anticipate greater use of automatic monitoring technology to provide continual information on pipeline conditions, and improved integration of cathodic protection data with other monitoring efforts. Operators and expert stakeholders also identified improved coating technology as a potential area for advancement, with one expert stakeholder noting that nanotechnology may be used to develop self-repairing coatings that do not require operators to excavate pipelines to repair. The objectives of this report were to determine (1) the pipeline materials and corrosion prevention technologies that are used in the gas and hazardous liquid pipeline network and their respective benefits and limitations; (2) how selected pipeline operators train personnel to manage corrosion and the challenges that exist in ensuring personnel are qualified, and (3) how the Pipeline and Hazardous Materials Safety Administration (PHMSA) uses data on pipelines and corrosion prevention to inform its inspection priorities. In addition to the methodology described below, for each of these objectives, we reviewed pertinent PHMSA regulations, documents, and interviewed PHMSA headquarters officials. 
This report also includes information on the use of pipeline materials and corrosion prevention technologies outside the United States as well as potential future improvements in materials and corrosion prevention technologies, based on interviews with the selected operators and expert stakeholders. (See app. I.) To determine what pipeline materials and corrosion prevention technologies are used to transport hazardous liquids and benefits and limitations, we analyzed the most recent full-year data (calendar years 2010–2015) on pipeline characteristics, including data on mileage, geography, materials transported, and pipeline materials and corrosion prevention technologies reported to PHMSA by pipeline operators. We also analyzed PHMSA data on the cause of pipeline incidents from 2010– 2015. We did not review data prior to 2010 due to a significant change in PHMSA’s reporting requirements in 2009 that PHMSA officials noted limits comparability of the data collected prior to that change. We did not review operator-reported data after 2015 because operator data from 2016 on pipeline characteristics was not expected to be finalized until June 2017, according to PHMSA. We assessed the data’s reliability by reviewing PHMSA reports, analyzing the data to identify any outlier values, and interviewing PHMSA officials. We found the data to be sufficiently reliable for the purposes of answering this objective. We also reviewed Department of Transportation reports and academic literature on the benefits and limitations of various materials and corrosion prevention technologies. We also interviewed a nongeneralizable sample of eight pipeline operators and eight additional stakeholders with expertise on pipeline materials and corrosion (expert stakeholders). (See table 7.) These expert stakeholders represented entities including consultants, research organizations, and other organizations. To obtain a diverse set of viewpoints from pipeline operators, we selected operators based on descriptive data collected by PHMSA on an operator’s pipeline network function (gathering, transmission, or distribution); the types of materials transported by the operator’s network; the size (miles of pipeline) of an operator’s pipeline network and the geographic dispersion; and recommendations from other stakeholders. We identified an initial pool of expert stakeholders by reviewing academic literature, prior GAO work, and trade publications, and based on recommendations made by officials we interviewed from the National Transportation Safety Board, staff from industry trade associations, and other industry stakeholders. We selected expert stakeholders based on their knowledge of pipeline materials and corrosion, as determined from a review of their professional qualifications and experience or position related to these topics, as well as recommendations of other stakeholders. To verify their expertise, we obtained a curricula vitae, resume, or other biographical information, and confirmed their qualifications during the interview. We also asked pipeline operators and expert stakeholders for recommendations on other expert stakeholders during our interviews with them. Our goal in talking to these operators and expert stakeholders was to collect a diverse set of perspectives on our questions and in doing so, there were operators and expert stakeholders we included because of their viewpoint to provide overall balance to the nonprobability, nongeneralizable sample. 
To mitigate any potential biases in our sample, we selected individuals with significant relevant experience or knowledge who represented a range of alternative perspectives. We assessed this criterion by reviewing the information used to confirm their qualifications and verifying that the individuals have the expertise to participate in our sample. Prior to conducting these interviews, we conducted two pretests to obtain feedback on the questions. We conducted a semi-structured interview with each operator and expert stakeholder and asked each stakeholder the same set of questions. Because broad agreement existed across the operators and expert stakeholders for many of these topics and our sample was non-generalizable, we used indefinite quantifiers to describe the responses. (See table 8.) The views provided by pipeline operators and these expert stakeholders cannot be generalized across all pipeline operators or expert stakeholders on these topics, but do provide perspectives on the benefits, limitations, factors affecting costs and other aspects of the pipeline materials and corrosion prevention technologies discussed by these stakeholders. Furthermore, we did not attempt to identify all pipeline materials or all corrosion technologies. Rather, this information was obtained to provide a variety of perspectives on topics related to pipeline materials and corrosion and relevant to our objective. To analyze how selected operators train personnel to manage corrosion and ensure that personnel were qualified, we reviewed PHMSA regulations and proposed changes to those regulations requiring that pipeline operator personnel are qualified for operational and maintenance tasks, including corrosion prevention activities. We reviewed pipeline operators’ training plans and other documentation. We interviewed staff from 17 stakeholders: 8 pipeline operators, the same as selected above; 3 unions: the International Union of Operating Engineers, the Laborers’ International Union of North America, and the Utility Workers Union of America; 3 training providers: the American Society of Mechanical Engineers, the National Association of Corrosion Engineers, and Veriforce; and 3 industry trade associations: the American Gas Association, the American Petroleum Institute, and the Interstate Natural Gas Association of America. These stakeholders were selected to provide a range of views on approaches, common practices, and challenges associated with corrosion training and operator qualification; however, these views are not generalizable across all industry stakeholders. To determine how PHMSA uses data on pipelines and corrosion to inform its inspection priorities, we analyzed and assessed the reliability of the most recent PHMSA inspection and enforcement data (calendar years 2014–2016) on pipeline materials and corrosion prevention technologies. To assess the reliability of the data used for this objective, we reviewed PHMSA and Department of Transportation Office of Inspector General reports and PHMSA documentation, analyzed the data to identify any outlier values and interviewed PHMSA officials. We also reviewed the Oak Ridge National Laboratory’s assessment of PHMSA’s data management and analysis capabilities and challenges. We also interviewed PHMSA officials about how the data were collected, stored and validated. We determined that the data were sufficiently reliable for the purposes of addressing this objective. 
We evaluated PHMSA’s use of this data in its risk-ranking index model as part of its effort to rank the relative risk of pipelines and prioritize its annual inspections of pipeline operators using these rankings. We compared this approach to criteria identified in GAO’s Standards for Internal Controls in the Federal Government, criteria for risk analysis developed by the Office of Management and Budget (OMB), and PHMSA’s strategic objectives. In addition, we reviewed Department of Transportation reports on PHMSA’s risk management models and approaches and interviewed former PHMSA officials about these topics. We also interviewed staff from each of PHMSA’s five regional offices, which are responsible for conducting inspections of pipeline operator operations, and conducted a group interview of state officials from all nine interstate agents to understand how inspection data is collected and used to inform PHSMA’s oversight. We conducted this performance audit from July 2016 to August 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Matt Barranca (Assistant Director), Matt Voit (Analyst in Charge), Katrina Ballard, David Blanding, Melissa Bodeau, David Hooper, Katrina Pekar-Carpenter, Adam Peterson, Malika Rice, Jim Russell, and Jack Wang made key contributions to this report. | The U.S. energy pipeline network is composed of over 2.7-million miles of pipelines transporting gas and hazardous liquids. While pipelines are a relatively safe mode of transportation, incidents caused by material failures and corrosion may result in fatalities and environmental damage. PHMSA, an agency within the Department of Transportation, inspects pipeline operators and oversees safety regulations. 2016 pipeline safety legislation included a provision for GAO to examine a variety of topics related to pipeline materials and corrosion. This report addresses: (1) the materials and corrosion-prevention technologies used in the pipeline network and their benefits and limitations and (2) how PHMSA uses data on pipelines and corrosion to inform inspection priorities, among other topics. GAO analyzed PHMSA's 2010–2016 data; reviewed PHMSA regulations; and interviewed PHMSA officials and representatives of nine states selected based on pipeline inspection roles, eight pipeline operators—providing a range of sizes, geographic locations, and other factors—and eight stakeholders selected for expertise on pipeline and corrosion issues. The U.S. gas and hazardous liquid pipeline network is constructed primarily of steel and plastic pipes, both of which offer benefits and limitations that present trade-offs to pipeline operators, as do corrosion prevention technology options. According to data from the Pipeline and Hazardous Materials Safety Administration (PHMSA), over 98 percent of federally regulated pipelines that gather natural gas and other gases and hazardous liquid products, such as oil, and transmit those products across long distances are made of steel. An increasing majority of pipelines that distribute natural gas to homes and businesses are made of plastics. 
Steel pipelines are manufactured in various grades to accommodate higher operating pressures, but require corrosion protection and cost more than plastics, according to operators and experts. In contrast, plastics and emerging composite materials generally are corrosion-resistant, but lack the strength to accommodate high operating pressures. Operators use a range of technologies to protect steel pipes from corrosion, including applying coatings and cathodic protection, which applies an electrical current to the pipe. While such technologies are generally considered effective, operators and experts stated that coatings degrade over time and that cathodic protection requires ongoing maintenance and costs to deliver the current over long pipeline distances, among other considerations.

PHMSA uses materials and corrosion data collected from operators in its Risk Ranking Index Model to determine the frequency of PHMSA's inspections of operators based on threats, such as ineffective coatings, to pipeline integrity. PHMSA officials said they used professional judgment to develop their model, but did not document key decisions for (1) the threat factors selected, (2) their associated weights, or (3) the thresholds for high, medium, and low risk tiers for pipeline segments inspected by PHMSA. Moreover, PHMSA has not used data to assess its model's overall effectiveness, as would be consistent with federal management principles. PHMSA officials said they have not established an evaluation process because they consider the model to be effective in prioritizing inspections. Although PHMSA officials said they analyzed the model when they developed it in 2012, they have not done so since that time and did not document the results of this initial analysis. Without documentation and a data-driven evaluation process, PHMSA cannot demonstrate the effectiveness of the model it uses to allocate PHMSA's limited inspection resources. GAO recommends that PHMSA document the design of its Risk Ranking Index Model and implement a process that uses data to periodically assess the model's effectiveness. The Department of Transportation agreed with our recommendations and provided technical comments, which we incorporated as appropriate.
VETS administers national programs intended to (1) ensure that veterans receive priority in employment and training opportunities from the employment service; (2) assist veterans, reservists, and National Guard members in securing employment; and (3) protect veterans' employment rights and benefits. The key elements of VETS' services include enforcing veterans' preference and reemployment rights and securing employment and training services. VETS' programs are among those federal programs whose services have been affected by WIA and other legislative changes aimed at streamlining services and holding programs accountable for their results.

VETS carries out its responsibilities through a nationwide network that includes representation in each of the Department of Labor's 10 regions and staff in each state. The Office of the Assistant Secretary for Veterans' Employment and Training administers VETS' activities through regional administrators and a VETS director in each state. These federally paid VETS staff are the link between VETS and the states' employment service system, which is overseen by Labor's Employment and Training Administration (ETA). VETS funds two primary veterans' employment assistance grants to states—the Disabled Veterans' Outreach Program (DVOP) and the Local Veterans' Employment Representatives (LVER). The fiscal year 2001 appropriation for VETS was about $183 million, including $81.6 million for DVOP specialists (DVOPS) and $77.2 million for LVER staff. These funds paid for 1,327 DVOP positions and 1,206 LVER positions.

The roles of the DVOPS and LVERs have been separately defined in two statutes. LVERs were first authorized under the original GI bill (the Servicemen's Readjustment Act of 1944), and DVOP specialists were authorized by the Veterans' Rehabilitation and Education Amendments of 1980. A key responsibility of a DVOP is to develop job and job training opportunities for veterans through contacts with employers, especially small- and medium-size private sector employers. LVERs are to provide program oversight of local employment service offices to ensure that veterans receive maximum employment and training opportunities from the entire local office staff. In addition, DVOPS and LVERs traditionally have provided services that include locating veterans who need services, networking in the community for employment and training programs, bringing together veterans looking for work and employers seeking workers, making referrals to support services, and providing case management for those veterans in need of more intensive services. Increasingly, however, veterans are accessing services on their own, through tools such as internet-based job listings or resume writing software.

As part of the DVOP and LVER grant agreements, states must provide or ensure that veterans receive priority at every point where public employment and training services are available. The DVOP and LVER programs give priority to the needs of disabled veterans and veterans who served during the Vietnam era. States' employment service systems are expected to give priority to veterans over nonveterans. Generally, this means that local employment offices are to offer or provide all services to veterans before offering or providing those services to nonveterans.
To monitor the states’ programs, VETS has been using a set of measures that evaluates states’ performance in five dimensions: (1) veterans placed in training, (2) those receiving counseling, (3) those receiving services, (4) those entering employment, and (5) those obtaining federal contractor jobs. These measures primarily count the number of services that veterans receive and compare the totals with similar services provided to nonveterans. To ensure priority service to veterans, VETS expects levels of performance for services provided to veterans to be higher than levels for nonveterans. For example, veterans and other eligibles must be placed in or obtain employment at a rate 15 percent higher than that achieved by nonveterans. (See table 1 for VETS’ specific performance standards.) To report on performance, VETS currently relies on the Employment and Training Administration’s 9002 system to aggregate data reported by states on veterans and nonveterans who register with state Employment Services (ES) offices, track the services provided to them (such as counseling or job referral), and gather information on their employment outcomes. The 9002 system also collects information such as the registrants’ employment status, level of education (e.g., high school, postsecondary degree/certificate), and basic demographic information, such as age and race. Over the past several years, the Congress has taken steps to streamline and integrate services provided by federally funded employment and training programs. WIA, which the Congress passed in 1998, requires states and localities to use a one-stop center structure to provide access to most employment and training services in a single location. WIA requires about 17 categories of programs, including VETS and ES programs, to provide services through the one-stop center. However, because DVOP and LVER staff can provide assistance only to veterans, and because their roles in one-stop centers are not specifically addressed in WIA, it is unclear how they will function with regard to one-stop centers. According to VETS officials, this lack of clarity has been addressed. Agreements made with each state on planned services to veterans now include provisions on how DVOPS and LVERs will be integrated into the one-stop delivery system. In addition to changing the way services are provided, programs are now increasingly held accountable for their results. Through the Government Performance and Results Act of 1993 (GPRA), the Congress seeks to improve the efficiency, effectiveness, and public accountability of federal agencies as well as improve congressional decision making. GPRA does so, in part, by promoting a focus on what the program achieves rather than tracking program activities. GPRA outlines a series of steps in which agencies are required to identify their goals, measure performance, and report on the degree to which those goals were met. Executive branch agencies were required to submit the first of their strategic plans to the Office of Management and Budget and the Congress in September 1997. Although not required by GPRA, Labor’s component agencies, such as VETS, have prepared their own strategic and performance plans at the direction of the Secretary of Labor. To address the goals of GPRA and in response to recommendations by us and other groups, such as the Congressional Commission on Servicemembers and Veterans Transition Assistance, VETS is currently developing a new system to measure the performance of its programs. 
Over the last several years, VETS conducted pilot programs in about eight states that tested some new performance measures and the use of new data to support these measures. VETS officials told us that they anticipate implementing their new performance measurement system in July 2001. VETS' proposed performance measures are a significant improvement over current measures, but certain aspects of these measures raise concerns that VETS may need to address. The proposed measures include (1) an entered-employment rate, (2) an employment rate following staff-assisted services, (3) an employment retention rate, and (4) an increase in the number of federal contractor job openings listed. These measures are an improvement over current measures because they focus more on what the programs achieve and less on the number of services they provide, no longer use the level of services provided to nonveterans as the standard for services that must be provided to veterans, adjust expected state performance to economic conditions within each state, and establish two measures that are already collected for WIA-funded services and proposed for ES. However, even with these improvements, the proposed measures continue to send a mixed message to staff about where to place their service priorities. In addition, the proposed measures include a redefined measure for tracking federal contractor job openings, but the measure is process-oriented and outside the scope of the work of DVOPS and LVERs.

The proposed performance measures improve accountability because they place more emphasis on employment-related outcomes by eliminating process-oriented measures—measures that simply track services provided to veterans. Current process measures that VETS eliminated from the proposed performance system include the number of veterans referred to counseling, the number placed in training, and the number receiving certain other services, such as job referrals. As we noted in past reports, these process-oriented measures are activity- and volume-driven and focus efforts on the number of services provided, not on the outcomes veterans achieve. These measures offer states little incentive to provide services to those veterans who are only marginally prepared for work and who may need more intensive services requiring more staff time. VETS' proposal still includes one process-oriented measure that simply reflects the percentage increase in the number of federal contractor job openings listed with the public labor exchange but adds two outcome-oriented measures—job retention after 6 months and the employment rate following staff-assisted services. VETS' proposal also retains an outcome measure that is in the current system—the entered-employment rate. (See table 2.)

The proposed performance measures also improve the way VETS establishes the level of performance that states are expected to achieve. VETS no longer requires states to compare the level of services provided to veterans with those provided to nonveterans. In past reports, we have pointed out that the use of these relative standards results in states with poor levels of service to nonveterans being held to lower standards for service to veterans than states with better overall performance. For example, in program year 1999, Rhode Island reported an entered-employment rate of 5.49 percent for nonveterans.
Because VETS requires states to ensure that they achieve an entered-employment rate for veterans that is 15 percent higher than that for nonveterans, Rhode Island’s 1999 expected performance level was 6.32 percent of registered veterans entering employment–a low level of performance. Under the proposed system, VETS will negotiate performance levels annually with each state based on that state’s past performance, using guidelines similar to those used for WIA. VETS will also be able to adjust these levels based on economic conditions within each state, such as the unemployment rate, the rate of job creation or loss, or other factors. The proposed performance measures are also similar to those established under WIA, making it easier for service providers to achieve WIA’s goal of integrating and streamlining employment and training services. In the current environment, many of the programs that provide services through the one-stop centers have their own unique performance measures and program definitions, requiring multiple systems and multiple data collection efforts to track a single client. In the proposed system, VETS has made an effort to align its performance measures with those of WIA. In fact, two of the five proposed measures—entered-employment rate and employment retention—are nearly identical to WIA’s and to those proposed for ES. If VETS aligns the measures with those of WIA and ES, local offices will be more readily able to establish integrated data systems that will minimize the data collection burden on service providers and clients. (See app. I for a comparison of the WIA performance measures with those proposed for VETS and ES.) While the proposed performance measures are an improvement over those currently in place, there are issues with these measures that VETS should address. First, a comparison of the performance measures with the strategic plan indicates that VETS is sending a mixed message to states about what services to provide and to whom. The strategic plan suggests that states focus their efforts on providing staff-assisted services to veterans, including case management. Yet, none of the proposed measures specifically gauges whether more staff-intensive services are helping veterans get jobs. VETS’ proposal includes a measure that tracks employment outcomes following staff-assisted services. However, this measure is broadly defined, and the list of staff-assisted services includes nearly all services provided to veterans. This makes the outcomes achieved for the staff-assisted measure nearly identical to those reported for the more general “entered-employment rate.” In addition, as VETS has defined it, staff-assisted services include many services that might not be considered “intensive,” such as referral to a job and job search activities. Because the definition is so broadly defined, a veteran who only attended a job search workshop would be counted the same as a veteran who received more intensive services, such as testing and employability planning. Both would be counted in the more general entered-employment rate measure, as well as the staff-assisted service measure. A stricter definition for staff-assisted services that includes only those services that are generally considered staff-intensive would allow VETS to more accurately assess the success of those services and help to clarify the goals of the program. Second, VETS is sending a mixed message about which groups of veterans to target for services. 
As we noted in past reports and testimonies, VETS has inconsistently identified various “targeted” groups of veterans it plans to help. In its strategic plan, VETS identifies two broad veteran groups that should be targeted to receive special attention—(1) disabled veterans and (2) all veterans and other eligible persons. And consistent with this, VETS proposes that expected performance levels be negotiated separately for each of these same two groups. Yet, the strategic plan also suggests that, when providing services to all veterans, special attention should be given to meeting the needs of certain other target groups, some of which might require more intensive services to become employed. The groups targeted for special attention include (1) veterans who have significant barriers to employment, (2) veterans who served on active duty during a war (or campaign or expedition in which a campaign badge has been authorized), and (3) veterans recently separated from military service. In reviewing VETS’ proposed measures and the plan for negotiating performance levels, staff may be confused as to where they should place their service priorities. It is unclear what steps VETS will take to ensure that DVOPS and LVERs are provided ample opportunity and encouragement to focus attention on the portion of the “all veterans” group who may require more staff time to be successful in getting a job. Last, VETS’ proposal also continues to include a performance measure related to federal contractor job openings listed with the state’s ES office. However, in its proposal, VETS has changed the measure. Under the current system, VETS tracks the number of Vietnam-era and special disabled veterans who were placed in jobs listed by federal contractors— an outcome measure. Now, under the proposed system, VETS will track the increase in the number of federal contractor jobs listed with the state’s ES office—a process-oriented measure. This new measure ultimately holds DVOPS and LVERs accountable for the number of federal contractors in a given state or local area, not for veteran placements with those contractors. The presence of federal contractors in a given state or local area is unpredictable and is determined by the federal agencies awarding contracts. Furthermore, according to state officials that we talked with, the federal contractor measure should be eliminated altogether because it is the responsibility of contractors to list their job openings. In addition, it is the Office of Federal Contract Compliance that is responsible for ensuring that all companies conducting business with the federal government list their jobs with state ES offices and take affirmative action to hire qualified veterans. The proposed data for the new measures will greatly improve the comparability and reliability of these measures, but this change will bring some challenges that VETS will need to address. Consistent with WIA and ES, VETS is proposing that all states use UI wage records to identify veterans who get jobs. UI wage records contain the earnings of each employee reported quarterly by employers to state UI agencies.Currently, the data VETS uses are not comparable across states, in part, because states use different data sources to report employment-related outcomes. Using a single, standardized source for collecting data will improve VETS’ ability to compare performance across states. 
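To make this concrete, the sketch below shows one way a state could compute an entered-employment rate directly from quarterly UI wage records. It is an illustrative assumption rather than a VETS or state specification: the field names, the two-quarter follow-up window, and the sample figures are hypothetical.

```python
# Illustrative sketch only: computing an entered-employment rate by matching
# registrants against quarterly UI wage records. Field names, the follow-up
# window, and the sample data are assumptions, not VETS or state definitions.

def entered_employment_rate(registrants, wage_records, quarters_after=2):
    """Share of registrants with UI-reported wages in any of the
    specified quarters after their registration quarter."""
    # Index wage records by (id, quarter) for fast lookup.
    wages = {(w["id"], w["quarter"]) for w in wage_records if w["wages"] > 0}
    employed = 0
    for person in registrants:
        reg_quarter = person["registration_quarter"]
        # Count the registrant as employed if wages appear in, for example,
        # the 1st or 2nd quarter after registration (quarters numbered consecutively).
        if any((person["id"], reg_quarter + k) in wages
               for k in range(1, quarters_after + 1)):
            employed += 1
    return employed / len(registrants) if registrants else 0.0

# Hypothetical data: three registered veterans, two later appear in wage records.
registrants = [
    {"id": "A", "registration_quarter": 10},
    {"id": "B", "registration_quarter": 10},
    {"id": "C", "registration_quarter": 11},
]
wage_records = [
    {"id": "A", "quarter": 11, "wages": 5200.0},
    {"id": "C", "quarter": 12, "wages": 4100.0},
]
print(entered_employment_rate(registrants, wage_records))  # 2 of 3, about 0.67
```

A match of this kind, however, can only see wages reported to that state’s UI agency, which is why the interstate and coverage limitations discussed below matter for the measure.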
UI wage records will also provide state officials with a better means to identify veterans who get jobs than does the traditional follow-up method of telephoning veterans and/or employers to verify employment. However, states cannot readily access wage records from other states, wage records do not cover certain types of employment, and these data are not available until 3 to 9 months after an individual gets a job. Using a single data source will help to standardize the way in which states collect data on veterans, thereby making it easier to compare performance across states. Currently, states are using various data sources for performance-reporting purposes. While almost all of the states in our review used a combination of data sources to determine whether or not a veteran got a job, most of the states relied substantially on one data source, but that source differed among states. For example, in program year 1999 7 of the 15 states that we contacted relied to a large extent on wage record data to determine whether a veteran got a job or not; 7 others relied, for the most part, on telephone calls and letters to veterans and employers to determine a veteran’s employment status; and one state relied primarily on its new hire database for employment data. In addition to making state data more comparable, we found evidence that states currently using wage records have been able to better identify those veterans who get jobs after receiving services. A recent study found that UI wage records more accurately identified how many veterans got jobs after receiving DVOP, LVER, or ES services. Using UI wage records, this study tracked veterans who registered with the Maryland Job Service during program year 1997 and found an entered-employment rate that ranged from 65 percent to 82 percent, depending on the way the study defined a registrant. In that same program year, Maryland reported to VETS an entered-employment rate of 31 percent, which was based on staff telephoning veterans and employers to verify employment. In addition, most states in our review that are now using UI wage records, either as their primary data source or to augment other data sources, reported higher employment rates in program year 1999 for veterans they served than that year’s national average of 30 percent. (See app. II for a list of all states and their respective entered-employment rates for program years 1996-1999.) By comparison, all but one of the states that relied either on manual follow-up or the new hire database reported an employment rate below the national average. Another benefit of using UI wage records is that staff assisting veterans will be relying on data already available rather than collecting additional information from veterans or employers. Relying on these already reported data would require less staff time from DVOP, LVER, and ES staff, freeing them to focus more on providing job-related services to veterans. State officials told us that relying on manual follow-up, such as telephone calls, has been labor-intensive and has diverted staff attention away from providing appropriate assistance to veterans. While UI wage records offer advantages over the current data collection system, some challenges need to be addressed. First, states should find ways to identify interstate job placements. 
Because the UI wage record system resides within each state, states generally do not have access to wage records from other states, making it difficult to track individuals who receive services in one state but get a job in another. Currently, there is no national system in place that facilitates data sharing among states. However, in response to WIA requirements, states are developing an interstate UI wage record information sharing system, known as the Wage Record Interchange System (WRIS). The system is designed to minimize the burden on state unemployment insurance programs in responding to requests for wage record data, to ensure the security of the transactions involving individual wage records, and to produce the results at a low cost per record. In addition, some states have entered into agreements with neighboring states to share wage information in support of WIA. These efforts should help VETS as well. Second, states should find ways to identify those veterans finding jobs in categories not covered by UI wage records. UI wage records cover about 94 percent of wage and salary workers, but certain employment categories are not covered, such as self-employed persons, most independent contractors, military personnel, federal government workers, railroad employees, some part-time employees of nonprofit institutions, and employees of religious orders. Therefore, the UI system will not be able to track and count veterans who get these types of jobs. This is an issue for WIA as well, and states are beginning to assess the extent to which this issue will affect their ability to accurately determine the outcome of WIA- funded programs. There are other issues not related to the use of UI wage records that VETS should consider as it finalizes its performance-reporting requirements. VETS’ proposed performance system does not standardize how states report veterans or nonveterans who use self-service activities, making it difficult to reliably assess nationwide performance. In an environment in which self-service is becoming more common, we found that states vary in whether they register veteran job seekers who access self-service tools, such as internet-based job listings or resume writing software. For example, some states allow job seekers greater access to job listings without requiring that they register, while others have more restrictions on who can access job lists. Table 3 shows how such differences can affect entered-employment rates. In this example, 100 veterans enter the employment service for assistance. In both cases, 40 veterans ultimately get jobs after receiving identical services. In one case, the placement rate is 40 percent and in the other, 50 percent—a 10 percentage point difference. This difference results from counting all job seekers in one case and only those requiring staff assistance in the other. As a result of the different ways states currently count veterans and report outcomes, the entered-employment rate measure is not consistently calculated across states, and nationwide comparisons are misleading. VETS’ proposed performance system does not standardize how long a veteran or nonveteran remains registered after seeking services for performance-reporting purposes. We found that states differ in how long they keep veterans registered. This difference affects the calculation of the entered-employment rate (i.e., the number of veterans that get jobs), making performance comparisons across states less reliable. 
Many of the states we contacted count individuals as registered who have received a service in the last 6 months. However, two states only count those as registered who have received a service in the last 3 months, while two others count only those who received a service in the last 2 months. And in one state, anyone who has received a service from the state’s employment office since 1998 is counted as a registrant when determining the entered- employment rate. States with shorter registration periods may be able to report a higher entered-employment rate than states with longer registration periods. VETS is improving its performance measurement system by proposing new measures that are more outcome-oriented than its current measures and by requiring that all states use wage record data to improve the comparability and reliability of reported program performance. While these changes move VETS a step closer to implementing an effective accountability system, they may not go far enough. VETS continues to send a mixed message to states about what services to provide and to whom. As presently defined, two of the proposed measures—the entered- employment rate and the employment rate following staff-assisted services—may provide nearly identical results, and neither helps VETS to monitor whether more intensive services are being provided to veterans or whether these services are successful. VETS also continues to inconsistently identify the groups of veterans that it wants states to help. In addition, VETS maintains a measure related to federal contractors—one that is beyond the control of DVOPS and LVERs. Furthermore, in its proposed system, VETS allows states to decide which veterans to include in its performance reports. This results in data inconsistencies that make state-to-state comparisons unreliable. Without clear and consistent direction from VETS’ planning documents and performance measures, staff assisting veterans will be uncertain where to place their priorities. In addition, without stricter guidelines for how to count veterans, VETS will be unable to accurately assess program performance nationwide. Unless further modifications are made, VETS will be unable to fully determine whether its programs and services are fulfilling its mission. In order to establish a more effective performance management system, we recommend that the Secretary of Labor direct VETS to do the following: Redefine staff-assisted services to include only those that may be considered staff intensive, such as case management, so that VETS will be able to evaluate the success of intensive staff-assisted services. Clearly define target populations so that staff assisting veterans know where to place their priorities. If staff are to focus on assisting veterans who need more assistance, VETS should provide incentives and opportunities to do so through appropriate performance measures or negotiated levels of performance. Eliminate the measure related to federal contractor jobs so that staff are not held accountable for the number of federal contractors in a state or local area or for the failure of contractors to list their jobs with ES offices. Establish and communicate guidelines that standardize how to count veterans for performance-reporting purposes so that VETS will be able to assess program performance nationwide. We provided VETS with the opportunity to comment on a draft of this report. Formal comments from VETS appear in appendix III. 
In addition to the comments discussed below, VETS provided technical comments that we incorporated where appropriate. VETS generally agreed with our findings and two of our recommendations but disagreed with the other two recommendations. VETS acknowledged that its current strategic plan (Nov. 2000) sends a mixed message to the states about which groups of veterans staff should target for special attention. VETS noted that it is revising its strategic and annual plans to reflect a more consistent message about what services to provide and to whom. VETS also explained that it is developing new performance standards specific to DVOP and LVER staff that will clarify the role they play in providing services to veterans. According to VETS officials, states will have the option of using these specific standards or developing their own. When developing these standards, VETS will need to ensure that the specific standards developed for DVOPS and LVERs are consistent with the message in the revised strategic plan and that together they provide a coherent strategy as to where staff should place their service priorities. VETS disagreed with our recommendation for a revised definition of the performance measure related to staff-assisted services. VETS said that any veteran receiving staff-assisted services may require a multitude of the services cited in the definition—any one of which or combination thereof may require extensive staff time. We disagree that any one of these services necessarily requires extensive staff time. As noted in our report, a veteran may be counted as receiving staff-assisted services after receiving only a job referral or labor market information—services that by themselves would not involve extensive staff resources. Moreover, we continue to believe that the broadly defined staff-assisted service measure will likely not report outcomes substantially different from those reported for the more general entered-employment rate measure. As noted in our report, a stricter definition for staff-assisted services that includes only those services generally considered to be staff-intensive would allow VETS to more accurately assess outcomes associated with those services. VETS disagreed with our recommendation to discontinue the measure related to jobs listed by federal contractors. However, VETS agreed to reconsider the suitability of this specific measure after public comments have been received. As we noted in our report, the presence of federal contractors in a given state or local area is determined by the federal agencies awarding contracts. In addition, state officials told us that it is the responsibility of the contractors, not DVOP and LVER staff, to list their job openings with employment services. Current law requires the Secretary of Labor to report annually to the Congress on the number of federal contractor positions listed and the number of veterans receiving job priority through this program. This information could be collected in absence of a specific performance measure. With regard to our recommendation that VETS establish guidelines that standardize how states count veterans for performance-reporting purposes, VETS said that it will be working with ETA to determine how states can uniformly report veterans and nonveterans that use self-service activities. In addition, VETS noted that the revised ETA 9002 report will provide uniform instructions on how long individuals remain registered in the system. We are sending copies of this report to the Honorable Elaine L. 
Chao, Secretary of Labor; appropriate congressional committees; and other interested parties. We will also make copies available to others upon request. If you or your staff have questions about this report, please contact me on (202) 512-7215 or Dianne Blank on (202) 512-5654. Individuals making key contributions to this report include Elizabeth Morrison and Amanda Ahlstrand. Similar to Workforce Investment Act (WIA) programs, the Employment Service (ES) and the Veterans’ Employment and Training Service (VETS) are proposing that their programs use Unemployment Insurance wage records to report on performance measures. Each calendar quarter, employers submit wage record data to their state’s UI agency or some other state agency. The following table compares the proposed performance measures of VETS and ES and those used by WIA’s adult and dislocated worker programs. ES proposed performance measures Entered-employment rate: The percentage of workers who got a job in the 1st or 2nd quarter after registration. WIA performance measures (adult and dislocated worker programs) Entered-employment rate: The percentage of workers who got a job by the end of the 1st quarter after exit. Employment retention rate: Of those who had a job in the 1st quarter after exit, the percentage of workers who have a job in the 3rd quarter after exit. Employer customer satisfaction: Average of three survey questions on employers’ satisfaction with services received. Job seeker customer satisfaction: Average of three survey questions on job seekers’ satisfaction with services received. Employer customer satisfaction: Average of three survey questions on employers’ satisfaction with services received. Job seeker customer satisfaction: Average of three survey questions on job seekers’ satisfaction with services received. WIA performance measures (adult and dislocated worker programs) Earnings change (adults only): The difference between total post-program earnings (from the 2nd and 3rd quarters after exiting the WIA program) and the total pre-program earnings (from the 2nd and 3rd quarters prior to entering the WIA program) divided by the number of participants leaving the program. Earnings replacement rate (dislocated workers only): Total post-program earnings (in the 2nd and 3rd quarters after exit) divided by pre- dislocation earnings (in the 2nd and 3rd quarters prior to dislocation). Staff-assisted services include: (a) referral to a job; (b) placement in training; (c) assessment services, including an assessment interview, testing, counseling and employability planning; (d) career guidance; (e) job search activities, including resume assistance, job search workshops, job finding clubs, specific labor market information and job search planning; (f) federal bonding program; (g) job development contacts; (h) tax credit eligibility determination; (i) referral to other services, including skills training, educational services and supportive services; and (j) any other service requiring expenditure of time. Application taking and/or registration services are not included as staff- assisted services. | This report discusses the proposed performance measurement system at the Department of Labor's Veterans' Employment and Training Service (VETS). Specifically, GAO reviews (1) VETS' proposed performance measures, including possible concerns about the measures; (2) the proposed data source for the new system; and (3) other measurement issues that would effect the comparability of states' performance data. 
GAO found that VETS' proposed performance measures would improve performance accountability over the current system, but some aspects of the new measures raise concerns. VETS' strategic plan suggests that states focus their efforts on providing staff-assisted services to veterans, including case management. Yet none of the proposed measures specifically gauge the success of these services. In addition, VETS' proposal includes one measure--the number of federal contractor jobs listed with local employment offices--that is not only process-oriented but also focuses on outcomes that are beyond the control of staff serving veterans. VETS proposes that all states use a single data source--Unemployment Insurance wage records--to identify veterans who get jobs. Using these data will greatly improve the comparability and reliability of the new measures. Although using these data will improve some aspects of data collection, the data present some challenges. States generally do not have access to wage records from other states and, therefore, should find ways to track individuals who receive services in one state but get a job in another. Other issues that affect the comparability of states' performance-related data should be considered. For example, states vary in whether they register and count, for performance reporting purposes, job seekers who use only self-service tools, such as internet-based job listings. |
Medicare serves about 39 million beneficiaries and spends about $212 billion a year. Its benefits include hospital, physician, and other services such as home health and limited skilled nursing facility care. HCFA administers Medicare and regulates participating providers and health plans. Original, or traditional, Medicare reimburses private providers on a fee-for-service basis and allows Medicare beneficiaries to choose their own providers without restriction. A newer option within Medicare allows beneficiaries to choose among private, managed care health plans. Currently, 17 percent of beneficiaries use Medicare managed care. In original Medicare, beneficiaries must pay a share of the costs for various services. Most Medicare managed care plans have only modest beneficiary cost-sharing and many offer extra benefits, such as prescription drugs. DOD received an appropriation for military health care of almost $16 billion in fiscal year 1999. Of that, an estimated $1.2 billion is spent on the 1.3 million Medicare-eligible military retirees. Under its TRICARE program, DOD provides health benefits to active duty military, retirees, and their dependents, but most retirees 65 and over lose their eligibility for comprehensive, DOD-sponsored health coverage. DOD delivers most of the health care needed by active duty personnel and military retirees through its military hospitals and clinics. DOD gives priority for care to active duty personnel and their dependents, and to certain retirees under 65. Retirees who turn 65 and become eligible for Medicare can get military care if space is available (called space-available care)—that is, after other DOD beneficiaries are treated. Some military facilities have little or no space-available care. The TRICARE program covers services of military physicians as well as civilian network providers by drawing on DOD’s appropriated funds and premiums and copayments charged to some enrollees. In TRICARE Prime, DOD generally organizes the delivery of care on managed care principles—for example, an emphasis on a primary care manager for each enrollee. DOD has gained considerable experience with managed care, but it relies heavily on contractors to conduct marketing, build a network of providers, and perform other critical functions. The BBA established a 3-year demonstration of Medicare subvention, to start on January 1, 1998, and end on December 31, 2000. Within the BBA’s guidelines, DOD and HCFA negotiated a Memorandum of Agreement (MOA). The MOA stated the ways in which HCFA would treat DOD like any other Medicare health plan and the ways in which HCFA would treat it differently. The MOA also spelled out the benefit package and the rules for Medicare’s payments to DOD. After DOD and HCFA signed the MOA, they selected six demonstration sites. These sites would be able to serve about 30,000 of the 125,000 people eligible for both Medicare and military health benefits in these areas. The subvention demonstration made DOD responsible for creating a DOD-run Medicare managed care organization for elderly retirees. This pilot health plan, which DOD named Senior Prime, is built on DOD’s existing managed care model. By enrolling in Senior Prime, Medicare-eligible military retirees obtain priority for services at military facilities—an advantage, compared to nonenrollees. Senior Prime’s benefit package is “Medicare-plus”—the full Medicare benefits package supplemented by some other benefits, notably prescription drugs.
Medicare makes interim payments to DOD for the care of Senior Prime enrollees, but DOD may only retain a portion of these payments if that year’s costs for the six sites together exceed baseline LOE. VA provides a comprehensive array of health services to veterans with service-connected disabilities or low incomes. Since 1986, VA has also offered health care to higher-income veterans, who must, however, make copayments for services. Overall, VA serves over 13 percent of the total veteran population of 25 million, with the remaining veterans receiving their health care through private or employer health plans or other public programs. Many of the veterans whom VA serves also get part of their care from other sources, such as DOD, Medicaid, and private insurance. The administration has requested $17.3 billion for VA medical care in fiscal year 2000. To make up the difference between appropriated funds and projected costs, VA estimates that, by fiscal year 2002, it can derive almost 8 percent of the medical care budget from nonappropriated sources, including Medicare reimbursement. Since the early 1990s, VA has shifted its focus from inpatient to outpatient care. At the same time, it implemented managed care principles, emphasizing primary care. In 1995, VA accelerated this transformation by realigning its medical centers and outpatient clinics into 22 service delivery networks and empowering these networks to restructure the delivery of health services. In 1996, the Congress passed the Veterans’ Health Care Eligibility Reform Act that established, for the first time, a system to enroll or register veterans. Enrollment is in effect a registration system for veterans who want to receive care. The law establishes seven priority groups, with Priority Group 1 the highest and Priority Group 7 the lowest. Priority Group 7 includes veterans whose incomes and assets exceed a specified level and (a) do not have a service-connected disability or (b) do not qualify for VA payments for those disabilities. Priority Group 7 veterans must agree to make copayments for health services. Enrolled veterans in all priority groups are eligible for a broad package that covers inpatient and outpatient care; rehabilitative care and services; preventive services; respite and hospice care; and pharmaceuticals, durable medical equipment, and prosthetics. Enrolled veterans remain free to get some or all of their care from other private or public sources, including Medicare. VA, on the other hand, is committed to serving all enrolled veterans. The structure of any VA subvention demonstration would depend upon the principles and directions that the Congress incorporates in authorizing legislation. We have found certain common elements in all demonstration proposals we reviewed. A VA subvention demonstration would serve certain higher-income, Medicare-eligible veterans (effectively, Priority Group 7 veterans): for a limited time period, such as 3 years; in a limited number of locations; and in compliance with Medicare rules that HCFA applies to the private sector, although HCFA could waive rules that were inappropriate or irrelevant to VA. The proposals would also direct VA to maintain reserves against the risk that appropriated funds would be needed to pay for the care of veterans enrolled in the subvention demonstration. Some proposals authorize VA to establish both fee-for-service and managed care subvention sites, while at least one only authorizes managed care. In implementing the subvention demonstration, DOD and HCFA completed numerous and substantial tasks.
DOD sites had to gain familiarity with HCFA regulations and processes, prepare HCFA applications, prepare for and host a HCFA site visit to assess compliance with managed care plan requirements, develop and implement an enrollment process, market the program to potential enrollees, establish a provider network (for care that cannot be provided at the military treatment facilities), assign Primary Care Managers to all enrollees, conduct orientation sessions for new enrollees, and begin service. The national HCFA and DOD offices developed a Memorandum of Agreement, spelling out program guidelines in broad terms. They also developed payment mechanisms, and translated the BBA requirement that DOD maintain its historical LOE in serving dual eligibles into a reimbursement formula. HCFA accelerated review procedures and assigned additional staff so that timelines could be met. But these accomplishments were not without difficulties, and several issues remain that are likely to impact the demonstration’s results. These include the extent to which payment rules can be made more understandable and workable, and the extent to which DOD can operate successfully and efficiently as a Medicare managed care organization. In view of the steep learning curve that DOD faced—it started without any Medicare experience—it is not surprising that the demonstration did not start on time. The BBA was enacted in August 1997 and authorized a demonstration beginning in January 1998. The first site started providing service in September 1998, and all sites were providing service by January 1999. Officials at all DOD sites emphasized to us that the process of establishing a Medicare managed care organization at their facility was far more complex than they had expected. They noted several issues that caused difficulty during this accelerated startup phase, including the following: Delayed notification to sites of their selection for the demonstration. Difficulties in learning and adapting to HCFA rules, procedures, and terms for managed care organizations. For example, DOD had to significantly rework grievance and appeals procedures to comply with HCFA requirements. Difficulties due to shifts in Medicare requirements. All sites started planning as HCFA was developing the new Medicare managed care regulations to replace the rules for the former risk contract managed care program. Consequently, the sites had to adapt to changed rules when they were published. Sites vary significantly in their capacity for caring for Medicare-eligible retirees, how close enrollment is to capacity, and what fraction of eligibles has enrolled. This variation suggests that potential demand for a subvention program is uncertain. Retirees’ enrollment decisions reflect several factors, some that DOD may be able to influence but others—such as the extent of managed care presence in an area—outside its control. In establishing their enrollment capacity—which effectively became an enrollment target—some sites were more conservative than others. Sites’ assessment of their resources focused on the availability of primary care managers—physicians and other clinicians who both provide primary care and serve as gatekeepers to specialist care. Additionally, the national TRICARE office developed a model to show how many enrollees a site would need to meet its LOE threshold and start receiving increased resources from subvention, and these results were made available to sites. 
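The details of the TRICARE office’s model were not available to us; the sketch below is a simplified illustration, under assumed figures, of the two calculations involved: roughly how many enrollees a site needs before its Senior Prime costs reach its LOE baseline, and how an interim payment for above-threshold care might be computed. The baseline, per-enrollee cost, and monthly amounts are hypothetical.

```python
# Illustrative sketch only. The demonstration's actual LOE and payment rules are
# more complex; the dollar figures and the flat per-enrollee cost are assumptions.
import math

def enrollees_to_reach_loe(annual_loe_baseline, avg_cost_per_enrollee):
    """Rough break-even enrollment: the point at which Senior Prime care,
    valued at an average cost per enrollee, equals the site's historical
    level of effort (LOE) for dual eligibles."""
    return math.ceil(annual_loe_baseline / avg_cost_per_enrollee)

def interim_medicare_payment(monthly_senior_prime_cost, monthly_loe_threshold):
    """Interim payments cover only care delivered above the monthly LOE
    threshold; care below the threshold is funded from DOD appropriations."""
    return max(0.0, monthly_senior_prime_cost - monthly_loe_threshold)

# Hypothetical site: $24 million annual LOE baseline, $6,000 average cost per enrollee.
print(enrollees_to_reach_loe(24_000_000, 6_000))           # 4000 enrollees
# Hypothetical month: $2.3 million of care against a $2.0 million threshold.
print(interim_medicare_payment(2_300_000.0, 2_000_000.0))  # 300000.0 potentially payable
```

Under these assumptions, a site enrolling fewer retirees than the break-even figure would not yet draw increased resources from subvention.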
Capacity varied from San Antonio, the largest site with four hospitals and a capacity of 12,700, to Dover, which provides only outpatient care in its military health facility and set its capacity at 1,500. Many DOD officials and other observers expected that sites would be deluged with applications and would rapidly reach capacity, but this did not happen. One site is currently at capacity, but only after several months. Other sites have enrolled between 44 percent and 91 percent of capacity as of the end of April 1999. The share of eligibles who enrolled also appears to vary with the extent of managed care in each area, from sites where 50 percent of dual eligibles are in private Medicare managed care plans to two sites with higher percentages of enrollees (Keesler and Dover), where no one is in managed care because no plans are available. The availability of military care varies. Several sites emphasized in their marketing that retirees who did not enroll could not count on receiving space-available care. This information might spur retirees who prefer military care to enroll in Senior Prime. At other sites, space-available care was less of an issue. At these sites, prospective enrollees who believe that they can continue to receive space-available care may not see an advantage in enrollment but rather a disadvantage—especially because enrolling in Senior Prime locks them out of other Medicare-paid care. Sites may differ in the amount of space-available care they have given in the past and in beneficiaries’ satisfaction with that care. These factors could also affect the decision to enroll. Some retirees expressed reluctance to enroll because the demonstration is due to end in December 2000. They also noted that they did not get information about how, after the demonstration ends, enrollees would transition back to space-available care, traditional fee-for-service Medicare, or a Medicare managed care organization. The subvention demonstration for military retirees aged 65 and over is a new endeavor that highlights challenges for DOD to operate as a Medicare managed care organization. The first is operational—putting in place procedures, organization, and staff to deliver a managed care product to these seniors. The second is economic and organizational—creating the business culture that reconciles delivering services to this illness-prone population with cost-consciousness. DOD’s reliance on contractors (like Foundation Health and Humana) has both enabled it to accomplish key managed care tasks and brought risks with it. DOD overcame obstacles in launching TRICARE Senior Prime as a managed care organization. Specifically, establishing and running a managed care plan requires infrastructure—the ability to market the plan, enroll members, and recruit, manage, and pay a provider network. In building Senior Prime organizations at the six sites, DOD has benefited from its TRICARE Prime experience, and from its contractors who help with or perform many of these tasks. Sites with well-established TRICARE Prime organizations that had worked with the same contractor for several years seemed to us to have a sizeable advantage in establishing Senior Prime. It is not yet known what effect DOD’s extensive use of contractors will have on DOD costs for Senior Prime. But an expanded, permanent subvention program would require establishing and monitoring contractors at many new sites. That would make contractor quality, relationships, and costs a pivotal and uncertain feature of a potential DOD subvention program.
Site managers may be reluctant, for example, to get more military primary care doctors or to set up a new program with large up-front costs, even if these actions would promote longer-term efficiency. DOD and HCFA have devised payment rules to meet the statutory requirement that Medicare should pay DOD only after its spending on retirees’ care reaches predemonstration levels—that is, after it has met its baseline, or LOE. These rules have added to the difficulty and the complexity of the demonstration. Furthermore, they have resulted in Medicare payments to DOD not being immediately distributed to the sites. As a result, DOD site managers tend to view DOD appropriations as the sole funding source for all Senior Prime care delivered at military health facilities; the managers are likely to consider Medicare subvention payments as irrelevant to their plans for dealing with capacity bottlenecks or other resource needs in TRICARE Senior Prime. The demonstration’s payment system requires extensive cost and workload data—data that are often problematic and difficult to retrieve and audit. It also involves a complicated sequence of triggers and adjustments for interim and final payments from Medicare to DOD. Interim payments are made to DOD for care delivered at each site that is above a monthly LOE threshold. A reconciliation after the end of the year to determine final Medicare payments can result in DOD returning a portion of those interim payments if the LOE for all sites for the entire year is not reached. DOD would also return Medicare payments if data showed that the demonstration population was in better health than that allowed for in the Medicare payment rates, or if payments exceeded the statutory cap ($50 million in the first year, $60 million in the second, and $65 million in the third). This arrangement differs from Medicare’s regular managed care payment system, in which payments are made at the beginning of the month to cover care delivered during the month. Based on experience to date with the demonstration, any payment approach for subvention must be even-handed (that is, it should favor neither HCFA nor DOD); straightforward and readily understandable; and prospective (DOD and its sites should receive payment in advance of delivering care to enrollees). The demonstration’s payment mechanism, which relies on LOE, is functional in the short term—although the calculation of LOE has weaknesses. However, this payment mechanism may not be appropriate over the longer term for an extended or expanded subvention program. Moreover, a credible long-term payment system should start with a zero-based budgeting approach: first, determining the cost to DOD of providing TRICARE Senior Prime care to dual eligibles and then deciding how much care will be provided from DOD’s appropriations and how much from Medicare reimbursement. One of the key issues for VA under the proposed demonstration would be how to market subvention and persuade veterans in subvention sites to enroll in the demonstration. This issue is complicated by VA’s own enrollment process and the broad benefits package it offers to all priority groups. VA is committed, as a matter of policy, to serving all enrolled veterans in 1999 and has indicated a desire to do so next year. As a result, it has relatively few options if veterans in a subvention demonstration consume so many resources that they crowd out—or at least put pressure on VA’s capacity for serving—other veterans. Two models are possible for the demonstration—fee for service and managed care.
Although fee for service is, in principle, easier to implement and operate, VA’s past difficulties with billing third-party payers raise concern. Proposals for a VA demonstration could be strengthened by taking account of DOD’s difficulties in establishing a subvention demonstration. In particular, DOD’s experience shows that implementation is difficult and that enough time should be allowed to undertake the numerous operational steps needed to get a demonstration started. Furthermore, payment rules need to be as simple as possible, and data systems are key to managing and evaluating a subvention demonstration. A key marketing question is whether a subvention demonstration would offer veterans more than they can currently receive from VA. Priority Group 7 veterans—the only ones eligible for subvention—can now get all services in VA’s broad Uniform Benefits Package. Veterans who are eligible for Medicare can also get care from non-VA providers—either under fee-for-service or through a managed care plan. If it needed to make subvention benefits more attractive, VA could either reduce copayments or increase benefits. However, VA officials tell us that, due to resource constraints, VA may not serve Priority Group 7 veterans in the future. If this happens, these veterans could only get VA services through a subvention demonstration and hence would probably be more likely to enroll. (To make this exception possible, legislation would be required, as eligibility for VA enrollment is uniform nationally.) Some VA officials have suggested to us that, to give Priority Group 7 veterans a reason to enroll, it may be necessary to exclude them from VA services—except through the demonstration. The greatest risk in a VA subvention program is that subvention enrollees could consume so many services that VA patients in higher priority groups would be “crowded out.” However, VA, according to its policy, cannot deny care to an enrolled veteran (that is, one who is registered with VA), even if it does not have sufficient capacity. In the short term, waiting times for appointments would probably increase, or care could be limited to certain facilities, which might be inconvenient for some veterans. VA could also reduce its benefits package, although that would require a change in regulations. In the longer term, some veterans could be denied all VA care if VA excludes one or more priority groups. This would be particularly serious for veterans who lack other insurance. Current proposals for a VA subvention demonstration permit both managed care and fee-for-service sites. Of the two, fee for service appears to be easier to implement, because it only requires submitting claims for covered services to HCFA for payment. However, in the past, VA has had difficulty in collecting from insurance companies because its bills have not had enough detail (for example, diagnosis, service, procedure, and individually identified provider). While VA is moving toward a billing system that will more closely approximate private sector counterparts, its success remains to be seen. Managed care, by definition, places VA at financial risk, and it is also, as DOD’s experience demonstrates, difficult to implement. On the other hand, managed care is highly compatible with the direction in which VA is currently moving. However, VA does not have the experience that DOD gained from TRICARE, and it does not have broad-based managed care contractors that appear to have greatly facilitated implementing and managing the DOD demonstration.
If a VA subvention demonstration were to include both managed care and fee-for-service sites, a phased implementation, with one type of delivery system being successfully implemented before the other started, would allow both HCFA and VA to focus their resources. The requirements for Medicare fee for service and managed care differ considerably. As a result, implementing both types of sites simultaneously may place significant strains on both HCFA and VA staffs, particularly at the national level. We see three main lessons for VA in DOD’s experience in establishing its subvention demonstration. Officials at every DOD site told us that establishing a Medicare managed care organization was more difficult and required more effort than they had expected. Months into the implementation, they continue to encounter new issues. Even though the sites took 13 to 17 months after the legislation was passed to establish Senior Prime, hindsight suggest that the goals to get it running earlier were unrealistic. If a VA demonstration is authorized, it should have 12 to 18 months to implement its plans for the demonstration; both VA headquarters and the sites will need that much time. The complexity of the LOE definition and Medicare payment rules, as well as ambiguity about what sites could earn and whether earnings would be distributed to the sites, were issues for DOD. These factors caused many site managers and physicians to largely disregard the potential changes in available financial resources and focus their attention primarily on implementation and patient care issues. As a result, the demonstration may not produce the cost savings and efficiencies that are expected from managed care. VA and HCFA have tentatively agreed to rules that are consistent with the DOD rules and still contain many of the elements that have made it difficult for DOD to manage the demonstration. In particular, payments would be retrospective and an annual reconciliation process could lead to VA returning money to HCFA. DOD’s experience shows that data systems are a point of vulnerability for a successful and credible program. The extent to which data quality would pose an obstacle to a VA demonstration depends in part on how the payment rules are specified. Good data, consistent across sites, would also be needed to manage and evaluate the demonstration. Data quality problems would probably vary by site, with some sites having better data than others. The types of data systems needed would depend in part on the subvention model that is selected. For example, in a fee-for-service model, billing systems are critical. In addition, both DOD and VA will need to develop a strategy to inform and assist beneficiaries with their options in the postdemonstration period. Further, as Medicare enrollment in managed care plans is shifting to an annual open season, it would be desirable to coordinate enrollment in and termination of the demonstration with Medicare’s open season. Subvention holds significant potential for giving military retirees and veterans an additional option for health care coverage, for giving DOD and VA additional funds, and for saving Medicare money. However, at this point—with little systematic data yet available—these outcomes are uncertain. This uncertainty underlines the value of demonstrations of subvention, such as the one that the BBA established for DOD. If a VA demonstration were authorized, VA would clearly need sufficient time to plan and initiate it. 
VA could also increase its chance of successfully establishing the demonstration if it took advantage of DOD’s experience. Mr. Chairman, this concludes our prepared statement. We will be happy to answer any questions that you or Members of the Committee may have. | Pursuant to a congressional request, GAO discussed the Department of Defense's (DOD) Medicare subvention demonstration program, focusing on: (1) the early phases of implementing the DOD demonstration; (2) issues raised by that experience for DOD subvention; and (3) lessons from the DOD demonstration for a possible Department of Veterans Affairs (VA) demonstration. GAO noted that: (1) subvention holds the potential to benefit military retirees and veterans, DOD and VA, and Medicare; (2) although it got off to a slow start, DOD has initiated its subvention demonstration and is now serving Medicare-eligible military retirees at six sites; (3) several key operational issues remain; (4) these include development of more understandable payment rules, viable for the longer term, and development of data to manage the demonstration and support its evaluation; (5) most important, the demonstration's final results, in terms of access to health care, quality of patient care, and costs to DOD, Medicare, and retirees, will not be known until the evaluation is completed, several months after the end of the demonstration in December 2000; (6) DOD's early experience with subvention does offer insights if proposals are acted on to permit Medicare subvention for VA; (7) in particular, it would need to consider, in collaboration with the Health Care Financing Administration, how to determine its baseline costs and payment rules, as well as the need for good data for implementation, management, and controlling costs; (8) moreover, VA would need to make its regular enrollment of veterans who wish to use VA health care services interface smoothly with subvention demonstration enrollment; (9) VA would also need to be concerned about potential crowding-out of other, higher-priority veterans by subvention enrollees; and (10) GAO's early work on DOD subvention suggests that VA would have a greater chance of success if it has sufficient time to plan and establish the demonstration, and if the value and feasibility of implementing fee-for-service and managed care subvention models simultaneously were reconsidered.
Millions of state and local government employees are supplementing their future retirement benefits by contributing to salary reduction plans called salary reduction or defined contribution arrangements. Such plans enable participants to defer part of their current salary for future use. The goal of these plans is to postpone federal income tax until the amounts deferred from an employee’s salary and any earnings or losses thereon are received by the participant at separation or retirement. All salary reduction plans pose some risk of financial loss from poor investment performance. However, amounts in plans organized under Internal Revenue Code (IRC) section 457(b) (hereinafter referred to as 457 plans) bear additional risk because salary deferrals to 457 plans are assets of the sponsoring employer that may be used for nonplan purposes and which are subject in the event of bankruptcy to the claims of general creditors. For example, one municipality’s recent bankruptcy could cause financial losses to employees who participated in its 457 plans. In addition, amounts earmarked to pay another county’s 457 plan obligations could have been at risk when the county intended to use those amounts to meet payroll expenses. In that case, the county might not have had the funds available when the time came to pay out the amounts due the 457 plan participants. In both cases, county officials were entitled to use the money saved to pay 457 plan obligations for nonplan purposes. As a result, concerns have been raised about the security of deferrals that participants make from their salary under 457 plans. A state or local government may elect to offer its employees, among other retirement plans, a deferred compensation arrangement under IRC sections 403(b), 401(k), and 457(b). In all three types of plans, employees may voluntarily defer compensation through payroll deductions. Federal income tax is postponed until employees begin to receive their account balances, usually at retirement or when they are no longer employed by the plan’s sponsor. These three salary reduction plans typically are intended to supplement an employer-sponsored qualified pension plan under IRC section 401(a). In general, 401(k) plans, sometimes referred to as cash or deferred arrangements, are qualified plans that allow employees to choose between receiving current compensation or having part of their compensation contributed to a qualified profit-sharing or stock bonus plan. A 403(b) plan, a qualified-type plan sometimes referred to as a tax-sheltered annuity, is a deferred compensation arrangement that may be sponsored only on behalf of employees of public educational systems and other specific tax-exempt organizations. Section 457(b) plans are nonqualified, unfunded deferred compensation plans that may cover all employees of a state or local government and certain highly compensated employees of a tax-exempt organization. Such plans permit these employees to defer limited amounts of compensation so that, under the principles of constructive receipt and economic benefit, tax will also be deferred on the amounts plus their earnings until some future event. Eligibility and the security of deferred amounts vary among the three plan types. Employees of public schools, colleges and universities, and some private institutions exempt from tax under IRC section 501(c)(3), such as hospitals, typically participate in 403(b) plans. 
Contributions to these plans are generally maintained as an annuity contract or custodial account, both of which are reserved for the sole benefit of the participant and his or her beneficiaries. Employee deferrals under section 401(k) plans are held in trust for the sole benefit of the participants and their beneficiaries. These participants are primarily employed in the private sector. However, some state and local governments established these plans in the late 1970s and early 1980s for their employees. With the enactment of the Tax Reform Act of 1986,state and local government employers who had not previously established 401(k) plans were prohibited from establishing new 401(k) plans, but existing plans could continue. Despite concerns raised by representatives of state and local governments, among others, the rationale for this exclusion was that allowing public employees to have access to both 401(k) plans and 457 plans would be “inappropriately duplicative.” Only employees of and independent contractors providing service to state and local government and tax-exempt organizations may participate in 457 plans. Unlike 401(k) and 403(b) plans that are funded and must comply with the nondiscrimination and minimum participation rules, section 457 plans are unsecured promises of the employer to pay amounts in the future. A section 457 eligible, salary reduction plan requires that all deferred compensation and income shall remain solely the property of the employer and be subject to the claims of the employer’s general creditors. In 1999, 401(k) and certain 403(b) plans must begin testing for nondiscrimination and minimum participation rules. Generally, the nondiscrimination rule requires that benefits or contributions provided under the plan do not discriminate in favor of highly compensated employees. The minimum participation rule requires that the plan benefit at least the lesser of 50 employees or 40 percent of all employees. The minimum coverage rule requires that the percentage of nonhighly compensated employees who benefit under the plan must be at least 70 percent of the highly compensated employees who benefit under the plan, or the nonhighly compensated employees in the workforce must receive benefits that on average are at least 70 percent of the benefits received by highly compensated employees. State and local government sponsors of these plans have expressed concern that required compliance with these rules will be burdensome and costly. Two tax principles, constructive receipt and economic benefit, are often intertwined in matters regarding nonqualified, unfunded deferred compensation. Under the principle of constructive receipt, income is taxable even when an employee has not actually received current compensation, if the compensation is credited to the employee’s account, set apart for the employee, or otherwise made available to the employee. The principle of economic benefit, on the other hand, taxes assets that have been unconditionally and irrevocably transferred into a fund for the employee’s sole benefit because he or she has received a benefit (that is, some deferred salary) that, although not readily convertible to cash, has an immediate value (that is, a fund for his or her benefit) that is secured from the employer’s creditors. 
Section 401(k) and 403(b) plans are funded, qualified or qualified-type arrangements where the deferred amounts are placed in trust; that is, set aside for the exclusive benefit of the employees who participate in the plans, secured from an employer’s creditors. So that such arrangements would not cause the participants to be taxed under the basic principles of constructive receipt and economic benefit, the Congress overrode these two principles by providing for income to be taxed only when it is distributed. Section 457 plans, on the other hand, are nonqualified, unfunded deferred compensation plans that follow the basic principles of constructive receipt and economic benefit. Participants are not in constructive receipt of their deferrals because the amounts are not set apart for or otherwise available to them at any time. Participants do not derive the economic benefit of their deferred compensation because the deferred amounts are the property of their employers and subject to the employers’ general creditors. Instead, participants have bookkeeping accounts with balances that represent the amount that the employers promise to pay at some future time. These account balances are comprised of amounts deferred under the plan and any earnings or losses that would have accrued to those amounts if the account balances had been invested as stated under the plan. Although most employers sponsoring 457 plans invest amounts as necessary so that they will be able to provide the promised benefit when due, there is no requirement for them to do so. In 1972, IRS issued the first of a number of private letter rulings holding that tax may be deferred on employee contributions from salary to a nonqualified, unfunded deferred compensation plan where a state or local government was the employer. Nonqualified, unfunded deferred compensation plans of state and local governments and tax-exempt organizations were not subject at that time to certain restrictions placed on qualified plans: (1) they did not need to comply with nondiscrimination rules applicable to qualified plans; (2) there was no limit on the amount participants could contribute; and (3) participants in nonqualified, unfunded plans, unlike participants in qualified plans, could make tax-deductible contributions to individual retirement accounts (IRA). In 1977, however, IRS stopped issuing private letter rulings on the income tax treatment of amounts deferred under nonqualified, unfunded deferred compensation plans, pending formal review of its position. In 1978, IRS changed its position and published proposed regulations that would have subjected participants in nonqualified, unfunded deferred compensation arrangements to immediate taxation on deferred amounts. “cannot have any secured interest in the assets purchased with their deferred compensation and the assets may not be segregated for their benefit in any manner which would put them beyond the reach of the general creditors of the sponsoring entity.” Section 401(k) plans must meet three federal requirements for employee participation to be considered qualified. First, the value of the benefits that highly compensated employees as a group may receive is limited by the value of the benefits the less well paid employees collectively receive; this is the nondiscrimination rule. Second, at least the lesser of 50 employees or 40 percent of all eligible employees must participate in the plan; this is the minimum participation rule. 
Third, the plan must benefit a percentage of nonhighly compensated employees that is at least 70 percent of the percentage of highly compensated employees benefiting under the plan, or the nonhighly compensated employees in the workforce must receive benefits that, on average, are at least 70 percent of the benefits received by highly compensated employees; this is the minimum coverage requirement. Additionally, among many other requirements, sponsoring employers must meet certain nondiscrimination tests and report their annual levels of participation, current assets, and current liabilities.

Tax-sheltered annuities under section 403(b) must meet nondiscrimination rules. Salary reduction deferrals to 403(b) plans must also meet special nondiscrimination rules that are deemed satisfied if all employees are permitted to defer in excess of $200. Starting in 1997, section 403(b) plans that provide employer matching contributions will have to meet special nondiscrimination rules provided by section 401(m). Starting in 1999, section 403(b) plans that provide nonelective contributions (employer contributions that do not reduce a participant’s salary) will be required to meet the nondiscrimination, minimum coverage, and minimum participation rules.

In making 457 plans eligible for tax deferral treatment, the Congress limited the amount of compensation that may be deferred but permitted participants to wait until after separation from employment to elect the time and method of payout. However, minimum participation, minimum coverage, and nondiscrimination rules that are a cornerstone for the tax-favored status of qualified, funded plans were not imposed.

Little information is available on the number of 401(k) and 403(b) plans sponsored by state and local governments or the number of people participating in them. However, a 1993 study of over 400 state and local government general pension plans showed that about 8.4 percent of responding governments sponsored a 401(k) plan and 7.1 percent sponsored a 403(b) plan. In that study, 457 plans were the most frequently used salary reduction plans. About 90 percent of local governments and all 50 states provided their employees access to 457 plans. In 1994, an estimated 1,750,000 people participated in about 10,000 such plans sponsored by government entities nationwide.

Several bills have been introduced in the 104th Congress to redesign section 457 plans. For example, H.R. 2491, the omnibus budget reconciliation bill, contained provisions that would require all assets and income of a 457 plan to be held in trust for the exclusive benefit of participants and their beneficiaries. However, IRS officials told us that imposition of such a trust requirement would result in immediate taxation for deferrals to a 457 plan because of the requirements of IRC section 457(b)(6). With respect to section 401(k) plans, another provision of the reconciliation bill would have provided a simplified and less costly alternative method of testing for nondiscrimination requirements under IRC. In separate legislation, under section 14212 of H.R. 2517, which was incorporated into the reconciliation bill and then dropped, state and local governments and tax-exempt organizations would have been made eligible to provide 401(k) plans to their employees.

In response to concerns about financial losses to state and local government supplemental pension plans, the House Ways and Means Committee asked us to examine the nature and security of such plans.
After discussions with Committee staff, we agreed to determine (1) whether amounts held in state and local government salary reduction plans or otherwise promised to participants inherently are at risk of financial loss to the participants and (2) how statutory provisions comparatively treat participants in these plans.

To determine how the plan provisions affect financial risk, we interviewed IRS representatives and Securities and Exchange Commission (SEC) staff. In addition, we discussed the risk of financial loss relative to plan provisions, and whether the provisions treat participants comparably, with representatives of the Government Finance Officers Association; the International City/County Managers Association-Retirement Corporation; the National Association of Counties; the National Council on Teacher Retirement; the National Association of Government Deferred Compensation Administrators; and the Nationwide Insurance Company and its subsidiary, the Public Employees Benefit Service Corporation. To determine how state and local government plans are generally administered, we contacted plan administrators in Alabama, California, Connecticut, Florida, Georgia, Michigan, Minnesota, Mississippi, Nebraska, New Jersey, Ohio, Oklahoma, Tennessee, Texas, and Wisconsin. We selected these states on the basis of information they reported on their 457 plans in the 1993 PENDAT database. We focused our review on 15 states and on eight counties in California. We selected these counties because of the Committee’s concerns about the impact of the Orange County, California, bankruptcy on 457 plans sponsored by other California counties. We conducted our work from January 1995 through January 1996 in accordance with generally accepted government auditing standards.

The statutory requirements that provide for tax deferral for 457 plans also place the assets held to pay participants’ benefits at risk of loss from creditors of the government sponsor in the event of a bankruptcy and, unless the sponsor provides for a rabbi trust, from the government’s using them for other than plan purposes. Two 457 plans in California illustrate these risks. In one case, a county filed for bankruptcy protection, which put amounts set aside to pay the county’s obligations under its 457 plan at risk of being used to satisfy the county’s creditors. Because participants in 457 plans have no greater rights against their employer than general, unsecured creditors, such actions could reduce the amount participants otherwise would receive from the plan. In the other case, a county government intended to use amounts it set aside for its 457 plan obligations to meet payroll expenses. However, if the amounts held for 457 plan purposes had been placed in a rabbi trust—as permitted for all nonqualified, unfunded deferred compensation plans—they might have been protected from use for nonplan purposes by the sponsor but not from a sponsor’s creditors.

Orange County kept its tax revenues in an investment pool that was managed by its treasurer and that permitted investments from cities, municipalities, and political instrumentalities outside Orange County. The county regularly contributed deferrals to the pool to assist it in meeting its obligations under its 457 plan. The 457 plan provides that the experience of the investment pool will be used to determine the final amount due participants when they separate from service or retire.
Thus, participants’ bookkeeping accounts are credited at specified intervals with the interest, gains, and losses realized by the investment pool. From July to December 1994, the investment pool sustained heavy losses. This resulted in both the pool and Orange County filing for Chapter 9 bankruptcy on December 6, 1994. On May 2, 1995, the bankruptcy court approved a comprehensive settlement of the pool’s bankruptcy case. Under this court order, Orange County received amounts from the pool at a rate that was lower than 100 percent of its claims. No participant funds were used to pay pool creditors and the participants had no standing as claimants in the pool bankruptcy. The participants are general, unsecured creditors of Orange County only—not of the investment pool. However, Orange County’s claim against the pool included a claim for amounts it invested there so that funds would be available as needed to pay its yet unmatured obligations under the 457 plan. Technically, because performance of the pool serves as a measure used to calculate returns for the 457 plan, the investment loss that occurred in the pool would normally affect the account balances of plan participants. As of March 1996, the bankruptcy in Orange County is still ongoing. It is possible that all creditors may eventually be paid 100 percent of their claims and that participants in the 457 plan that invested in the pool may have their account balances fully restored. Administrators of the 457 plan told us that the bookkeeping accounts for each participant would be credited with interest, but all accounts have been reduced 10 percent for losses on investments in the pool.

In 1992, Los Angeles County intended to borrow $250 million of amounts deferred under its 457 plan to make payroll payments. SEC staff learned of Los Angeles County’s intentions and questioned the proposed action as potentially impairing the status of the plan under the federal securities laws. The SEC staff asserted that borrowing amounts set aside to pay the county’s obligations under its 457 plan for any reason other than satisfying obligations to the locality’s general creditors would conflict with representations made earlier to the SEC staff. These representations had been made in connection with a request by the insurance company that operated the separate accounts for participants when it sought the SEC staff’s no-action assurance that the separate accounts and the interests therein did not need to be registered under federal securities laws. The SEC raised concerns that the disposition of the assets needed to pay the county’s obligations under the plan as proposed by Los Angeles County conflicted with the representations made in seeking the no-action letter. As a result of both SEC’s questioning and media reports accusing local officials of wrongdoing, the county created a new investment option under its plan in the form of a loan fund, offering at least a 6-percent return over 15 years. A few participants agreed to have their deferrals treated as though invested in the fund and the county was able to raise $19 million of the $250 million it originally intended to borrow. We note, however, that SEC does not regulate 457 plans and its ability to influence the operation of 457 plans is limited to instances in which an unregistered collective trust or separate account seeks to rely on certain exemptions from federal securities laws in order to hold funds earmarked to pay a 457 plan obligation.
Although the county chose not to do so, it could have simply used the assets without paying interest on their use because statutory provisions governing section 457 mandate that amounts deferred remain the property of the employer.

There is no requirement that sponsoring governments actually invest amounts participants have deferred or credit their deferrals with interest earned. Instead, the governments are only responsible for making payments to participants under the terms of the plan, usually when they retire or change jobs. The terms of the plans usually provide that the amounts deferred will be treated as though they were invested in some identified asset or fund and that the benefit paid will include earnings that would have accrued on those amounts had they been so invested as well as any gains or losses that might have been experienced had the amounts been so invested. Although it is not required under section 457, sponsors and administrators normally make the actual investments referenced under the arrangement to ensure that they will have the amounts necessary to meet their 457 plan obligations. Notwithstanding this normal administration of 457 plans, 457 plan deferrals can never be invested solely for the benefit of the participant. They must always be available to the general creditors of the employer. Also, unless amounts set aside by the employer to meet its obligations are placed in a rabbi trust, these assets may be used for nonplan purposes.

Under IRC, 457 plans have a means to restrict a government’s nonplan use of amounts deferred to 457 plans—the rabbi trust. Under such a trust, the plan sponsor typically has no access to the funds but in an insolvency or bankruptcy, such funds can be reached by the general creditors. If the deferrals held by Los Angeles County for its 457 plan had been placed in a rabbi trust, the county may not have been in a position to use them to meet its payroll. However, a rabbi trust arrangement would not have protected Orange County employees, because that government declared bankruptcy.

Section 457 plans are substantially different from 401(k) and 403(b) plans. In addition to the differences discussed in chapter 1, these plans have limited portability and in some cases participants in 457(b) plans must irrevocably select a date to begin receiving their benefits. In addition, participants cannot defer as much as participants in 401(k) or 403(b) plans. Participants in 457 plans who leave their government employer before retirement are restricted in their ability to move the amounts in their 457 plan accounts to a funded tax-sheltered account. The plan accounts can only be transferred to another eligible 457 plan if the new government employer will accept the transfer. Amounts deferred under section 457 cannot be rolled over to an IRA with tax on the distribution deferred, as 401(k) and 403(b) plan funds can be. Participants in 457 plans who leave government service and do not have another 457 plan that they can transfer their bookkeeping accounts to have only two options: (1) commence immediate payment of benefits and pay income tax on the distribution or (2) defer the commencement of benefits to any date in the future that is before they turn 70-1/2 years old.
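The bookkeeping-account mechanics described above can be sketched as follows. This is an illustrative model only—the class and method names are ours—meant to show that a 457 participant holds a measuring account rather than segregated assets:

```python
class Unfunded457Account:
    """Bookkeeping account for a 457 plan participant. Deferrals and hypothetical
    investment experience are tracked, but the underlying amounts remain the
    employer's property and are reachable by its general creditors."""

    def __init__(self):
        self.balance = 0.0  # a promise to pay, not segregated assets

    def defer(self, amount):
        """Record a salary deferral credited to the participant's account."""
        self.balance += amount

    def credit_investment_experience(self, rate):
        """Credit earnings or losses as if the balance had been invested per the
        plan's terms; the employer is not required to actually invest the money."""
        self.balance *= (1.0 + rate)


account = Unfunded457Account()
account.defer(7_500)                         # maximum annual deferral cited in this report
account.credit_investment_experience(-0.10)  # e.g., the 10-percent reduction in the Orange County example
print(round(account.balance, 2))             # 6750.0
```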
IRS officials told us that under current law, allowing nonqualified, unfunded deferred compensation amounts to be transferred to an IRA makes the amounts immediately taxable to the participant because any distribution, even to an IRA, results in the participant having an economic benefit in the funds and being in constructive receipt of the money. The mere promise of the employer to pay will have been fulfilled. Additionally, under the qualified plan rules, a transfer of nonqualified, unfunded plan amounts into a qualified, funded plan could disqualify the qualified plan and make funds in it immediately taxable to the participants.

If a participant cannot transfer the deferred amounts to another 457 plan or chooses not to do so, he or she must, after leaving employment, select a date to begin receiving benefits. Selecting the date may be difficult because the employee’s retirement date may be years in the future. Moreover, once selected, this date cannot be changed, except for emergencies. In contrast, separating participants in 401(k) and 403(b) plans are not required to declare a date for benefits to begin. These participants may begin collecting their benefits at any time after turning 59-1/2 years old. IRS officials said that the tax principle of constructive receipt would be compromised if participants in 457 plans were permitted to change the date previously selected for receiving benefits.

The maximum annual amount that employees may defer and employers may contribute is lower for 457 plans than for the other two plan types. For example, the maximum allowable employee deferral to a 457 plan is $7,500, a limit that is about $2,000 lower than the limits of the other two tax-deferred plans. In 1995, the maximum employee deferral to a 401(k) plan was $9,240, and deferrals to a 403(b) plan could not exceed $9,500. Although employees can defer no more than $7,500 under a 457 plan, this does not include employer contributions to another plan, usually the employers’ regular or basic pension plan. Employers sponsoring 401(k) or 403(b) plans can make annual contributions of no more than $30,000. Moreover, the tax-free deferral limit of a participant in 401(k) and 403(b) plans is reduced if the participant also defers any amounts under a 457 plan. The maximum total deferral a participant can make to a 401(k) or 403(b) plan is governed by the maximum deferral allowed under section 457 when a participant actually makes deferrals under a 457 plan. That is, total deferrals by participants contributing to both a 457 plan and one of the other two plan types cannot exceed $7,500. Thus, any deferral to a 457 plan, even if it is only $1, limits the maximum annual deferral the participant can make to a 401(k) or 403(b) plan to $7,500. Additionally, any employer contribution to a 457 plan will limit the deferral the employee can make under a 401(k) or 403(b) plan to $7,500.

When the Congress enacted IRC section 457 in 1978, it set the annual deferral limit at $7,500, an amount that exceeded the $7,000 deferral limit set for 401(k) plans in 1986. However, the 401(k) plan limit was indexed for inflation, and the 457 plan limit was not. In time, the 401(k) limit surpassed the 457 limit. Section 403(b) limits will be indexed for inflation when the 401(k) limit reaches $9,500, the current deferral limit for 403(b) plans. Over time, inflation will continue to reduce the section 457 deferral limit relative to earnings, and the maximum percentage of income participants will be able to defer will decrease.
For example, using an average annual inflation rate of 4 percent, 10 years from now the deferral limits for employee contributions in the other two plans will be $13,677, $6,177 more than the current section 457 limit. IRS officials said that the limit on deferrals and the lack of a cost-of-living adjustment, as in sections 401(k) and 403(b), could be changed by the Congress without compromising either the nature of 457 plans as nonqualified, unfunded plans or the tax principles of constructive receipt and economic benefit. Any such changes to section 457(b) would, however, cause a tax revenue loss in the future if participants took advantage of higher deferral limits.

With enactment of IRC section 457 in 1978, the Congress specifically authorized a tax-deferred, nonqualified, and unfunded compensation plan to enable employees of state and local governments to provide themselves with additional retirement income. The Congress’ action had been prompted by proposed IRS regulations that would have subjected all nonqualified, unfunded deferred compensation amounts to immediate taxation. Eight years later, in the belief that 457 and 401(k) plans offered duplicative benefits, the Congress prohibited state and local employers from establishing new 401(k) plans for their employees. As a nonqualified, unfunded deferred compensation plan, however, a section 457 plan provides significantly less protection for plan participants compared with qualified, funded plans such as 401(k) and 403(b) plans. Until recently, there was little or no evidence that greater protections were needed; however, events in Orange and Los Angeles Counties have posed possible financial risks to participants’ deferred amounts in 457 plans that suggest greater protections may be needed.

Section 457 plan participants voluntarily forego current income in order to provide for themselves in their retirement years. Yet the money that these participants forego is at risk. This is because 457 plans are nonqualified, unfunded deferred plans that require that the amounts deferred may not be set aside for the exclusive benefit of the employee but must remain the property of the employer, subject to the claims of the employer’s general creditors. To date, the use of the Orange County Investment Pool to calculate how amounts deferred under its 457 plan are to be treated as invested has resulted in financial paper losses that ultimately may affect county employees’ retirement benefits. Los Angeles County intended to use funds of its plan to meet its payroll. Under current law, potential bankruptcies and financial difficulties of other state and local governments pose similar risks to the salary deferrals that employees have made under 457 plans.

Apart from the greater risk to plan participants, as compared with other salary reduction plans, employees who participate in 457 plans are treated differently from those in 401(k) and 403(b) plans. For example, as a result of IRC provisions, the maximum annual amount that may be deferred under an eligible 457 plan is notably less than the maximum annual amount that may be contributed to 401(k) and 403(b) plans. Further, those deferred amount limits are not indexed for inflation. This is particularly noteworthy because a 457 plan is often the only deferred compensation plan available to most state and local employees to supplement their regular government pension.
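A short sketch, using the 1995 limits cited above ($7,500 for 457 plans and $9,240 for 401(k) plans), illustrates both the coordination rule and the 10-year inflation projection. The function names and the 4-percent compounding assumption mirror the discussion in this report rather than any official computation:

```python
SEC_457_LIMIT = 7_500        # not indexed for inflation
SEC_401K_LIMIT_1995 = 9_240  # indexed for inflation

def combined_employee_deferral_limit(deferred_to_457, other_plan_limit):
    """If a participant defers any amount (even $1) to a 457 plan, total employee
    deferrals across the 457 plan and a 401(k) or 403(b) plan are capped at $7,500."""
    return SEC_457_LIMIT if deferred_to_457 > 0 else other_plan_limit

def projected_limit(base_limit, inflation_rate=0.04, years=10):
    """Project an indexed limit forward with compound inflation."""
    return base_limit * (1 + inflation_rate) ** years

print(combined_employee_deferral_limit(1, SEC_401K_LIMIT_1995))  # 7500
print(round(projected_limit(SEC_401K_LIMIT_1995)))               # 13677, i.e., $6,177 above the fixed 457 limit
```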
Other disadvantages occur because of differences between nonqualified, unfunded and qualified, funded plans. For example, participants who leave employment before retirement have limited portability for their funds. Participants transferring to another state or local government may transfer account balances in a 457 plan to another 457 plan only if their new employer will accept that transfer and their old employer permits transfers. In lieu of such a transfer, participants leaving state or local government who choose to withdraw their 457 plan amounts are subject to immediate taxation.

No legal barrier exists under the principles of constructive receipt and economic benefit to raising the limits on the amounts that participants may defer or to indexing those limits for inflation. However, changes to portability would not comport with these two principles. Given the risk of financial loss associated with deferrals under 457 plans, imposing a rabbi trust requirement, where a plan sponsor could not use such amounts for its own interest, would not be successful in fully assuring the security of these funds for plan participants. Such a trust requirement would not preclude a bankruptcy court from securing such funds for the general creditors of the state or local government employer. Moreover, a trust may not be successful in barring an employer’s creditors from access to these funds, for example, if an employer experiences a temporary liquidity shortfall or financial insolvency. Thus, the existence of a rabbi trust would not have eliminated the risks posed by events in Orange County and may not have eliminated risks of nonplan use in Los Angeles County. However, under the tax theories that drive section 457 and other nonqualified, unfunded deferred compensation plans, any trust that would not subject its assets to the claims of the employer’s creditors and would provide the participant an unconditional and irrevocable right to receive the deferred amounts in it would create an immediate—not a deferred—tax liability for the employee. The complexity of IRC makes amending section 457—as proposed in H.R. 2491, for example—very difficult because of the many ways section 457 dovetails with other provisions.

SEC provided written comments on a draft of this report (see app. I). SEC found the report informative and said that it would serve as a reference for SEC staff as they consider section 457 issues. SEC said it agreed with our recommendation that the Congress amend IRC section 401(k) to permit state and local governments to establish 401(k) plans. We did not recommend that the Congress make such an amendment; rather we concluded that addressing all the problems with section 457 plans that we identified merely by amending IRC section 457 would be difficult. SEC added that if the Congress proceeds with legislation relating to public plans, it should consider statutory changes to clarify the status of 457 plans under federal securities laws. SEC said all qualified plans are now exempt from SEC regulation. Those governmental plans, as defined in IRC section 414(d), that are established for the employees’ exclusive benefit and which cannot be used by the employer for other purposes (exclusivity and impossibility requirements) are exempt. SEC staff told us that they have received numerous inquiries with respect to whether 457 plans also may be considered exempt under federal securities laws, although IRC prevents 457 plans from meeting the exclusivity and impossibility requirements. SEC also noted some technical changes that we incorporated into our report where appropriate.
IRS also provided written comments on a draft of this report (see app. II). IRS made several general comments primarily concerning technical terms. IRS pointed out the distinction between the tax-favored status of 401(k) plans and the tax-deferred status of 457 plans. We clarified these differences throughout the report. IRS emphasized the fact that 457 plan deferrals may be treated as invested in a certain way, but in fact there is no requirement to invest such amounts. As a result, if the sponsor becomes insolvent, the rights of participants in a 457 plan are no greater than other general, unsecured creditors. IRS also pointed out that most state and local governments have basic pension plans for employees and 457 plans are additional plans. We refer to them as supplemental plans to reflect this relationship. IRS suggested that we should clarify that only state and local governments can offer nonqualified plans to their rank-and-file employees, which we did. IRS clarified some features of rabbi trusts that we did not include in the report, though the major feature of a rabbi trust—the inability of the sponsor to have access to the assets therein—is the focus of chapter 3. IRS also made technical comments that we incorporated into our report where appropriate.
Prior to AIR-21, which was signed into law on April 5, 2000, general aviation airports received AIP funding through funds apportioned to states by using geographic area and population-based formulas, as well as through discretionary funds. These airports also received funds through FAA’s small airport fund. AIR-21 amended the general aviation state apportionment grant program, in part, by creating a special rule, which provides general aviation entitlement grants for any fiscal year in which the total amount of AIP funding is $3.2 billion or more. Under this rule, the amount available for state apportionments increases from 18.5 percent of total AIP funding to 20 percent when AIP’s total funding is $3.2 billion or more. From the state apportionment, FAA computes and allocates the amount available for general aviation entitlements and the remaining funds are provided for “unassigned” state apportionment. The general aviation entitlement grant amount for any one airport represents one-fifth of the estimate of that airport’s 5-year costs for its needs, as listed in the most recently published NPIAS, up to an annual maximum of $150,000. After the aggregate amount of general aviation entitlements has been determined, the remainder is provided for the same type of airports within a state on an unassigned basis, the allocation of which is determined by a state’s area and population relative to all other states.

To be eligible for a general aviation entitlement grant, an airport must be listed and have identified needs in the most recently published NPIAS; therefore, an airport’s listed needs largely determine the size of an airport’s annual grant. However, funding is not limited to the projects listed in the most recent NPIAS. The 1998-2002 NPIAS provided the basis for fiscal year 2001 and fiscal year 2002 grants. The 2001-2005 NPIAS, published in August 2002, provides the basis for fiscal year 2003 grants. A general aviation entitlement grant provides funding for 90 percent of an eligible project’s total costs; the airport must finance the remaining 10 percent, although many states pay a share of this local matching requirement. FAA’s regional and district offices work with state aviation officials and sponsors to find appropriate uses for these funds. Grant funds can be used on most airfield capital projects, such as runway, taxiway, and apron construction but generally not for terminals, hangars, and nonaviation development, such as parking lots. Some airfield maintenance and project planning costs are also allowed. Accepting a grant not only requires airport officials to pledge to continue operations and maintenance for 20 years but also precludes the airport from granting exclusive rights to those providing aeronautical services and from allowing any activity that could interfere with its use as a general aviation airport.

The number of general aviation airports that were apportioned general aviation entitlement funds is expected to increase from 2,100 in fiscal year 2001 to 2,493 in fiscal year 2003, as shown in figure 1. The expected 19 percent increase reflects the fact that more airports identified capital needs in the most recent NPIAS, which serves as the basis for fiscal year 2003 grants. FAA officials explained that before the NPIAS served as a basis for calculating entitlement grants, some FAA officials, sponsors, and state aviation officials did not always give high priority to keeping the general aviation portion of the NPIAS up to date.
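The funding rules described above can be summarized in a brief sketch. It is illustrative only and uses our own function names; the $3.2 billion trigger, the 18.5 and 20 percent apportionment shares, the one-fifth-of-NPIAS-needs calculation, the $150,000 annual cap, and the 90-percent federal share are as stated in this report:

```python
def state_apportionment_share(total_aip_funding):
    """Share of total AIP funding available for state apportionments."""
    return 0.20 if total_aip_funding >= 3.2e9 else 0.185

def annual_ga_entitlement(five_year_npias_needs, cap=150_000):
    """One-fifth of the airport's 5-year NPIAS development estimate, capped at $150,000."""
    return min(five_year_npias_needs / 5, cap)

def federal_share(eligible_project_cost):
    """The grant covers 90 percent of eligible costs; the sponsor finances the rest."""
    return 0.90 * eligible_project_cost

print(state_apportionment_share(3.3e9))   # 0.2
print(annual_ga_entitlement(600_000))     # 120000.0
print(annual_ga_entitlement(1_200_000))   # 150000 (capped)
print(federal_share(100_000))             # 90000.0
```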
FAA officials added that, as a result, the NPIAS used to calculate the fiscal year 2001 and 2002 general aviation entitlements might have understated airport development needs for these airports. In fiscal years 2001 and 2002, entitlement grants for general aviation airports were available because AIP funding levels were at least $3.2 billion. As the number of eligible airports or the value of development identified in the NPIAS for these airports increases, the funding for these grants also increases. However, because the amount available for “unassigned” state apportionment grants is determined by subtracting the total general aviation entitlements from the total state apportionment, this increase could result in a corresponding decrease in the AIP funding available in that year for unassigned state apportionment grants. FAA estimates that general aviation entitlement grant funding will rise by about $70 million, from $271 million in fiscal year 2002 to about $341 million for fiscal year 2003, as shown in figure 2.

Over half of general aviation airports were apportioned the maximum amount of funding. For fiscal year 2001, of the 2,100 airports that were apportioned these entitlements, 71 percent were eligible for the maximum amount of $150,000. With the publication of the new NPIAS in August 2002, 83 percent of the 2,493 eligible airports were apportioned the maximum for fiscal year 2003.

Working in collaboration with FAA’s regional or airport district offices, general aviation airports identify projects that will be funded with entitlement grants. These projects are listed in FAA’s Airports Capital Improvement Plan (ACIP), which includes only those projects that FAA has identified as candidates for AIP funding. After FAA has certified that the application materials are in order and all relevant AIP statutory, regulatory, and policy requirements have been satisfied, FAA then sends a grant offer to the airport sponsor or the state aviation agency representing the airport sponsor. The flowchart in figure 3 illustrates this process.

When an airport elects not to accept its general aviation entitlement grant funds, the funds remain available to that airport for up to 3 years; funds still unused at that point revert to AIP’s discretionary fund to be awarded by FAA to another airport, as provided by statute. Therefore, in the third year that an airport has entitlement grant funds available, it could have as much as $450,000 available for a grant. In addition, an airport can use part of its general aviation entitlement grant in the first year and carry over the remainder for use later. For example, an airport might have a general aviation entitlement grant of $140,000, but the only AIP-eligible project it can implement during the fiscal year might require just $80,000 in AIP funds. FAA could issue the grant for $80,000 that fiscal year and include the remaining $60,000 of the airport’s available funds in another grant for that airport at any time within the 3 years after the grant was first made available.

For general aviation airports in the nine block grant states, the acceptance process for these entitlement grants works differently. Each block grant state is apportioned a lump sum equal to the total of these grants for airports in that state plus total unassigned state apportionment funds. FAA has distributed all general aviation entitlement grant funds to these states in the same year the funds were apportioned. The block grant states are then responsible for distributing the funds to individual general aviation airports according to FAA’s requirements.
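The 3-year carryover described above can likewise be sketched. The model below is deliberately simplified—it nets amounts used against the three most recent years of entitlements rather than tracking each apportionment year separately—and the names are ours; the $140,000/$80,000 example and the $450,000 maximum come from this report:

```python
def available_entitlement(annual_entitlements, amounts_used):
    """Unused entitlement carries forward, but only the 3 most recent fiscal years count."""
    recent_entitled = sum(annual_entitlements[-3:])
    recent_used = sum(amounts_used[-3:])
    return max(recent_entitled - recent_used, 0.0)

# Report's example: a $140,000 entitlement of which only $80,000 is needed in year 1;
# the remaining $60,000 stays available for a later grant to the same airport.
print(available_entitlement([140_000], [80_000]))                     # 60000
# An airport that accepts nothing for 3 years could have up to $450,000 available.
print(available_entitlement([150_000, 150_000, 150_000], [0, 0, 0]))  # 450000
```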
According to FAA officials, states are required to offer eligible general aviation airports their entitlements in the fiscal year they are made available. If an airport does not accept the entitlement in the first year of its availability, the distribution of a general aviation entitlement grant must nonetheless be made to that airport by the end of 3 years. If an airport has not accepted the funding at the end of the 3-year period, the grant would be reduced by the amount of the funding not accepted. FAA officials explained that each block grant could be adjusted on an annual basis, but this approach is used to provide block grant states flexibility similar to the authority FAA has in managing AIP and general aviation entitlement grants. FAA officials added that the state assumes the risk if funds are used for another airport during the 3-year period. If funds have been expended at another airport and an unassigned state apportionment is not available to provide the general aviation entitlement funding to the original eligible airport, it would be necessary for the state to repay the federal funds with its own state-generated funds.

As of October 1, 2002, about 75 percent of the total fiscal year 2001 general aviation entitlement grant funds had been accepted by the airports to which they were apportioned, and about 46 percent of the total fiscal year 2002 general aviation entitlement grant funds had been accepted. The percentage of the total funding accepted by airports for both fiscal years varied widely from state to state. Also, the percentage varied by size, with the larger general aviation airports having accepted 77 percent of their fiscal year 2001 entitlement funds compared to 65 percent for the smallest airports.

As of October 1, 2002, $201 million (about 75 percent) of the $269 million in general aviation entitlement grant funding apportioned for 2001 had been accepted, as shown in figure 4. Of the $201 million accepted, $145 million (54 percent) was accepted in fiscal year 2001 and $56 million (21 percent) was accepted in fiscal year 2002. Almost $69 million of the fiscal year 2001 apportionments was carried over for possible future acceptance in fiscal year 2003. These grants were made to 1,599 (76 percent) of the 2,100 eligible airports. As shown in figure 5, airports had accepted $124 million of the $271 million in fiscal year 2002 general aviation entitlement grants (about 46 percent). Grants were made to 1,026 of the 2,108 eligible airports (49 percent). The remaining amount ($147 million) can be accepted in fiscal years 2003 or 2004.

General aviation airports in some states accepted a larger percentage of funds in both fiscal years 2001 and 2002 than in other states. For fiscal year 2001 general aviation entitlement grant funds, the percentage of funds accepted ranged from about 48 percent to 100 percent. The acceptance rate for fiscal year 2002 general aviation entitlement grant funds varied from 11 percent to 99 percent. (See app. IV for a complete list of the percentage of funds that were accepted by state for fiscal years 2001 and 2002.)

Larger general aviation airports (as measured by the number of based aircraft) have accepted more of their general aviation entitlement grant funding than the smallest airports, as shown in figure 6. Airports with more than 100 based aircraft had accepted 77 percent of their fiscal year 2001 general aviation entitlement grant funds.
Similarly, general aviation airports with between 50 and 99 based aircraft had accepted about 81 percent of their general aviation entitlement funds. In contrast, 65 percent of the general aviation entitlement grant funds for fiscal year 2001 for airports with less than 20 based aircraft had been accepted. This pattern is similar for fiscal year 2002 funding. Airports with more than 100 based aircraft had accepted about 58 percent of their fiscal year 2002 grant funds, compared with 39 percent of the grant funds for airports with less than 20 based aircraft.

According to the results of FAA’s survey, general aviation airports most often used the funds from fiscal year 2001 entitlement grants to construct landing areas (e.g., runways, taxiways, and aprons). (FAA’s Great Lakes Region and New Jersey did not respond to the survey.) As shown in figure 7, of the 1,373 total projects reported in FAA’s survey, 483 (35 percent) were designated as landing area construction projects. The next three categories of projects most frequently undertaken were as follows:

Pavement maintenance (227): This category includes the general upkeep and maintenance of paved areas on airport land, such as filling and sealing cracks, grading pavement edges, and coating pavement with protective sealants.

Navigational aid and weather equipment projects: This category includes systems such as visual navigation aids and electronic navigation and weather equipment, which help observe, detect, report, and communicate weather conditions at an airport.

Planning projects (162): This category includes the costs associated with preparing the documents that are a necessary part of developing plans to address current and future airport needs. This includes the plans required for airport development (e.g., the master plan and airport capital improvement plan) and environmental assessments as well as the additional elements or costs that are needed to complete such plans.

These four largest categories comprise over 75 percent of all projects funded with general aviation entitlement grants in fiscal year 2001.

Almost all of the 50 state and 2 territorial aviation officials and the 56 selected general aviation airport managers that we interviewed indicated that these entitlement grants are useful and help meet the needs of general aviation airports. They also told us that airports have easily met the administrative requirements for receiving these grants. Over two-thirds of the selected airport managers said that the grants provided critical funding to undertake projects at their airports. Although positive about the grants, some state officials and airport managers suggested a variety of changes. While the most frequently suggested change was to increase grant funding to better meet the cost of larger projects, some state aviation officials expressed concern that this change would correspondingly decrease the funds available for state aviation apportionments and thus hamper their ability to address statewide aviation priorities. Other frequently mentioned suggestions included extending the time frames for fund use and broadening the categories of eligible projects. Five of the 52 state aviation officials and one airport manager commented that the NPIAS is not an up-to-date list of airport needs and recommended that it not be used to distribute these entitlement grant funds.

Over two-thirds of the state aviation officials told us that FAA’s requirements for receiving these entitlement grants are easy to fulfill.
These requirements include completing required airport capital improvement and layout plans, submitting grant application forms, getting projects included in the NPIAS, and providing the 10 percent matching funds. Most general aviation airport managers we interviewed agreed with this view. FAA officials reported that they purposely simplified the grant processing paperwork requirements for the general aviation entitlement grants, knowing that many eligible airports would be first-time recipients of FAA funding. In addition, FAA officials told us that the regional and district offices conducted extraordinary outreach efforts to ensure that qualifying airports were aware of these new grants.

Almost 85 percent of the state aviation officials found entitlement grants useful because they allow general aviation airports to purchase needed equipment and undertake large projects, such as runway repairs. For example, one state aviation official told us that these grants were very important for maintaining safety and preserving runways at general aviation airports in his state. Over two-thirds of the selected general aviation airport managers said that they would not have been able to undertake or complete needed projects without these grant funds, and three-fourths of them said that the categories of projects eligible for funding include most of their capital needs. This means that they can use the funds for needed improvements, even for comparatively smaller projects such as lighting and fencing. Other state officials and airport managers added that the general aviation entitlement grants are important to the viability of general aviation airports. One state official said that the grants are helping to prevent the closure of general aviation airports, some of which provide medical access for small communities.

Increasing the maximum amount of general aviation entitlement grants was the most frequent suggestion from state aviation officials to improve the grants’ usefulness. Some general aviation airport managers we surveyed supported this view. Almost two-thirds of the state aviation officials stated that the current annual maximum of $150,000 is not adequate to complete some major projects, while about one-third of the airport managers expressed this opinion. Other suggestions to improve the grants’ usefulness included making all 3 years of funding available to airports in the first year and increasing the time frame for funding availability to beyond the current 3-year limit.

Almost half of the state officials suggested increasing the annual amount of these grants to better enable general aviation airports to meet the cost of larger capital projects. Most state officials said that because the $150,000 annual amount is not adequate to complete some major projects, some airports roll over the funding to accumulate up to $450,000 of funding over 3 years to complete such projects. However, some state aviation officials and airport managers also expressed concern that even this 3-year total might not be sufficient to complete such expensive capital projects as repairing or improving runways and taxiways. Some state aviation officials, as well as some selected airport managers, reported undertaking comparatively smaller projects, such as fencing for security, lighting, and pavement maintenance. While emphasizing the importance of the grants to general aviation airports, many selected airport managers expressed more concern about the adequacy of funding than any other issue we discussed with them.
Their most frequent suggestion to improve the grants was to increase funding amounts to better meet the cost of larger capital projects. Nine state aviation officials also suggested allowing more flexibility in the existing 3-year time frame to use the funds, which would enable airports to afford a broader range of projects. Two airport managers we interviewed also suggested increased flexibility. For example, three state aviation officials suggested making all 3 years of grant funding available to airports in the first year. One of these officials said that making the full 3-year grant amount available to airports during the first year of funding would help airports undertake critical projects earlier because they would not need to wait to accumulate sufficient funding. According to FAA officials, this suggestion would represent a significant departure from current practices for administering entitlement funds. Officials told us that while multiyear grants can be made to primary airports for multiyear projects, AIR-21 did not provide this authority for general aviation entitlement grants. Officials noted that this added flexibility could benefit some airports. Alternatively, other state aviation officials, as well as some of the selected airport managers, suggested extending the current 3-year time frame for using these funds to as much as 4 or 5 years. They expect this extension would allow them to accumulate sufficient funds to undertake a broader range of capital projects and allow some airports sufficient time to complete these projects.

While state aviation officials and selected airport managers said that increased funding and time frames could help them complete larger projects, apportionment grants are also available to general aviation airports to help them complete many of these projects. Some state aviation officials reported that general aviation airports use entitlement grant funds in combination with apportionment funds to complete such larger projects as improving, extending, or constructing runways, taxiways, and aprons. Seven of the state aviation officials expressed concern that, as funding for general aviation entitlement grants increases, funding for unassigned apportionment grants could correspondingly decrease because the amount available for unassigned apportionment is determined by deducting the general aviation entitlement grant funding from a fixed percentage of AIP. Two state aviation officials told us that the reduction in aviation apportionment funding could hamper the states’ abilities to address their aviation priorities. Some state aviation officials said that because they can better determine which projects within their states are high priorities, they are better able to distribute the funds to airports on a statewide basis. For example, a state aviation official said that because these entitlement grants have reduced funding for significant projects, the small projects general aviation airports have undertaken have had a minimal impact on that state’s aviation system. FAA officials added that while the grants have been used for worthwhile projects, less unassigned apportionment grant funding could limit a state’s ability to address its aviation priorities and provide access to the national aviation system from rural and nonmetropolitan areas. Because the entitlement funds come directly to general aviation airports, general aviation airport managers we interviewed were not concerned with the shift in funding source.
One airport manager commented that projects requested by large airports are generally assigned a higher priority by states than projects requested by small airports. This manager added that, as a result, small airports usually do not receive apportionment funds from state aviation agencies. The manager told us that general aviation entitlement grants have allowed the airport to complete projects that would not have been selected by the state for apportionment funding. However, FAA officials stated that they determine the allocation of unassigned apportionment funds, except in block grant states. FAA officials added that project type and purpose receive more consideration than airport size in allocating these funds.

While most officials said that the categories of projects eligible for entitlement grants generally covered the capital needs of general aviation airports, some of them suggested broadening the categories of eligible projects to include revenue-producing facilities. Under current program rules, revenue-producing facilities, such as hangars, terminals, and fueling stations, are not eligible for these grants. However, a few state officials told us that these facilities should be considered eligible for the grants because they would help produce the revenue necessary to allow small airports to supplement grant funding, complete needed projects, and, in some cases, become more self-sufficient and remain open. Many of the airport managers we interviewed supported this view. FAA officials indicated that expansion of eligibility warrants consideration but would require statutory changes.

Five state aviation officials suggested that FAA use a more current list of airport projects to determine the amounts of future general aviation entitlement grants. One airport manager also made this suggestion. One state official was concerned that the list of projects in FAA’s NPIAS was between 18 and 24 months old when FAA used it to calculate the entitlement grants for fiscal years 2001 and 2002. That official said that because FAA used an outdated list of airports’ needs, the grant amounts some airports received were based on already completed projects. Another official added that some of these airports then used the grant funds for low-priority projects because higher-priority projects had been completed. FAA officials disagreed with this criticism of the use of the NPIAS. They told us that the NPIAS is used only to calculate the amount of an airport’s general aviation entitlement grant. Decisions on the projects to be funded are based on the airport’s ACIP, which is kept up to date through regular consultation between FAA and the airport’s sponsor or state aviation officials. Nevertheless, FAA officials acknowledged that use of the NPIAS added complexity and confusion to calculation of general aviation entitlement grants, and indicated that a simplified method warranted consideration. Since most eligible airports receive the maximum grant amount, an alternative approach might be to establish a uniform general aviation entitlement amount for each general aviation airport listed in the NPIAS. FAA officials pointed out that, under current formulas, care would be needed in selecting the uniform grant level under this approach. Setting the level too low would limit the usefulness of the general aviation entitlement grant to individual airports, but setting the level too high would reduce the amount of unassigned apportionment funding based on state and national priorities.
We provided the Department of Transportation with a copy of the draft report for its review and comment. FAA officials agreed with information contained in this report and provided some clarifying and technical comments, which we have incorporated where appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Transportation, and the Administrator, FAA. This report is also available at no charge on GAO’s Web site at http://www.gao.gov. Please contact me or Carol Anderson-Guthrie at (202) 512-2834 if you have any questions. Individuals making key contributions to this report are listed in appendix V.

We were asked to review the general aviation entitlement grant funding that was available to eligible nonprimary airports—referred to as general aviation airports for simplicity in this report—including reliever, nonprimary commercial service, and other general aviation airports. Of the universe of 2,943 general aviation airports, we reviewed data for those that were eligible to receive these grants based on the requirements established by the Federal Aviation Administration (FAA). We reviewed data on the general aviation entitlement grant funding that was accepted by general aviation airports, both directly from FAA and through block grant states. To obtain information from block grant states, we asked block grant state aviation officials to provide information on general aviation entitlement grant obligations made by their state to individual airports. We also analyzed a survey conducted by FAA to determine the types of projects that had been undertaken with the general aviation entitlement grant funding. In addition, to determine the stakeholders’ opinions concerning general aviation entitlement grants, we designed and administered a survey of all 52 state and territory aviation officials and 56 airport management officials.

Initially, we conducted interviews with FAA and other relevant aviation industry officials to better understand the program and the scope of the issues. We gathered information on industry opinions about the general aviation entitlement program including its usefulness, its limitations, and possible changes to the program. The interviews provided an introductory view of the general aviation entitlement grant program. To establish a background context and understanding of the program’s purpose, we also conducted research on the legislation, statutes, policies, procedures, and guidelines that govern the implementation and operation of the general aviation entitlement grant program.

To determine the amount of general aviation entitlement grant funds that were accepted by airports, we received data from FAA’s Airport Improvement Program (AIP) Grants Management Database, which contains all general aviation entitlement grants issued directly by FAA to airports and state sponsorship programs. This database also includes grants issued through October 1, 2002, using fiscal year 2001 and fiscal year 2002 general aviation entitlements. The database includes grants issued directly to airports and grants issued under state sponsorship outside of the block grant program. The database includes data on the aggregate amount of general aviation entitlement grant funding included in state block grants. However, FAA’s national database does not track distribution of block grant funds by the states, including general aviation entitlement grants to individual airports in the nine block grant states.
After comparing these data with the 1998-2002 National Plan of Integrated Airport Systems (NPIAS), we found some discrepancies. In coordination with FAA officials, we resolved these discrepancies and they are reflected in our report. We classified all grants accepted through state sponsorship program grants as having been accepted directly by individual airports. We then deleted all state sponsorship program grants from the data. The total block grants accepted were removed to avoid overstating the amount that individual airports accepted. To obtain grant acceptance data for airports receiving grants through block grant states, we asked state aviation officials in those states to provide acceptance data as of October 1, 2002, on all the airports that were eligible for general aviation entitlement grants. This information was self-reported, and we did not verify the information provided by the states. After we discussed our methodology with FAA officials and reached agreement that the data from FAA and the individual block grant states were comparable, we aggregated the datasets. Funds not accepted were classified as “carry over/not yet accepted.” We then merged these data with the 1998-2002 NPIAS file and categorized the results by airport size, as measured by the number of based aircraft and state, in order to identify possible trends in the data. Using FAA’s guidance, we stratified airports into four size categories: less than 20 based aircraft, 20-49 based aircraft, 50-99 based aircraft, and 100 or more based aircraft. In order to ascertain the projects that have been undertaken by general aviation airports, we used FAA’s survey of Airport Improvement Program FY2001 Non-Primary Entitlements. FAA surveyed its nine regions and the nine block grant states to gather data for several items, including the projects for which airports used the general aviation entitlement grant funds. FAA created 11 categories for the projects. We reviewed the results of its survey and met with FAA officials to discuss our interpretation. We did not verify the information provided by FAA. However, we raised questions about the overall design of FAA’s data collection effort and the specific steps carried out to help ensure the quality of the collected data. We determined that the data quality was sufficient for the purpose of our review. To assess the usefulness of the general aviation entitlement grants and to identify the potential areas of change, we surveyed state aviation and airport management officials. We designed a computer-assisted telephone interview (CATI) instrument to collect their responses. We conducted a census of state aviation officials (50 states and 2 territories) who oversee the operations of airports and head their respective state aviation programs. All 52 of these aviation officials provided their opinions on the experiences of airports in their respective states and territories. To compare state and airport-level responses about the program, we also obtained the perspectives of 56 general aviation airport management officials to acquire their responses to the same questions and direct illustrations of their experience with the program. The small sample size was not designed to be projectable to the population of general aviation airports. However, measures were taken to help ensure that the airports chosen systematically cover and broadly represent the substantive criteria. Our selection approach was completed in three steps. 
First, we identified airports that accepted entitlement grant funds in either fiscal year 2001 or 2002. Second, we stratified these airports according to: size—the number of based aircraft—measured as small (less than 20 based aircraft), medium 1 (20-49), medium 2 (50-99), and large (100 or more); FAA regional location; and block grant status (whether the airport is located in a block grant state). We sought guidance from FAA in determining the airport size categories. The stratification process produced 56 exclusive groups (airports in block grant states are not located in each FAA region). Then, in the third step, we randomly selected one airport from each of the groups, which were joined to form the final sample of 56 airports. (An illustrative sketch of this stratified selection approach appears after the program background below.) All of the airport management officials provided responses about their experiences with the program.

The CATI consisted of closed- and open-ended questions that asked about an airport’s experiences with and its ability to meet the requirements of the general aviation entitlement grant. Descriptive statistical analyses of closed-ended survey data were performed to determine response patterns. Analyses of open-ended responses were conducted to detect broad themes and topics within those themes, summarizing state aviation and airport management responses on program improvements and projects undertaken using entitlement grants. We conducted our review from June 2002 through February 2003 in Easton, Maryland; Odenton, Maryland; Greenville, Texas; Mesquite, Texas; and Washington, D.C., in accordance with generally accepted government auditing standards. The collection of state aviation and airport management interview data was completed in September and November 2002, respectively.

The Airport Improvement Program (AIP), which was created in 1982, is funded by the Airport and Airway Trust Fund. AIP distributes funds to airports through grants in a manner that reflects several national priorities and objectives, including financing small state and community airports. The distribution system for AIP grants is complex. It is based on a combination of formula grants (also referred to as apportionments) and discretionary funds. Formula funds are apportioned by formula or percentage and may be used for any eligible airport development or planning project. Through the AIP, the Federal Aviation Administration (FAA) apportions formula grants automatically to specific airports or types of airports, including primary airports, cargo service airports, general aviation airports, and Alaska airports. In administering AIP, FAA must comply with various statutory provisions, formulas, and set-asides established by law, which specify how AIP grant funds are to be distributed among airports. Each year, FAA uses the statutory formulas to determine how much in apportionment funds are to be made available to each airport or state. After determining these amounts, FAA informs each airport or state of the amount of funding available for that year. However, these funds do not automatically go to an airport’s sponsor. To receive the funds it is entitled to, an airport or state has to submit a valid grant application to FAA. In addition, under the act, individual airports and states do not have to use these funds in the year they are made available. The act gives most airports and states up to 3 years to use their apportionment funds. This carryover allows airports to accumulate a larger amount to pay for more costly projects.
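To make the three-step, one-airport-per-stratum selection described above more concrete, the sketch below stratifies a set of airports by based-aircraft size category, FAA region, and block grant status and then draws one airport at random from each non-empty stratum. It is a minimal illustration only; the field names and sample records are hypothetical and are not GAO's actual data or tools.

```python
import random

def size_category(based_aircraft):
    """Map a based-aircraft count to the four size strata used in the report."""
    if based_aircraft < 20:
        return "small (<20)"
    if based_aircraft < 50:
        return "medium 1 (20-49)"
    if based_aircraft < 100:
        return "medium 2 (50-99)"
    return "large (100+)"

def select_one_per_stratum(airports, seed=1):
    """Group airports by (size category, FAA region, block grant status) and
    randomly choose one airport from each non-empty stratum."""
    random.seed(seed)
    strata = {}
    for airport in airports:
        key = (size_category(airport["based_aircraft"]),
               airport["faa_region"],
               airport["in_block_grant_state"])
        strata.setdefault(key, []).append(airport)
    return {key: random.choice(group) for key, group in strata.items()}

# Hypothetical records; a real run would start from the airports that
# accepted entitlement grants in fiscal year 2001 or 2002.
airports = [
    {"id": "A1", "based_aircraft": 12, "faa_region": "Southwest", "in_block_grant_state": False},
    {"id": "A2", "based_aircraft": 75, "faa_region": "Southwest", "in_block_grant_state": False},
    {"id": "A3", "based_aircraft": 15, "faa_region": "Eastern", "in_block_grant_state": True},
]

sample = select_one_per_stratum(airports)
print(len(sample), "airports selected, one per non-empty stratum")
```

In the review itself, a draw of this kind over the 56 exclusive groups produced the final sample of 56 airports.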
Once the apportionments have been determined, the remaining amount of AIP funds is deposited in that program’s discretionary fund, which consists of set-asides that are established by statute and other distributions. AIP funds are usually limited to planning, designing, and constructing projects that improve aircraft operations, such as runways, taxiways, aprons, and land purchases, as well as to the purchase of security, safety, and emergency equipment. AIP funds are also available to plan for and implement programs to mitigate aircraft noise in the vicinity of airports. However, projects related to commercial revenue-generating portions of terminals, such as shop concessions, commercial maintenance hangars, fuel farms, and parking garages, as well as off-airport road construction, are generally not eligible for these grants.

Outside the national system are many landing strips and smaller airports, most with fewer than 10 based aircraft. The general aviation airports within the national system generally have at least 10 aircraft based at their locations and fewer than 2,500 scheduled enplanements. General aviation airports (which include reliever airports) and nonprimary commercial service airports may be eligible to receive general aviation entitlement grant funding. Primary airports, which are commercial service airports with more than 10,000 enplanements annually, are categorized by their share of total passenger enplanements:
Large hubs (31): 1 percent or more of all enplanements.
Medium hubs (35): at least 0.25 percent, but less than 1 percent of all enplanements.
Small hubs (71): at least 0.05 percent, but less than 0.25 percent of all enplanements.
Nonhubs (282): more than 10,000 enplanements, but less than 0.05 percent of all enplanements.

In addition to those named above, Jon Altshul, Nancy Boardman, Jeanine Brady, Kevin Jackson, Bert Japikse, Michael Mgebroff, Jeff Miller, George Quinn, and Don Watson made key contributions to this report. | In 2000, Congress created general aviation entitlement grants to provide funding up to $150,000 per fiscal year to individual general aviation airports. These grants fund capital improvements and repair projects.
GAO was asked to (1) assess the amount of funding airports used, (2) identify the types of projects undertaken, and (3) convey suggestions made by interested parties to improve the grants in preparation for the reauthorization of the legislation in 2003. By the end of fiscal year 2002, most fiscal year 2001 general aviation entitlement grant funds had been accepted by the airports to which they were apportioned. However, less than half of the fiscal year 2002 entitlement grant funds had been accepted by those airports at the end of fiscal year 2002. The remaining portions of unused entitlement funds for the 2 fiscal years were carried over to use in the following years--up to 3 years. In both fiscal years, the percentage of entitlement grant funds accepted varied widely by state. Larger general aviation airports accepted a greater percentage of their entitlement grants than smaller airports for both fiscal years. In fiscal year 2001, general aviation airports used these funds primarily to undertake landing area construction projects--runways, taxiways, and aprons. In addition, the airports used the funds to undertake pavement maintenance; airfield lighting, weather observation systems, and navigational aids; and planning projects. These four categories constituted over 75 percent of all projects undertaken with these funds. While most state aviation officials, selected airport managers, and FAA officials we spoke with indicated these entitlement grants were useful, they also suggested some changes. The most common suggestion concerned the amount of funding. Several state aviation officials and some selected airport managers indicated that the $150,000 annual maximum amount per airport was not adequate to complete projects. However, state officials expressed concerns that increasing the entitlement amount could hinder the states' ability to address their own aviation priorities because any increase would proportionately decrease the states' apportionments. The majority of the selected airport managers indicated that, without these grants, their airports would have been unable to undertake the projects. Other suggestions concerned increasing the amount of time to use the grants, broadening the categories of eligible projects, and using an alternative to FAA's National Plan of Integrated Airport Systems as the basis for funding eligible projects. |
TANF was designed to give states the flexibility to create programs that meet four broad goals: Providing assistance to needy families so that children may be cared for in their own homes or in the homes of relatives; Ending the dependence of needy parents on government benefits by promoting job preparation, work, and marriage; Preventing and reducing the incidence of out-of-wedlock pregnancies; and Encouraging the formation and maintenance of two-parent families. The amount of the TANF block grant was determined based on pre- PRWORA spending on (1) AFDC, a program that provided monthly cash payments to needy families; (2) Job Opportunities and Basic Skills (JOBS), a program to prepare AFDC recipients for employment; and (3) Emergency Assistance, a program designed to aid needy families in crisis situations. To meet the MOE requirement, states must spend 80 percent or 75 percent of their pre-PRWORA share of spending on AFDC, JOBS, Emergency Assistance, and AFDC-related child care programs. States have considerable flexibility in what they spend TANF and MOE funds on. In addition to spending on cash benefits—that is, monthly cash assistance payments to families to meet their ongoing basic needs—states can spend TANF/MOE funds on services for cash assistance recipients or other low-income families. States are allowed to transfer up to 30 percent of their TANF funds to the Child Care and Development Fund (CCDF) and the Social Services Block Grant (SSBG). TANF regulations require states to report to HHS data on families receiving “assistance” under the TANF program. These reported families are referred to as the TANF or welfare caseload. Typically, these families are receiving monthly cash payments. Therefore, families who receive TANF/MOE-funded services but do not receive monthly cash payments are typically not included in the reported TANF caseload. The states’ implementation of more work-based programs, undertaken under conditions of strong economic growth, has been accompanied by a dramatic decline in the number of families receiving cash welfare. The number of families receiving welfare remained steady during the 1980s and then rose rapidly during the early 1990s to a peak in March 1994. The caseload decline began in 1994 and accelerated after passage of PRWORA, with a 53 percent decline in the number of families receiving cash welfare—from 4.4 million families in August 1996 to 2.1 million families in July 2001. Caseload reductions occurred in all states, ranging from 16 percent in Indiana to 89 percent in Wyoming. Between July and September 2001, however, the nationwide welfare caseload increased 1 percent. Between July and December 2001, the welfare caseload in many states increased, with a 5 percent average increase across 18 of 23 surveyed states. While economic changes and state welfare reforms have been cited as key factors to explain nationwide caseload changes, there is no consensus about the extent to which each factor has contributed to these changes. Given the large decline in the number of families receiving cash assistance in recent years, attention has been focused on learning how these families are faring. Studies show that most adults who left welfare had at least some attachment to the workforce. Our 1999 review on the status of former welfare recipients based on studies from seven states found that from 61 to 71 percent of adults were employed at the time they were surveyed. 
Studies measuring whether an adult in a family had ever been employed since leaving welfare reported employment rates from 63 to 87 percent. A 2001 review of state and local-level studies conducted by the Congressional Research Service (CRS) shows similar patterns. In addition, the Urban Institute, using data from its 1999 National Survey of America’s Families (NSAF)—a nationally representative sample—finds that 64 percent of former recipients who did not return to TANF reported that they were working at the time of follow-up, while another 11 percent reported working at some point since leaving welfare. Studies also show that not all families who leave welfare remain off the rolls. For example, the Urban Institute study using 1999 NSAF data reported that 22 percent of those who had left the rolls were again receiving benefits at time of the survey follow-up. Although most adults in former welfare families were employed at some time after leaving welfare, many worked at low-wage jobs. Of those who left welfare, former recipients in the seven states we reviewed had average quarterly earnings that generally ranged from $2,378 to $3,786 or from $9,512 to $15,144 annually. This estimated annual earned income is greater than the maximum annual amount of cash assistance and food stamps that a three-person family with no other income could have received in these states. However, if these earnings were the only source of income for families after they leave welfare, many of them would remain below the federal poverty level. On the basis of additional information from the NSAF, a 2001 Urban Institute study estimated that about 41 percent of those who left the welfare rolls were below the federal poverty level, after including an estimate of the earned income tax credit and the cash value of food stamps and subtracting an estimate of payroll taxes. While some former welfare recipients are no longer poor, others can be considered among the working poor. Nationwide, about 16 percent of the nonelderly population lives in families in which adults work, on average, at least half of the time yet have incomes below 200 percent of the federal poverty level. Prior to welfare reform, states focused their welfare spending on providing monthly cash payments. However, since welfare reform, states are spending a smaller proportion of welfare dollars on monthly cash payments and a larger share of welfare funds on services. Rather than emphasizing income maintenance among welfare families, under TANF, states are focusing their welfare spending on work support services that help both welfare families and other low-income families find and maintain employment. In addition to using welfare dollars to support work, the flexibility of TANF also allows states to use these funds to provide other services designed to promote self-sufficiency among low-income families. As shown in figure 1, in fiscal year 1995, spending on AFDC—a program that primarily provided monthly cash payments—totaled 71 percent of welfare spending. In contrast, in fiscal year 2000, spending on cash assistance totaled only 43 percent of welfare spending. During that same period, the percent of total welfare dollars spent on other benefits and services increased from 18 percent to 48 percent. Overall, welfare spending declined from fiscal year 1995 to fiscal year 2000, in part because (1) states chose to leave part of their TANF block grant allotments for fiscal year 2000 as unspent reserves in the U.S. 
Treasury, as allowed under PRWORA, and (2) MOE requirements for states are only 80 percent or 75 percent of states’ pre-PRWORA share of welfare spending.

Notes to figure 1:
Note 1: Categories shown for fiscal year 2000 but not for fiscal year 1995 (such as tax credits) could have existed in fiscal year 1995 but been paid for with nonwelfare dollars not included in this chart.
Note 2: The chart does not include the $8,625,779,575 (36%) of available TANF funding that was left unspent at the end of fiscal year 2000.
Note 3: TANF funds transferred to the CCDF and SSBG may not have been expended in fiscal year 2000; rather, these funds may have been reserved in the CCDF and SSBG for future use.

Also indicative of the shift from cash to service spending is that in fiscal year 1995, no state spent more than 50 percent of its welfare dollars on services or benefits other than monthly cash payments, compared to fiscal year 2000 when 26 states used more than 50 percent of their TANF/MOE expenditures for services. Nationwide, child care was the noncash service for which the greatest proportion of TANF/MOE funds was used. Overall, in fiscal year 2000, states spent 19.2 percent of their TANF/MOE funds on child care. Child care was also the welfare service category on which the greatest proportion of TANF/MOE funds was spent in 32 states.

Unlike AFDC, which focused on income maintenance for welfare families, federal and state welfare policies under TANF have focused on helping welfare families secure and maintain employment. To achieve this objective, states have expanded and intensified their provision of work support services. Officials in all five of the states we visited said their states are providing employment services to more welfare families under their current TANF programs than they were under pre-welfare reform employment programs. The types of work-support services that many states provide for their welfare recipients include job search, job placement, and job readiness services; intensive case management services to assess individual clients’ barriers to work and provide referrals for support services aimed at removing those barriers; and services to help clients obtain and maintain employment, including subsidized child care, transportation, and short-term loans for work-related supplies.
Indiana uses TANF/MOE funds for child development programs and to subsidize textbook rental fees for low-income children. Texas uses TANF funds to provide high-risk parents with intensive services, beginning prior to the birth of a child, to prevent low birth-weight and child abuse and to promote school completion for teen parents. While states are using TANF/MOE dollars to provide services to many families who do not receive monthly cash assistance payments, these families are not included in the reported TANF caseload, and the actual number of these families is unknown. Based on our survey of 25 states, we estimate that at least 46 percent more families than are in the reported TANF caseload are receiving TANF/MOE-funded services. Data available from most states give an incomplete picture of the number of families served with TANF/MOE dollars, and state officials raised concerns about the possibility of additional TANF reporting requirements being imposed to provide more complete data on these families. As shown in figure 2, we found that in addition to the approximately 1.8 million families counted in the TANF caseload for 25 surveyed states, at least another approximately 830,000 families were receiving a TANF/MOE-funded service but were not included in the reported TANF caseload. These approximately 830,000 families are not included in the reported TANF assistance caseload because they do not receive monthly cash assistance payments and the services they receive do not fall under the definition of assistance in the TANF regulations. Our estimate likely understates the number of families receiving TANF/MOE-funded services that are not part of the reported TANF caseload. For most states, our estimate only takes into consideration a single TANF/MOE-funded service being provided to low-income families who are not included in the TANF caseload. Usually, this single service is child care because states have extensive data on child care, and because child care is often the TANF/MOE-funded service that serves the most families not receiving cash assistance. Our estimate does not take into consideration many of the services offered by states to low-income families who are not in the TANF caseload because the states could not provide the type of data on those services that we needed to include them in our estimate. For additional information on how we developed our estimate and on data obtained from states, see appendixes I and II. Many of the families included in the counts of “other low-income families” in figure 2 are receiving a service that is only partially funded with TANF/MOE dollars. This is because states often mix TANF/MOE funds with funds from other sources to provide a single service. Although TANF/MOE dollars may not have paid for 100 percent of the cost of providing a service, the TANF/MOE portion of the cost can be significant. For example, for states included in our review, the TANF/MOE portion of monthly child care subsidies averaged approximately $266 per family out of a total average subsidy of $499 per family. The average child care subsidy per month per family compares to an average cash benefit per month per family of $407. Two of the 25 states we surveyed—Indiana and Wisconsin—had more comprehensive data than could be provided by other states on the number of low-income recipients being served with TANF/MOE dollars. 
Indiana and Wisconsin had these data because they have information systems that can sort through recipients of subsidized child care and other TANF/MOE-funded services to produce one unduplicated count of recipients across several services. As shown in figure 3, Indiana and Wisconsin found that at least 100 percent more families than are in the states’ reported TANF caseloads received TANF/MOE-funded services. The data that are available from most states we surveyed give an incomplete picture of the number of families being served with TANF/MOE dollars. TANF reporting requirements have focused on families who are receiving monthly cash assistance, that is, families in the TANF caseload. Therefore, most states we surveyed have not developed data on families receiving TANF/MOE-funded services who are not in the TANF caseload. During our review, some state officials raised concerns about the possibility of additional TANF reporting requirements being imposed on states to collect information on families not included in the TANF caseload. These concerns included that (1) states lack the information systems that would be needed to fulfill additional requirements, (2) fulfilling additional requirements will increase administrative costs, (3) additional data collection requirements could deter states and service providers from offering services because they would not want the administrative burden associated with them, and (4) requiring all service recipients to provide personal identifying information for every service may deter some people from accessing services because of the stigma associated with welfare. Since the Congress passed welfare reform legislation in 1996, states have taken steps to implement a work-based, temporary assistance program for needy families. As cash assistance caseloads declined in recent years, freeing up resources for other uses, states used some of these funds to involve increasing numbers of welfare families in welfare-to-work activities and to provide services to other low-income families in keeping with the goals of TANF. The increased emphasis on work support and other services for recipients of cash assistance and those not receiving cash assistance represents a significant departure from previous welfare policy that focused on providing monthly cash payments. While the goals and target populations of welfare spending have changed, the key measure of the number of people served remains focused solely on families receiving monthly cash assistance. Although this measure provides important information for administrators and policymakers, it does not provide a complete picture of the number of people receiving benefits or services funded at least in part with TANF/MOE funds. While a more complete accounting of people receiving services could be helpful to understanding how states are using TANF/MOE dollars, requiring states to provide a more complete accounting raises concerns from state officials, including concerns about creating a reporting burden and discouraging people from accessing services. Mr. Chairman, this concludes my prepared statement. I will be happy to respond to any questions you or other Members of the Committee may have. For future contacts regarding this testimony, please call Cynthia M. Fagnoni at (202) 512-7215 or Gale Harris at (202) 512-7235. Individuals making key contributions to this testimony included Kathy Peyman, Kristy Brown, and Rachel Weber. 
To be included in our estimate of the number of low-income families receiving TANF/MOE-funded services who were not in the TANF caseload, a service or the data on the service had to meet each of the following criteria: Service had to be funded with at least 30 percent TANF/MOE dollars—If a service was funded with at least 30 percent TANF/MOE dollars (and the other criteria were met for our estimate), we included all service recipients not receiving monthly cash payments. Data could distinguish between cash and non-cash families—We only included counts of families who were not receiving monthly cash assistance payments and were not on the TANF caseload. Data represented an unduplicated count of recipients—If counts for different services could not be combined without ensuring that families receiving more than one service were only counted once, we used the count for the largest single service. If a state had information systems that could sort through recipients of various services and develop an unduplicated count of recipients across those services, we used that count for our estimate. Other aspects of our estimate include the following: Number of families—We used data on the average number of children per family receiving subsidized child care in each state to convert data on child care recipients into estimates of the number of families receiving subsidized child care. When services were determined to have only adult recipients, data for these services were treated as family counts. Time period—We used the most recent available data on service recipients from each state. These were either for a month in 2001 or a monthly average for 2001. For our comparison with TANF caseload, we used the TANF caseload count for the same time period covered by the data on service recipients. The surveyed states varied in their ability to provide data on low-income families receiving TANF/MOE-funded services. States were able to provide these data for families receiving subsidized child care. However, only 11 states were able to provide these data for at least one TANF/MOE-funded service other than child care. Figure 4 shows the data we obtained from states on child care. To show how the number of these families compares to the TANF caseload, each state’s count is shown as a percentage of the state’s TANF caseload. Although officials from all surveyed states said the states were providing TANF/MOE-funded services other than child care to low-income families who are not in the TANF caseload, they usually did not have data on the number of these families. Only 11 states were able to provide data on at least one service other than child care. Figure 5 shows the data we obtained from states. To show how the number of these families compares to the TANF caseload, each state’s count is shown as a percentage of the state’s TANF caseload. Table 2 shows the services included for each state in figure 5. | The Temporary Assistance for Needy Families (TANF) block grant makes $16.5 billion available to states each year, regardless of changes in the number of people receiving benefits. To qualify for their full TANF allotments, states must spend a certain amount of state money, referred to as maintenance-of-effort funds. As states implemented work-focused reforms during the strong economy of the 1990s, welfare caseloads dropped by more than 50 percent. 
GAO found that most former welfare recipients were employed at some point after leaving welfare, typically with earnings that did not raise them above the poverty level. Under welfare reform, spending shifted from monthly cash payments to services, such as child care and transportation. This shift reflects two key features of reform. First, many states have increased spending to engage more welfare families in work-related activities and to provide more intensive services. Second, many states have increased their efforts to provide services to low-income families not receiving welfare. Services for these families include child care, case management, and job retention and advancement services for families who have recently left welfare for employment as well as other low-income working families. Although states have the flexibility under TANF to use their federal and state welfare-related funds to provide services to families not receiving monthly cash assistance, these families are not reflected in caseload data reported to the Department of Health and Human Services. As a result, caseload data do not provide a complete picture of the number of families receiving benefits and services through TANF. |
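As an illustration of the estimation approach described in the appendix above (convert counts of children in subsidized child care into family counts, keep only families not receiving monthly cash assistance, fall back to the largest single service when no unduplicated count across services exists, and compare the result to the reported caseload), the sketch below uses invented figures. None of the numbers or service names correspond to any actual state.

```python
def families_from_children(child_recipients, avg_children_per_family):
    """Convert a count of children receiving subsidized child care into an
    estimated count of families."""
    return child_recipients / avg_children_per_family

def estimate_noncash_families(per_service_family_counts, unduplicated_total=None):
    """Estimate families served with TANF/MOE funds but not in the reported
    caseload. Counts must already exclude families receiving monthly cash
    assistance. Without an unduplicated count across services, the largest
    single service is used instead of a sum, to avoid double counting."""
    if unduplicated_total is not None:
        return unduplicated_total
    return max(per_service_family_counts.values())

# Hypothetical state: 60,000 children in subsidized child care, an average of
# 1.8 children per family, and 5,000 families in a job retention service.
child_care_families = families_from_children(60_000, 1.8)   # about 33,300 families
counts = {"child care": child_care_families, "job retention": 5_000}

noncash_families = estimate_noncash_families(counts)         # no unduplicated count available
reported_caseload = 70_000
print(f"{noncash_families:,.0f} families beyond the caseload "
      f"({noncash_families / reported_caseload:.0%} of the reported caseload)")
```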
State accountability systems under ESSA include four key components: (1) determine long-term goals, (2) develop performance indicators, (3) differentiate schools, and (4) identify and assist low performers (see fig. 1). ESSA requires states to submit state plans to the Secretary of Education to receive Title I funds. These funds support schools and districts with high concentrations of students from low-income families. ESSA requires that states develop these plans with “timely and meaningful consultation” with a variety of stakeholders, and also coordinate the plans with certain other federal programs. Education has developed a state plan template that states can use when formulating their consolidated state plans, as well as procedures for submitting these plans. ESSA requires that state plans be peer reviewed and that the Secretary of Education approve them if they meet the requirements in the law. As of May 2017, 16 states and the District of Columbia had submitted their plans to Education for review; the remaining plans are due by September 18, 2017, according to Education’s guidance. Both states we visited as part of our review intend to submit their plans by the September deadline.

Representatives of all nine national stakeholder groups we spoke with saw ESSA’s accountability provisions as somewhat flexible, with most indicating that ESSA strikes a good balance between flexibility and requirements. One stakeholder said, for example, that ESSA “threads the needle very well” between giving states flexibility in designing their accountability systems and placing requirements on states to help ensure that all children have an opportunity to get a good education. Most stakeholders also mentioned ESSA provisions related to developing performance indicators as an example of flexibility. One stakeholder, for example, saw these provisions as flexible because they allow states to define the exact indicators they will use, including indicators that measure student growth in addition to student proficiency when assessing academic performance.

Representatives of four national stakeholder groups that have worked directly with states to help them develop and revise their accountability systems told us that the extent to which states are revising their accountability systems varies because some states are satisfied with their current systems and others are using the flexibilities in the law to make significant overhauls. According to representatives of one stakeholder group, for example, many states had already begun revising their accountability systems as a result of waivers Education granted under the previous reauthorization, the No Child Left Behind Act of 2001 (NCLBA). They further said that ESSA is generally flexible enough for states to continue down the path they started in implementing their NCLBA waivers. In addition, representatives of several stakeholder groups mentioned that for states that see their current accountability systems as lacking in some way, or whose consultation with state stakeholders has pointed to the need for significant change, ESSA provides room to consider innovative revisions.
Ohio and California, the two states we visited, illustrate how different states are using the flexibilities in ESSA to develop accountability systems that are tailored to meet state needs as well as ESSA requirements for each of the four key components of state accountability systems: determine long-term goals, develop performance indicators, differentiate schools, and identify and assist low performers. (See sidebars for summaries of ESSA requirements for these components.)

Highlights of Selected ESSA Requirements: Long-Term Goals
ESSA requires states to design and establish ambitious long-term goals, including measurements of interim progress toward meeting them. For example, states are to set goals for all students, and separately for each subgroup of students, for improved academic achievement and high school graduation rates, among other things. Student subgroups include economically disadvantaged students, students from major racial and ethnic groups, children with disabilities, and English learners.

Ohio officials told us that they chose a 10-year timeline for meeting their long-term goals to help address stakeholder concerns about providing schools and districts sufficient time to meet the new goals. For example, one of Ohio’s proposed goals is that at least 80 percent of students score proficient or higher on Ohio’s statewide assessments in English Language Arts and math within 10 years. Meeting this goal may be easier for some schools and groups of students than others, as some are further away from the goal than others. To close this “achievement gap,” the state plans to set its proficiency goals for each student subgroup such that those groups furthest behind will be expected to make greater annual gains in an effort to catch up over the 10-year period. Further, in an effort to make the 10-year long-term goals achievable for lower-performing groups, the state’s draft plan proposes to set the 10-year proficiency goals for them lower than the 10-year goals for higher-performing subgroups. Ohio state officials and stakeholders told us that some stakeholders were concerned about having different goals for different subgroups: Some find the annual or long-term goals for low-performing subgroups too ambitious, and others find it problematic that certain students would be held to different standards than others. State officials said that to meet the ESSA requirement of having ambitious long-term goals, they designed their approach to significantly close the achievement gap over 10 years. At the time of our work, Ohio was still working on its final approach to address this issue.

According to California’s draft plan, California plans to achieve its goals within 5 to 7 years—a timeframe that coincides with regularly scheduled reviews of the performance indicators used in its accountability system. Unlike Ohio, California is proposing that schools and districts propose their own interim goals to close achievement gaps and that the same timeline for long-term goals (5 to 7 years) apply to all student subgroups. State officials mentioned that district interim goals must take into account the current performance of student subgroups and how far this performance is from the state’s long-term goals.

Highlights of Selected ESSA Requirements: Indicators
States are required to annually measure, for all students and for student subgroups, four “academic” indicators.
These indicators include academic achievement for all public schools, as measured by proficiency on the annual state assessments, and the four-year adjusted cohort graduation rate for public high schools, among other things. In addition, states are required to have, for all public schools, at least one statewide indicator of school quality or student success that meets certain criteria. This indicator may include measures of student and educator engagement, student access to and completion of advanced coursework, postsecondary readiness, school climate and safety, or any other indicator the state chooses that meets the requirements in the law.

Ohio officials told us that they plan to use their current indicators as the foundation for meeting ESSA’s requirements for academic indicators, and make some revisions or refinements as needed. With regard to ESSA’s required indicator of school quality or student success, state officials said they plan to include chronic absenteeism because studies show that school attendance is strongly correlated with successful student performance. Because the state already collects attendance data, the indicator also reduces the need for additional data collection. Ohio officials and stakeholders said that ESSA has prompted many substantive conversations about what to use for the school quality or student success indicator. For example, Ohio stakeholders and a school district official told us that they have concerns about using chronic absenteeism as a measure because schools and districts cannot control whether students come to school and that other indicators might be beneficial measures. State officials mentioned that in response to these concerns, Ohio’s draft plan now includes a commitment to pilot a school climate survey for potential inclusion as an indicator of school quality or student success in future years.

Although California’s draft plan proposes using its existing indicators to meet ESSA’s requirements for academic indicators, the state also plans to develop some new ones. For example, as an additional academic indicator, the state proposes to use chronic absenteeism. According to its draft plan, there is a strong correlation between strong academic performance and school attendance. For the school quality or student success indicator, California chose suspensions, with high rates indicating poor quality and failure, and low rates indicating success. State officials said that ESSA flexibilities allowed them to differentiate what was considered high and low rates of suspension by grade level (i.e., elementary, middle, and high school). They explained that this is important because it allows them to tailor the indicator for each level.

Highlights of Selected ESSA Requirements: Differentiate Schools (distinguishing between levels of performance)
States are to meaningfully differentiate, on an annual basis, all public schools based on all indicators in the state’s accountability system, including the four academic indicators, for all students and for each subgroup of students; and include differentiation of any school in which any subgroup of students is determined by the state to be consistently underperforming.

Ohio officials told us that they propose to continue to use the state’s current system of six indicators, with modifications, to assess school and student performance. Under the proposal, schools would receive a letter grade on each indicator. Some of the indicators, such as academic achievement, would measure current performance while other indicators, such as academic progress, would measure growth.
Ohio state officials told us that they also intend to roll up indicator scores into an overall letter grade for schools in 2018. They said that reporting a letter grade on each indicator provides detailed information, while an overall letter grade provides an easily understandable overview of performance. Ohio stakeholders and school district officials expressed concerns about both the use of letter grades and rolling up grades on each indicator into an overall score. They explained that words, such as meets or exceeds expectations, could more accurately communicate performance than letter grades.

California officials said they plan to distinguish performance of schools and student subgroups by using a dashboard in which school and student subgroup performance would be color-coded based on each of six state indicators. These officials said that each indicator measures current student performance as well as changes in performance over time. Unlike Ohio, California does not plan to aggregate the indicators into overall scores for schools and student subgroups. California officials told us that they chose their approach for two reasons. First, aggregating scores on indicators into an overall score can mask individual areas where a school may be struggling. In contrast, reporting individual indicators allows key distinctions to be maintained in performance across a variety of factors. Second, officials said that measuring performance in both the current year and over time on each indicator provides a more complete picture of performance.

Ohio state officials told us that their processes for identifying low-performing schools (known in Ohio as priority schools) and schools with underperforming subgroups (known in Ohio as focus schools) will be similar to the process they used under their NCLBA waiver and will include new indicators, such as chronic absenteeism as one indicator of school quality or student success. Furthermore, Ohio officials in one district discussed a requirement in ESSA that they believe will improve Ohio’s system of intervening in low-performing schools and subgroups—that states establish criteria for how schools can exit certain ESSA improvement categories. As part of meeting this requirement, these officials said that Ohio is developing benchmarks for graduation rates and student growth indicators, which they said should make it clear to districts when they can release schools from improvement categories.

Highlights of Selected ESSA Requirements: Identify and Assist Low Performers
States are to notify each school district about schools in which any subgroup of students is consistently underperforming, and ensure the district provides notification to these schools. For each school identified, the school or district is to develop and implement, in partnership with stakeholders, either a comprehensive support and improvement plan or a targeted support and improvement plan, as applicable, to improve student outcomes. These plans must be informed by all state indicators and include evidence-based interventions. States are to, among other things, establish statewide exit criteria for schools identified for comprehensive support and improvement and additional targeted support.

California’s draft state plan proposes to identify low-performing schools and student subgroups based on where they fall on its dashboard of color-coded performance indicators, and lists three options for how the state may do this.
Regarding assisting low-performing schools and student subgroups, California state officials said they plan to give districts the authority to develop interventions. California officials in one district said that ESSA provides flexibility to reconsider how they provide school interventions. They said, for example, that they can now provide an intervention such as tutoring when they feel it will be most effective—before, during, or after the school day—and that this was partly because of ESSA.

Given current timelines, Education officials said that the department is focused on the review and approval process for state plans and providing assistance to states in developing their plans. Under ESSA, the Secretary of Education is responsible for establishing a peer-review process to assist in the review of state plans, and for approving state plans that meet the requirements of ESSA. Education officials told us that the peer reviewers will consider the technical, educational, and overall quality of specific portions of state plans when making their recommendations to the Secretary. According to guidance Education provided to peer reviewers, another goal is for reviewers to provide states with objective feedback on the technical, educational, and overall quality of their plans.

Education officials told us that they are developing monitoring protocols that they will pilot with eight or nine states in early 2018. These protocols are intended to guide in-depth reviews of state activities related to ESSA implementation. The officials noted that they are piloting the protocols to ensure that they have an appropriate monitoring tool to obtain information on how states are implementing ESSA requirements. Officials told us that Education used similar in-depth state reviews when developing past monitoring protocols, reviewing a select number of states each year with the goal of reviewing all states within a 3- to 4-year cycle. Given that some states have submitted their state plans earlier than others for approval, officials also noted that they will pilot the monitoring protocol in states that have progressed enough to warrant monitoring. To complement the in-depth monitoring, Education officials said they also plan to continue their past practice of maintaining regular contact with all states.

Education officials also told us they are determining whether there is a need for additional guidance to states on aspects of ESSA implementation. Education has provided assistance to states in a number of ways. For example, the department hosted webinars on the state plan template that states may choose to use, and on the peer review process. Education has also implemented a technical assistance initiative called the State Support Network to support state and district school improvement efforts under ESSA. This network aims to connect states and districts with technical assistance providers and subject matter experts to develop strategies for supporting schools. According to the network’s website, it aims to help states and districts learn from prior school improvement efforts, assess needs and assets to inform strategies, and build sustainable systems to support continuous improvement. During our review, representatives of most national stakeholder groups with whom we spoke told us that states could use guidance on a number of issues. One example of guidance that they told us states might consider useful is identification of appropriate evidence-based interventions.
As part of its ongoing assistance to states, Education has addressed this topic in a number of ways, including non-regulatory guidance, resources via the State Support Network, and case studies.

ESSA requires states and Education to report annually on specific aspects of ESSA implementation and states to submit significant changes to their plans to Education for review (see sidebar for a summary of these requirements).

Annual state reports: ESSA requires states to submit annual reports to the Secretary of Education. These annual reports must include information on student achievement based on the annual state assessments, including disaggregated results for student subgroups. The reports must also include certain information on English learners, schools identified for support and improvement, and teacher qualifications, among other things. Under Education’s current reporting procedures, states submit information for each school year the following fall. Education officials said that they plan to continue this practice, so state submissions in fall 2018 would be the first to include information based on ESSA requirements, i.e., for school year 2017-2018.

Annual report to Congress: ESSA also requires the Secretary of Education to submit an annual report to specified congressional committees that provides both national and state-level data on the information collected from the states’ reports.

Significant changes to state plans: ESSA provides that once a state plan is approved it remains in effect for the duration of the state’s participation in Title I, though it also directs states to periodically review and revise plans as necessary to reflect any changes in state strategies or programs. If a state makes any significant changes to its state plan, such as adopting new academic assessments, the state must submit a revised plan or amendment to Education for review.

On June 16, 2017, we provided a draft of this report to Education for comment. That same day, Education issued additional guidance for states on developing their state plans, including some guidance related to accountability systems. Education provided technical comments on our draft, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the U.S. Secretary of Education. In addition, the report will be available at no charge on the GAO website at http://gao.gov. If you or your staff have any questions about this report, please contact me at (617) 788-0580 or nowickij@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. In addition to the contact named above, Bill Keller (Assistant Director), Nancy Cosentino (Analyst-in-Charge), James Bennett, Deborah Bland, Mindy Bowman, Sarah Cornetto, Randolfo DeLeon, Anna Duncan, Holly Dye, Brian Egger, Sheila R. McCoy, and Monica Savoy made key contributions to this report. | Federal, state, and local governments spent about $640 billion in 2015 to educate nearly 50 million public school children in the United States. ESSA, enacted in December 2015, reauthorized the Elementary and Secondary Education Act of 1965.
To receive federal education funds for school districts with high concentrations of students from low-income families, ESSA requires states to have accountability systems that meet certain requirements, but gives states flexibility in how they design their systems. GAO was asked to review states' early experiences with ESSA. This report examines (1) selected stakeholders' and states' views of ESSA's flexibilities as states redesign accountability systems, and (2) Education's next steps in implementing ESSA. GAO interviewed representatives of nine prominent national education stakeholder groups, selected for their knowledge about state accountability systems; met with education officials in California and Ohio—states that stakeholders cited as among those offering differing approaches to developing their systems; interviewed Department of Education officials; reviewed relevant federal laws and guidance; and reviewed accountability system guidance from California and Ohio and these states' draft state plans. GAO is not making recommendations in this report. The Department of Education provided technical comments on a draft of this report, which GAO incorporated as appropriate.

According to most of the nine education stakeholder groups GAO interviewed and officials in the two states GAO visited, the Every Student Succeeds Act (ESSA) strikes a good balance between flexibility to meet state needs and ESSA requirements. Accountability systems measure student and school performance to identify and assist low performers. States are currently developing plans for accountability systems under ESSA. According to stakeholders, some states are using ESSA's flexibilities to significantly change their accountability systems while others are making more limited changes. Changes stakeholders discussed pertained mostly to four key components (see figure). GAO visited California and Ohio, and these two states reported using ESSA's flexibilities to distinguish between levels of school performance, among other things. For example, Ohio plans to assign letter grades to schools on each of six performance indicators. Under Ohio's proposal, schools will also receive overall letter grades beginning in 2018. California plans to distinguish performance with color-coded ratings on each of six state indicators. Its proposed system will not provide overall scores for schools. California officials said reporting on individual indicators will allow them to show key distinctions in performance that an overall score could mask.

Figure: Four Key Components of Accountability Systems Under the Every Student Succeeds Act

Education officials said next steps in implementing ESSA are to review and approve ESSA-required state plans and to continue providing technical assistance to states. Officials also said that they are developing monitoring protocols for in-depth reviews of states' ESSA-related activities and will pilot them in early 2018. ESSA also includes certain reporting and review requirements, for example, (1) annual state reports to Education on student and school performance; (2) annual Education reports to Congress on state reported data; and (3) approval by the Secretary of Education of significant changes to state plans. |
The Homeland Security Act of 2002 established DHS and required the agency, among other things, to build a comprehensive national incident management system comprising all levels of government and to consolidate existing federal government emergency response plans into a single, coordinated national response plan. DHS developed the National Response Framework, which identifies core capabilities necessary to ensure national preparedness, such as operational planning at the federal and state levels and emergency communications capabilities that enable emergency responders to communicate effectively with each other. States and localities provide the first response to any disaster and thus must plan and coordinate across state lines and with federal entities as well. States have developed plans and made efforts to coordinate in support of emergency communications. For example, state plans, called Statewide Communication Interoperability Plans, are intended to define the current and future direction for interoperable and emergency communications within the state.

In addition to DHS, other federal agencies play a role in supporting emergency communications during disasters. Specifically, FCC manages the use of spectrum by non-federal entities, including commercial enterprises and state and local governments, and administers policies related to 911 and E911 services. The Department of Commerce’s National Telecommunications and Information Administration (NTIA) is responsible for managing spectrum used by the federal government and can temporarily assign spectrum during an emergency to aid the response. Along with FCC, NTIA deploys personnel to support disaster response under a mission assignment from DHS. Furthermore, the First Responder Network Authority (FirstNet), an independent authority within NTIA, is in the process of planning for the deployment of a high-speed, interoperable nationwide wireless broadband network for use by federal, state, tribal, and local public safety personnel.

Congress passed PKEMRA in 2006 to address issues that arose during Hurricane Katrina, including emergency communications issues. PKEMRA contains 10 emergency communications provisions that, according to our 2008 report and updates we obtained from DHS, have all been implemented, as shown in table 1. In this report, we focus on the first three emergency communications provisions listed in the table, requirements that are related to planning and federal coordination. While all of these PKEMRA provisions have been addressed, DHS continues to meet the requirements of some provisions, for example:

Grant program coordination: DHS, in conjunction with other agencies, coordinates grant guidance across the government annually through SAFECOM’s Guidance on Emergency Communications Grants. The guidance provides grantees with directions on applying for funds to improve emergency communications and the current standards for grant award recipients. DHS developed the guidance to align with the first NECP, and it now reflects the most recent NECP.

Interoperability research and development: Since PKEMRA, DHS has conducted research and development to support emergency communications interoperability. Among other things, DHS is responsible for establishing research, development, testing, and evaluation programs for improving interoperable emergency communications.

Assessments and reports: DHS intends to issue the next biennial progress report in November 2016.
OEC has enhanced its support of state and local planning and other emergency communications activities. OEC has taken a number of steps aimed at ensuring that federal, state, local, tribal, and territorial agencies have the plans, resources, and training they need to support interoperable emergency communications. After being established in 2007, OEC focused on enhancing the interoperability and continuity of land mobile radio systems. However, OEC's scope has expanded since then to include other technologies used to communicate and share information during emergencies, including devices that have advanced telecommunications capabilities, such as broadband access. OEC has developed policy and guidance supporting emergency communications across all levels of government and various types of technologies. Table 2 describes key guidance OEC has provided to state and local entities.

In addition to developing policy and guidance, OEC has provided technical assistance in the form of training, tools, and online and on-site assistance for federal, state, local, and tribal emergency responders. According to OEC, the technical assistance is designed to support interoperable emergency communications by helping states develop and implement their statewide plans to enhance emergency communications, standard operating procedures, and communications unit training, among other things. All states responding to our survey reported receiving technical assistance provided by OEC, and almost all of those states were satisfied with the support they received from OEC. For example, in response to our survey, one state commented that OEC had provided invaluable training for the state's first responders and assistance to the state's governing authority.

According to DHS, the PKEMRA provision requiring the NECP has improved state and local emergency communications activities, including governance and planning. The NECP, first issued by DHS in 2008, served as the first national strategy aimed at improving emergency communications interoperability and provided a road map to improve emergency communications capabilities. For example, the 2008 NECP encouraged states to have standard operating procedures for specified events. To assist the states in this effort, OEC developed a toolkit that provides general guidance and tools for state communications planners in developing a plan for special events and made a variety of templates available online for states to use in developing standard operating procedures. In 2014, DHS released its second NECP, which contains the following five goals:

Governance and leadership: Enhance decision making, coordination, and planning for emergency communications through strong governance structures and leadership.

Planning and procedures: Update plans and procedures to improve emergency responder communications and readiness in a dynamic operating environment.

Training and exercises: Improve responders' ability to coordinate and communicate through training and exercise programs that use all available technologies and target gaps in emergency communications.

Operational coordination: Ensure operational effectiveness through the coordination of communications capabilities, resources, and personnel from across the whole community.

Research and development: Coordinate research, development, testing, and evaluation activities to develop innovative emergency communications capabilities that support the needs of emergency responders.
DHS has taken various actions to support states’ efforts to address these goals. For example, with respect to the first goal, DHS issued The Governance Guide for State, Local, Tribal, and Territorial Emergency Communications Officials. This guide identified challenges related to emergency communications governance, as well as best practices and recommendations to overcome these challenges. In addition, OEC completed the 911 Governance and Planning Case Study, which examined the governance, planning, and funding challenges that states are facing regarding 911 and made a number of recommendations for OEC to improve coordination. DHS has taken steps towards addressing the other NECP goals. For example, related to the planning and procedures goal, DHS has coordinated with the Department of Transportation to identify risks and mitigation strategies to enhance the continuity and operability of emergency communications. Among other things, DHS has also partnered with FirstNet to conduct an assessment of the potential cybersecurity challenges facing the public safety broadband network. According to DHS, it will provide information on additional progress on meeting the NECP goals in its biennial report to Congress scheduled to be completed in November 2016. The ECPC, the interagency collaborative group established by PKEMRA, provides a venue for coordinating federal emergency communications efforts. The ECPC works to improve coordination and information sharing among federal emergency communications programs. It does this by serving as the focal point for emergency communications issues across the federal agencies, supporting the coordination of federal programs, such as grant programs, and serving as a clearing house for emergency communications information, among other responsibilities. There are 14 member agencies of the ECPC that have staff on an Executive Committee responsible for setting the ECPC’s priorities. In addition, the ECPC has a Steering Committee and focus groups that develop plans to address the priorities. Currently, there are three focus groups examining issues related to grants, research and development, and 911 issues. The focus groups report on their issues at Executive Committee and Steering Committee meetings and in the Annual Strategic Assessment. DHS serves as the administrative leader of the ECPC, organizes the ECPC quarterly and other meetings, and drafts the Annual Strategic Assessment and other documents. In a 2012 report, we examined interagency collaborative mechanisms, such as interagency groups, and identified certain key features and issues to consider when implementing these mechanisms. We reported that following leading collaboration practices can enhance and sustain collaboration among federal agencies. For this report, we compared the ECPC’s collaboration efforts with six of these key features and issues to consider, as shown in table 3. We found the ECPC’s efforts were consistent with the key features related to leadership, participants, resources, and written guidance and agreements. However, the ECPC’s efforts were not completely consistent with the key features related to (1) outcomes and accountability, and (2) clarity of roles and responsibilities, as explained below. The ECPC has not documented its strategic goals and outcomes. We previously reported that establishing shared outcomes and goals that resonate with, and are agreed upon by all participants, is essential to achieving outcomes in interagency groups, but can also be challenging. 
Participants each bring different views, organizational cultures, missions, and ways of operating. Participants may even disagree on the nature of the problem or issue being addressed. Furthermore, agency officials involved in several of the interagency groups we previously reviewed cautioned that if agencies do not have a vested interest in the outcomes, and if outcomes are not aligned with agency objectives, participant agencies would not invest their limited time and resources. However, by establishing outcomes and strategic goals based on the group’s shared interests, a collaborative group can shape its vision and define its own purpose, and when articulated and understood by the group, this shared purpose provides a reason to participate. Although DHS identified four long-term goals for the ECPC in response to our questions, these goals do not appear in the ECPC charter, program plan, or Annual Strategic Assessment. In May 2016, DHS officials told us the ECPC’s Executive Committee agreed to develop a strategic plan to highlight the ECPC’s goals and provide additional guidance for the focus groups. However, the DHS officials could not specify a time frame for completion. Without clearly defined strategic goals, the member agencies might not understand the ECPC’s goals or have a chance to ensure that the goals align with their own agencies’ purposes and goals. Furthermore, it remains unclear whether all member agencies have agreed on the ECPC’s goals and outcomes. In fact, ECPC member agencies we spoke with were able to provide a general idea about the ECPC’s purpose but could not articulate its specific goals. Also with respect to outcomes and accountability, the ECPC does not track or monitor its recommendations. The ECPC uses its Annual Strategic Assessment to: (1) provide information on federal coordination efforts, (2) define opportunities for improving federal emergency communications, and (3) report on progress implementing some of the focus groups’ recommendations. For example, the ECPC grants focus group made nine recommendations for federal grant program managers, including that the managers should use the ECPC Financial Assistance Reference Guide when planning and developing grant documents, and should invest in standards-based equipment. According to DHS, the grants focus group conducts annual surveys of member agencies to assess whether the agencies had implemented any of these recommendations. However, recommendations made by the other ECPC focus groups are not tracked, and therefore it is unclear the extent to which the recommendations have been implemented by ECPC’s member agencies. For example, the research and development focus group identified five recommendations in 2015 that were aimed at improving collaboration and information sharing around research and development for emergency communications. Specifically, one recommendation was for agencies to share technology profiles to prevent duplicative research. However, it is voluntary for member agencies to implement the focus group’s recommendations, and it is unknown whether agencies are sharing their technology profiles or if duplicative research is being conducted. According to DHS officials, the ECPC does not have a mechanism to determine whether the focus groups’ recommendations are implemented because it is up to the member agencies to decide if they will implement recommendations and if so, to track them on an individual basis. 
We have previously reported about the importance of federal agencies engaged in collaborative efforts to publicly report performance information as a tool for accountability. By having a mechanism to track the focus groups’ recommendations, the ECPC would have the means to monitor progress in achieving them. The ECPC has not clearly defined the roles and responsibilities of its member agencies. We previously reported that clarifying the roles of all member agencies will help establish an understanding of who will do what in support of the collaborative group. In addition, member agencies’ commitment to their defined roles helps the group overcome barriers to working in the collaborative group and can facilitate decision making within the group. The roles can be described in laws, policies, memorandums of understanding, or other documentation. As described in the ECPC charter, DHS is the administrator of the ECPC; however, it is unclear whether all member agencies have defined and agreed upon their respective roles and responsibilities. For example, the Department of Labor is a member of the ECPC, but according to DHS, it might not be clear to all members why the Department of Labor is a participating member. Similarly, officials from the General Services Administration told us they do not know the roles of the other ECPC members and could only speak to us about their own agency’s role. DHS officials told us it would be beneficial to have member agencies’ roles and responsibilities clearly defined but expressed concern that some members, who participate voluntarily, might not want defined responsibilities if such responsibilities would require additional staff time and resources. Nevertheless, lacking defined roles and responsibilities may result in member agencies’ not knowing their roles and responsibilities or those of other members, which may create additional barriers to effectively working together. States, the District of Columbia, and territories (hereafter, states) responding to our survey reported that to better prepare for emergency communications during disasters, they have: (1) developed emergency communications plans, (2) established the Statewide Interoperability Coordinator (SWIC) positions, and (3) implemented governance structures to oversee emergency communications planning. States have made progress since PKEMRA in establishing emergency communications plans. Based on survey responses, prior to the enactment of PKEMRA in 2006, only a few states had emergency communications plans in place. In 2007, OEC began requiring states to have a Statewide Communications Interoperability Plan (SCIP) to be eligible for DHS’s Interoperable Emergency Communications Grant Program. These state emergency communications plans are intended to be comprehensive strategic plans that outline the current and future emergency communications environment in a state. The NECP encourages states to align their plans with the emergency communications goals in the NECP to establish a link between national communications priorities and state emergency communications planning. Of the states responding to our survey, 51 reported having a SCIP, and 36 state plans were implemented after PKEMRA’s enactment. In addition to the SCIP, 16 states reported having other planning documents that support operational plans for emergency communications in addition to the high-level strategic plan the SCIP represents. 
For example, some states reported using tactical documents such as the Tactical Interoperability Communications Plan as their primary emergency communications planning document. The 2014 NECP encouraged states to update their plans and procedures to enhance emergency communications during disasters, and 46 states responding to our survey reported that they had updated their plans. States reported updating their plans for various reasons, including routine review processes, technological advancements, and changes in state governance, among others. According to DHS, as of the end of fiscal year 2015, OEC had worked with 53 states and territories to update their SCIPs to align with the 2014 NECP. In response to our survey, 50 states reported being satisfied with the level of support for emergency communications planning they received from OEC. Further, as shown in table 4, most of the states responding to our survey reported that they now have plans that contain the key elements of the SAFECOM Interoperability Continuum. The NECP considers the SAFECOM Interoperability Continuum the essential foundation for achieving the NECP goals. According to the NECP, first responders' proficiency with communications equipment and their ability to execute policies, plans, and procedures can improve with training and exercises. In response to our survey, 40 states reported conducting training and exercises based on their emergency communications plans. Furthermore, the NECP notes that training and exercises help emergency responders be properly prepared to respond to disasters, and 43 states reported that they are likely to use their emergency communications plans when responding to future disasters.

Since PKEMRA, states have made considerable progress in establishing a key coordinator position. The SWIC provides a single point of contact for statewide emergency communications activities. The NECP identifies the SWIC as a key stakeholder in emergency communications. In 2008, DHS noted that the lack of SWICs in each state was a primary obstacle to improving emergency communications and recommended that every state have a SWIC within 12 months. All but two states responding to our survey reported that they now have a SWIC. DHS officials stressed the importance of the SWIC position and told us that SWICs can contribute to emergency communications initiatives by supporting the development of governance structures, standard operating procedures, and high-level policy. In addition, SWICs can coordinate grants and other types of funding as well as training and exercises, and support implementation of the SCIPs. Although DHS has stressed the importance of the SWIC position, according to our survey, most SWICs now have responsibilities outside those of the SWIC role. In December 2009, according to DHS, 44 states had a full-time SWIC, but most survey respondents reported that their SWICs now have other, non-SWIC responsibilities. In particular, 37 states responding to our survey have SWICs with additional non-SWIC-related responsibilities. For example, 21 SWICs are also the FirstNet Single Point of Contact. States funded the SWIC position in part through the Interoperable Emergency Communications Grant Program. According to DHS, funds were not appropriated for this grant program after 2010. Subsequently, funding dedicated to improving interoperability was used for other DHS grant programs that supported improving emergency preparedness, including interoperable emergency communications.
According to our survey results, 26 SWIC positions are funded by federal grants, state grants, or a combination of federal and state grants. In April 2016, the House of Representatives acknowledged the importance of the SWIC position by passing the Promoting Resilience and Efficiency in Preparing for Attacks and Responding to Emergencies Act, which includes a provision that would require states to have a SWIC position or delegate the responsibilities to other individuals.

Since PKEMRA, the NECP has identified the need for formal governance structures to manage the systems of people, organizations, and technologies that must collaborate to effectively plan for emergency communications during disasters, and most states responding to our survey reported that they have governance structures in place. According to DHS, governance structures should include key emergency communications stakeholders such as emergency communications leaders, multiple agencies, jurisdictions, disciplines, subject matter experts, and private sector entities, among others, to enhance information sharing and ensure that emergency communications needs are represented. Almost all of the states (49) responding to our survey reported having governance structures in place that include key stakeholders. For example, 48 states reported that their governance bodies include emergency responders from local agencies, while 33 states reported that non-government stakeholders, such as the Red Cross, are included. In response to our survey, 24 states reported that their governance bodies meet 3 to 7 times a year, and the governance bodies for 16 other states meet 8 to 12 times a year, while the remaining states with governance structures meet fewer than 3 times a year.

In our survey, we asked states about the challenges that affect their ability to ensure operable and continuous emergency communications during disasters, and states identified a lack of funding as the primary challenge. In particular, 48 states responding to our survey indicated that a lack of funding sometimes or always affected their state's ability to ensure operable and continuous emergency communications during disasters. In written comments, 12 states specifically identified the need for dedicated funding for emergency communications, including funds to support the role of the SWIC. For example, one state reported that when it no longer received federal funding for emergency communications, the state lost its full-time SWIC position, support personnel, and governance group. In addition, 45 states responding to our survey mentioned that a lack of staffing sometimes or always presented a challenge for their states. In the written responses, one state indicated that the lack of staffing was difficult to address because of the funding issue, while another indicated that its state was under a hiring freeze. In other written responses to our survey, states identified additional challenges. For example, six states mentioned issues with technology, such as challenges in learning to use different radio systems and understanding new and emerging technologies. We also asked the states whether they had experienced interoperability difficulties when communicating or attempting to communicate with federal partners during disasters.
In response, 23 states reported that they had experienced difficulties and noted in written comments that the issues included a lack of understanding by federal responders about the local radio systems, federal radios not configured to the interoperable channels or talk groups, and federal responders not using the statewide system. Furthermore, two states noted a lack of planning between federal and state entities prior to emergencies, which led to federal responders trying to figure out the systems during the emergency.

Some states responding to our survey reported that they have taken action to address challenges related to funding, technology, and interoperability concerns with federal partners. First, related to funding, some states reported pursuing state-level funding and grants to continue emergency communications governance and planning, including funding the SWIC position and building statewide emergency communications systems. Second, some states reported addressing technology challenges through training and upgrading old communication systems. For example, one state reported that it provides training to emergency responders on radio operations and how to effectively use talk groups. Another state reported that it is upgrading its 26-year-old land mobile radio system so that emergency responders can more effectively communicate within the state and during emergencies. Lastly, some states reported that they are trying to address interoperability issues through training and the purchase of interoperable equipment. For example, training can improve coordination with federal and other users, which can result in improved interoperability during emergencies. In addition, by purchasing interoperable equipment, emergency personnel could have fewer issues connecting with emergency responders at all levels of government. One state indicated that it provided information on interoperable equipment to local entities to promote the purchase of such equipment. DHS officials said they continue to provide training programs to the states to help improve interoperability.

PKEMRA established the ECPC to improve coordination and information sharing among federal emergency communications programs. We found that while the ECPC's efforts as a collaborative entity were consistent with most of the key features for effective collaboration, they were not completely consistent with the key features related to outcomes and accountability and clarity of roles and responsibilities. Regarding outcomes and accountability, the ECPC has not documented its strategic goals or established a mechanism to track the outcomes of the focus groups' recommendations. DHS officials told us the ECPC has agreed to develop a strategic plan that would contain goals for the ECPC, but there is no firm timetable for such a plan to be completed. Without clearly defined strategic goals, the ECPC's member agencies might not understand the ECPC's goals or have a chance to ensure that the goals align with their own agencies' purposes and goals. Furthermore, the ECPC's focus groups have spent time and resources to make recommendations for improving emergency communications, but we found the focus groups' recommendations, such as those related to federal grant programs and research and development efforts, are implemented at the discretion of the member agencies.
Without a mechanism to track the recommendations, it is unclear the extent to which the recommendations are being implemented by the member agencies, and the ECPC is missing an opportunity to monitor its efforts. Regarding clarity of roles and responsibilities, the ECPC has not defined the member agencies’ roles and responsibilities, and some member agencies do not know the roles and responsibilities of other members, a situation that may create barriers to working together effectively. Clearly defining the members’ respective roles and responsibilities would help to provide an understanding of who will do what to support the ECPC’s efforts and facilitate decision making. To improve the effectiveness, transparency, and accountability of the ECPC’s efforts, we recommend that the Secretary of Homeland Security, as the administrative leader of the ECPC, take the following actions: clearly document the ECPC’s strategic goals; establish a mechanism to track progress by the ECPC’s member agencies in implementing the ECPC’s recommendations; and clearly define the roles and responsibilities of the ECPC’s member agencies. We provided a draft of this report to DHS, Commerce, and FCC for their review and comment. In response, DHS provided written comments, which are reprinted in appendix III. In written comments, DHS concurred with our recommendations and provided an attachment describing the actions it would take to implement the recommendations. DHS noted that enhancing the communications capabilities for emergency responders is one of its top priorities and that DHS will use the recommendations provided in our report to enhance a DHS initiative aimed at remediating many of the foremost emergency communications challenges facing our nation. Separately, DHS, Commerce, and FCC provided technical comments that we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretaries of Homeland Security and Commerce, the Chairman of FCC, and appropriate congressional committees. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or members of your staff have any questions about this report, please contact me at (202) 512-2834 or goldsteinm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix IV. This report focuses on three Post-Katrina Emergency Management Reform Act of 2006 (PKEMRA) emergency communications provisions related to planning and federal coordination: the Office of Emergency Communications (OEC), the National Emergency Communications Plan (NECP), and the Emergency Communications Preparedness Center (ECPC). Specifically, we examined (1) federal efforts to implement these PKEMRA emergency communications provisions and (2) how states’ emergency communications planning has changed since PKEMRA and what challenges remain for the states. 
To determine federal efforts to implement the three PKEMRA emergency communications provisions, we reviewed our 2008 report and other relevant reports and documentation from the Department of Homeland Security (DHS), such as DHS’s biennial reports to Congress on emergency communications, and reports from other agencies, such as the Federal Communications Commission’s (FCC) 911 and Enhanced 911 services report. We also reviewed the NECP from 2008 and the subsequent reports on the progress meeting its goals, as well as the 2014 NECP. We interviewed officials from DHS, FCC, and the Department of Commerce (Commerce) to determine their roles and the progress implementing the provisions. We compiled information from the reports and interviews to assess how the provisions were implemented and if they were fully implemented. To understand the ECPC’s collaborative practices, we reviewed the ECPC charter, program plan, and Annual Strategic Assessments prepared for Congress, and interviewed ECPC member agencies. Specifically, we interviewed 5 of 14 ECPC member agencies (DHS, FCC, Commerce, the Department of Transportation, and the General Services Administration) to determine their roles on the ECPC, their understanding of the ECPC goals, and the member agencies’ responsibilities. We selected agencies to interview with a range of emergency communications experience, and the views we obtained do not necessarily represent the views of all ECPC member agencies. We assessed the ECPC’s collaborative efforts against six of seven key considerations for implementing collaborative mechanisms that we identified in a September 2012 report. To understand how state emergency communications planning has changed since PKEMRA and the challenges states still face, we surveyed Statewide Interoperability Coordinators (SWIC) in 50 states, the District of Columbia, and 5 territories. The list of SWICs was obtained from DHS and confirmed via email. We conducted a web-based survey that addressed issues pertaining to state planning, governance, and challenges, specifically asking about the Statewide Communications Interoperability Plan (SCIP) and other emergency communications plans, and the role of the SWIC. To ensure the survey questions were clear and logical, we pretested the survey with three states: North Dakota, Texas, and Wyoming. These states were selected based on the types of disasters facing the states, the number of recent disasters, and geographic diversity. We administered our survey from February 2016 to April 2016 and received 52 responses for a 93 percent response rate. American Samoa, Massachusetts, the Northern Mariana Islands, and Puerto Rico did not respond to our survey. In addition, we interviewed selected SWICs from Kentucky and Wyoming to understand how the SCIPs and other emergency communications plans are used in preparing for emergencies. We selected these SWICs to interview based on geographic region, an occurrence of a recent disaster in the state, and because the Wyoming SWIC was the chair of the National Council of Statewide Interoperability Coordinators. We conducted semi-structured interviews with each SWIC to understand if they had a SCIP, how they used the SCIP, the governance structures the state uses to manage emergency communications, and the challenges their states encounter with emergency communications during disasters. 
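As a quick arithmetic check on the response rate reported above, the following minimal sketch (in Python) recomputes it from the survey population and response count described in this appendix; the rounding convention is an assumption.

```python
# Survey response rate: 50 states + the District of Columbia + 5 territories
# were surveyed; 52 completed responses were received.

surveyed = 50 + 1 + 5
responses = 52
nonrespondents = ["American Samoa", "Massachusetts",
                  "Northern Mariana Islands", "Puerto Rico"]

assert surveyed - responses == len(nonrespondents)
print(f"Response rate: {responses / surveyed:.0%}")   # prints "Response rate: 93%"
```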
The questions we asked in our survey of Statewide Interoperability Coordinators and the aggregate results of responses to the closed-ended questions are shown below. We do not provide results for the open-ended questions. We received 52 completed survey responses. However, not all respondents had the opportunity to answer each question because of skip patterns, and some respondents decided not to respond to particular questions. For a more detailed discussion of our survey methodology, see appendix I.

1. Does your state have a Statewide Communications Interoperability Plan (SCIP)?
1a. If no, why doesn't your state have a SCIP? (Written responses not included)
1b. Does your state have a primary planning document for ensuring operable and interoperable emergency communications during disasters in your state?
1c. Is your SCIP your primary plan to ensure operable and interoperable emergency communications during disasters in your state?
1d. If no, what is your primary planning document for ensuring operable and interoperable emergency communications during disasters in your state? (Written responses not included)
2. What year was the emergency communications plan implemented?
3. Has your state used the emergency communications plan in response to disasters?
3a. If the emergency communications plan has never been used in response to disasters, why not? (Written responses not included)
4. Has your state used the emergency communications plan during training exercises?
4a. If the emergency communications plan has never been used during training exercises, why not? (Written responses not included)
5. Are you likely to use the emergency communications plan in response to disasters in the future?
6. Has the emergency communications plan been updated since it was initially implemented?
6a. When was the emergency communications plan last updated?
6b. Why was the emergency communications plan updated? (Written responses not included)
6c. Do you think your plan needs to be updated?
6d. If yes, why hasn't the emergency communications plan been updated? (Written responses not included)

Elements of the Emergency Communications Plan
7. Does your emergency communications plan address the following elements?
Planning (standard operating procedures, protocols)
Technology (data and voice elements, common applications, base sharing, custom applications, swapping radios, gateways)
Usage (how often interoperability communications are used in planned events, localized emergency incidents, regional incidents, and daily use)
8. Does your emergency communications plan address the following types of events?
Significant events (i.e., terrorist attacks, major disaster, and other emergencies that pose the greatest risk to the state)
Routine events (i.e., localized emergency incidents, regional emergency incidents, special events, large public gatherings, state and national exercise)
If "Other standardized elements" is checked, what other elements are contained in your operating protocols and procedures? (Written responses not included)

Planning and Standard Operating Procedures
9. Does your emergency communications plan contain the following standardized elements in your operating protocols and procedures?
If "Other standardized elements" is checked, what other elements are contained in your operating protocols and procedures? (Written responses not included)
10. Does your state have a Statewide Interoperability Coordinator (SWIC)?
10a. What best describes the SWIC in your state?
10b.
If "Other" is checked, what describes the SWIC in your state? (Written responses not included)
10c. How, if at all, have current SWIC responsibilities changed in the past 5 years?
10d. If "Other change" is checked, what other SWIC responsibilities have changed in the past 5 years? (Written responses not included)
10e. Does the SWIC also serve in the role of the FirstNet state Point of Contact (SPOC)?
10f. How is the SWIC position in your state funded?
10g. To what extent have the following factors contributed to your state NOT having a SWIC?
If "Other factor" is checked, what other factors contributed to your state not having a SWIC? (Written responses not included)
11. Does your state have a governance body supporting emergency communications planning?
11a. Generally, how often does the governance body in your state meet to discuss planning efforts to ensure emergency communications during disasters?
11b. Are public safety representatives from the following categories represented in your governance body?
International (states/territories near national borders)
Non-Government (i.e., American Red Cross, public safety association groups, etc.)
11c. If no, what entities and individuals are responsible for overseeing emergency communications in the state? (Written responses not included)
12. Generally, how involved are the public safety representatives from the following categories in the planning and coordinating efforts to ensure continuous operable emergency communications in your state?
International (states/territories near national borders)
Non-Government (i.e., American Red Cross, public safety association groups, etc.)
13. In developing and/or maintaining your state's emergency communications plan, have you received technical assistance services offered by the Office of Emergency Communications (OEC) within the Department of Homeland Security?
No (requested not received)
No (not requested)
14. Overall, how satisfied or dissatisfied are you with the level of support for emergency communications planning from the OEC?
15. Have you experienced interoperability difficulties when communicating or attempting to communicate with federal partners during disasters?
15a. If yes, what interoperability difficulties did you experience when communicating or attempting to communicate with federal partners? (Written responses not included)
16. Since 2008, has your state received federal grant funding in support of emergency communications?
16a. What areas did the grant funding support?
Planning (standard operating procedures, protocols)
Technology (data and voice elements, common applications, base sharing, custom applications, swapping radios, gateways)
Usage (how often interoperability communications are used in planned events, localized emergency incidents, regional incidents, and daily use)
17. How, if at all, has the federal grant funding your state received in support of emergency communications changed in the past 5 years?
No change (about the same)
18. What additional federal efforts, if any, are needed to help ensure operable, interoperable, and continuous emergency communications in your state during disasters? (Written responses not included)
19. Generally, how often, if at all, do the following challenges affect your state's ability to ensure operable and continuous emergency communications during disasters in your state?
If "Other challenge" is checked, what other challenges affect your state's ability to ensure operable and continuous emergency communications during disasters?
(Written responses not included)
20. What actions, if any, has your state taken to address the items you identified as challenges in question 19? (Written responses not included)
21. What actions, if any, can the federal government take to address the items you identified as challenges in question 19? (Written responses not included)
22. If you would like to expand upon any of your responses to the questions above, or have any other comments about your state's planning efforts to ensure operable and interoperable emergency communications, please enter them below. (Written responses not included)

In addition to the individual named above, Sally Moino (Assistant Director), Enyinnaya David Aja, Cynae Derose, Eric Hudson, Cheryl Peterson, Kelly Rubin, Erik Shive, Andrew Stavisky, and Nancy Zearfoss made key contributions to this report.

During emergency situations, reliable communications are critical to ensure a rapid and sufficient response. PKEMRA was enacted in 2006 to improve the federal government's preparation for and response to disasters, including emergency communications. Since that time, natural and man-made disasters have continued to test the nation's emergency communications capabilities. Given that states and localities are the first line of response following a disaster, states' emergency communications planning is very important. GAO was asked to review the implementation of PKEMRA. This report examines (1) federal efforts to implement PKEMRA emergency communications provisions related to planning and federal coordination, and (2) how states' emergency communications planning has changed since PKEMRA. GAO reviewed relevant reports and documentation from DHS and other agencies; surveyed SWICs from 50 states, the District of Columbia, and 5 territories, receiving 52 responses; assessed the ECPC's collaborative efforts; and interviewed federal and state officials selected for their emergency communications experience. GAO plans to review the implementation of other PKEMRA emergency communications provisions in future work. Implementation of the Post-Katrina Emergency Management Reform Act of 2006 (PKEMRA) provisions related to emergency communications planning and federal coordination has enhanced federal support for state and local efforts; however, federal coordination could be improved. PKEMRA created within the Department of Homeland Security (DHS) the Office of Emergency Communications, which has taken a number of steps aimed at ensuring that state and local agencies have the plans, resources, and training they need to support reliable emergency communications. PKEMRA also directed DHS to develop the National Emergency Communications Plan (NECP). The NECP includes goals for improving emergency communications and encourages states to align their plans with these emergency communications goals. PKEMRA further established the Emergency Communications Preparedness Center (ECPC), comprising 14 member agencies, to improve coordination and information sharing among federal emergency communications programs. GAO previously identified key features and issues to consider when implementing collaborative mechanisms, including interagency groups like the ECPC. GAO found that the ECPC's collaborative efforts were consistent with most of these features, such as those related to leadership and resources, but were not fully consistent with others. For example, one of the key features calls for interagency groups to clearly define goals and track progress, yet the ECPC has not done so.
As a result, the ECPC's member agencies might not understand the ECPC's goals or have a chance to ensure that the goals align with their own agencies' purposes and goals. Furthermore, the ECPC puts forth recommendations that could improve emergency communications. However, the recommendations are implemented at the discretion of the ECPC's member agencies and are not tracked. Without a mechanism to track the ECPC's recommendations, it is unclear to what extent the recommendations are being implemented, and the ECPC is missing an opportunity to monitor its progress. Almost all of the Statewide Interoperability Coordinators (SWIC) responding to GAO's survey reported that to better plan for emergency communications during disasters, their states have taken the following steps since PKEMRA: (1) developed comprehensive strategic plans for emergency communications that align with the NECP; (2) established SWIC positions to support state emergency communications initiatives, such as developing high-level policy and coordinating training and exercises; and (3) implemented governance structures to manage the systems of people, organizations, and technologies that need to collaborate to effectively plan for emergencies. GAO did not independently verify state responses. In responding to GAO's survey, most SWICs reported not having a comprehensive emergency communications plan in place prior to PKEMRA's 2006 enactment. In particular, prior to the enactment of PKEMRA, only a few states had comprehensive emergency communications plans in place, but now all but one have such a plan. Most of the SWICs also reported that their statewide plans cover key elements, such as governance, standard operating procedures, and training and exercises, which DHS considers the essential foundation for achieving the NECP goals. GAO is making recommendations to DHS aimed at improving the ECPC's collaborative efforts, including defining its goals and tracking its recommendations. DHS concurred with the recommendations.
Coins serve as a medium of exchange in everyday commerce. In 2012, Concurrent Technologies Corporation, a contractor to the U.S. Mint, estimated that there were from 355 billion to 370 billion coins in circulation—about two-thirds of them pennies. Many of these coins are not in active circulation because people hold coins in storage containers in their homes, automobiles, or office desk drawers, among other places. However, coins in active use are accepted across the nation as payment in hand-to-hand transactions and for products and services in millions of machines ranging from vending and laundry to amusement and parking machines. These automated, unattended machines validate U.S. coins and their denominations by measuring one or more of the diameter, thickness, weight, and electromagnetic signature (EMS) of each coin. In addition to the four primary coins in circulation—the penny, nickel, dime, and quarter—the 50-cent piece and 1-dollar coin are also considered circulating coins.

The Constitution gives Congress the power to coin money, and under this authority, Congress has specified that the current metal composition of coins be as follows: the penny (1-cent) is made of copper-plated zinc and consists of 97.5 percent zinc and 2.5 percent copper; the nickel (5-cent) is made with an alloy of 75 percent copper and 25 percent nickel (a combination known as "cupronickel"); and the dime (10-cent) and the quarter (25-cent) consist of three layers of metal. The inner layer is copper, and the two identical outer layers are a silver-colored alloy of 75 percent copper and 25 percent nickel. (A multi-layer coin is called a "clad coin.")

The Federal Reserve determines the number of coins required to meet the public's needs. Specifically, depository institutions (e.g., commercial banks and credit unions) order new coins from the Federal Reserve through an online coin-ordering system called FedLine. Then, the Federal Reserve's Cash Product Office submits a new coin order to the U.S. Mint. In turn, the U.S. Mint produces and distributes new coins each month to the 12 Federal Reserve Banks that fulfill the orders made by the depository institutions. In general, coin production varies from year to year depending on several factors, such as public demand, the need to replace mutilated or worn coins, and the price of copper, as well as orders from the Federal Reserve to maintain its targeted inventory levels. The U.S. Mint produced about 5 billion circulating coins in 2010 and about 13 billion circulating coins in 2014.

When the cost to produce and distribute a coin is less than its face value, the federal government experiences a financial gain, creating a value known as "seigniorage." In fiscal year 2014, the U.S. Mint realized about $315 million in seigniorage from circulating coins. The quarter and dime resulted in seigniorage of $406 million, whereas the nickel and penny resulted in a loss of seigniorage of $91 million. Seigniorage is returned to the Treasury General Fund and reduces the government's borrowing and interest costs, resulting in a financial benefit to the government, whereas a loss of seigniorage is absorbed as part of the U.S. Mint's operating costs. Table 1 shows the amount of seigniorage from fiscal year 2010 through 2014. The Act authorized the Secretary of the Treasury to conduct research and development on new metals for all circulating coins with the goal of reducing production costs.
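To make the seigniorage arithmetic described above concrete, the following minimal sketch (in Python) computes seigniorage as face value minus the unit cost to produce and distribute a coin, multiplied by the number of coins shipped. The unit costs and shipment volumes shown are round, illustrative assumptions rather than U.S. Mint figures; they are chosen only to reproduce the pattern the report describes, in which the dime and quarter generate seigniorage while the penny and nickel generate a loss.

```python
# Seigniorage sketch: (face value - unit cost) * coins shipped, per denomination.
# Unit costs and shipment volumes are illustrative assumptions, not Mint data.

coins = {
    #            face value ($), assumed unit cost ($), assumed shipments
    "penny":   (0.01, 0.017, 8_000_000_000),
    "nickel":  (0.05, 0.080, 1_200_000_000),
    "dime":    (0.10, 0.040, 2_300_000_000),
    "quarter": (0.25, 0.090, 1_900_000_000),
}

total = 0.0
for name, (face, cost, shipped) in coins.items():
    seigniorage = (face - cost) * shipped      # negative value = loss of seigniorage
    total += seigniorage
    print(f"{name:8s} {seigniorage / 1e6:+9.1f} million")
print(f"{'total':8s} {total / 1e6:+9.1f} million")
```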
The Act also required, among other things, that the Secretary:

consider "factors relevant to the ease of use of and ability to co-circulate new coinage materials, including the effect on vending machines and commercial coin processing equipment and making certain, to the greatest extent practicable, that any new coins work without interruption in existing coin acceptance equipment without modification;"

"include detailed recommendations for any appropriate changes to the metallic content of circulating coins in such a form that the recommendations could be enacted into law as appropriate;"

"to the greatest extent possible, may not include any recommendation for new specifications for producing a circulating coin that would require any significant change to coin accepting and coin-handling equipment to accommodate changes to all circulating coins simultaneously;" and

submit a biennial report to Congress "analyzing production costs for each circulating coin, cost trends for such production, and possible new metallic materials or technologies for the production of circulating coins."

The Secretary of the Treasury issued the first biennial report in December 2012 and another in December 2014. These reports summarized the U.S. Mint's research and development efforts to identify new metallic materials or technologies for the production of circulating coins and also included information on the U.S. Mint's outreach efforts to industry and industry cost estimates. In identifying potential new materials for circulating coins, the U.S. Mint needed to ensure that the material could be used as a viable, durable coin. After testing 29 different metal compositions for circulating coins, the U.S. Mint has identified four viable metal compositions—a new version of the current cupronickel (copper and nickel), nickel-plated steel, multi-ply plated steel, and stainless steel. The U.S. Mint plans to issue another report in December 2016. According to U.S. Mint officials, the 2016 report will highlight areas of further study as discussed in the 2014 Biennial Report to Congress. The areas of study include further testing and evaluation of the new cupronickel alloy, stainless steel research and development, improvements in production, and outreach to the coin industry, among other things.

Other countries have also taken steps to change the metal composition of their coins to reduce costs. For example, the Royal Canadian Mint and the Royal Mint of the United Kingdom have both changed the metal composition of their coins from cupronickel to steel-based coinage in an effort to reduce production costs. The Royal Canadian Mint manufactures its coins using a patented multi-ply plated steel technology. The Royal Mint produces its coins using its own plated steel technology known as aRMour® plating.

A variety of businesses rely on coins. Some industries sell products or services through the use of coin machines, such as vending and coin-operated laundry machines. According to the vending machine industry, it generally sells products using modern, technologically advanced coin machines that validate and accept many types of circulating coins as well as dollar bills. Its members include well-known, large corporations. In contrast, representatives from the coin-operated laundry industry stated that the industry is composed of small "mom-and-pop" business owners and operators, provides services using mechanical technology, and depends heavily on the quarter.
Representatives from the amusement industry also stated that their industry depends heavily on the quarter for playing games, billiards, and juke-boxes. Businesses in the amusement industry range from large national chains to small owners and operators. Another type of industry that deals with coins is the armored car industry. According to this industry, it sorts, counts, wraps, and transports all denominations of circulating coins from Federal Reserve Banks to commercial banks and other privately owned businesses. This industry is dominated by four large armored car carriers. Finally, manufacturers that make coin acceptance and handling equipment support the entire coin industry. The U.S. Mint’s analysis estimates that the government could potentially save from about $8 million to about $39 million per year through different changes in coin composition to the nickel, dime, and quarter. The U.S. Mint developed four alternatives for coin composition and estimated the savings for each alternative (see fig. 1). The U.S. Mint’s analysis shows that changing to a new version of a copper and nickel combination (cupronickel) could potentially save the government about $8 million per year, based on 2014 production costs, and should not affect industry. This change is known as the “seamless” alternative because it would not significantly change the characteristics of the nickel, dime, or quarter. The savings would result from a change in metal costs alone, specifically (1) increasing the amount of copper in the nickel and the outer clad layer of the dime and the quarter from 75 percent to 77 percent and decreasing the amount of nickel, which is more expensive, from 25 percent to 20 percent, and (2) adding manganese to the coins. The seamless alternative is designed to have the same diameter and EMS characteristics and nearly the same weight as the current cupronickel composition. According to the U.S. Mint, this alternative would not require any changes to coin acceptance machines and would not affect industry. As of September 30, 2015, the U.S. Mint was conducting further testing of another version of this alloy that substitutes zinc or zinc and manganese for some of the nickel, to help ensure that the alloy has the same characteristics as the current nickel and would not require changes to coin acceptance machines. The U.S. Mint’s analysis also shows that the largest savings the government could potentially achieve is about $39 million per year by changing the coin composition of the nickel and dime to multi-ply plated steel. This type of change is referred to as a “co-circulating” alternative because different types of coin compositions for the same coin denomination would circulate together in the economy for 30 years or more. Under this co-circulating alternative, savings would result from both metal changes (as steel is less expensive than copper or nickel) and production changes. According to U.S. Mint officials, their metal suppliers would supply coin “blanks” for multi-ply plated steel coins, thereby eliminating the need for the U.S. Mint to make its own blanks. Currently, suppliers provide sheets of metal that the U.S. Mint uses to produce coin blanks. A change to steel-based coins would require the coin industry to make modifications to current coin acceptance machines to recognize and accept both new and existing coins, because the new coins would have a different weight and EMS than existing coins and would also be magnetic. The U.S. 
Mint had originally included the quarter in its savings estimates for a co-circulating coin change. When including the quarter, the U.S. Mint had estimated that the government could save $83 million per year by changing the quarter, nickel, and dime to multi-ply plated steel. In October 2015, U.S. Mint officials told us that they determined that neither the multi-ply plated steel nor the nickel-plated steel composition was viable for the quarter due to security concerns. Specifically, as the use of steel coins has increased around the world, the U.S. Mint determined that there is too great a risk that the size, weight, and EMS of any steel-based U.S. quarter may be close to that of a less valuable foreign coin. According to Mint officials, this similarity may result in fraud because machines would not be able to differentiate between the U.S. quarter and lower-value foreign coins. According to U.S. Mint officials, it is viable to change the nickel and dime to multi-ply plated steel because these coins are lower in value and therefore do not provide a similar incentive to counterfeiters.

The Act provides the parameters that the U.S. Mint used to direct its research and development and to evaluate the impact on industry as it developed estimates. For example, the Act limits the U.S. Mint to considering metallic materials during its research and development for co-circulating alternative coinage. Specifically, the Act specifies that metallic materials be tested for coinage, and this testing prevents the U.S. Mint from considering less expensive, nonmetallic materials that could be suitably fabricated as coins and limits the co-circulating alternatives. Additionally, the U.S. Mint did not include metallic changes to the penny because the U.S. Mint could not identify a less expensive metal for the penny. In its 2012 biennial report, the U.S. Mint reported that while the penny costs more to produce than its face value, there is no viable metal alternative that is cheaper than zinc. Representatives from a raw material supplier and a consulting firm told us that they agreed with the U.S. Mint's assessment. The Act also specified that any proposed changes may not allow for greater risk of fraud from counterfeiting or substituting cheaper foreign coins. Finally, according to U.S. Mint officials, the Act's language on minimizing the effect on vending machines and commercial coin processing equipment led the U.S. Mint to pursue research on a seamless alternative as well as a co-circulating alternative.

In addition, the U.S. Mint's analysis did not include information on how the organization would transition from the current coin composition to a new coin composition. For example, the U.S. Mint did not determine the disposal costs of equipment that would no longer be used if coin composition changed or consider potential changes to its workforce. According to officials, the U.S. Mint has improved its internal processes through a separate effort. Specifically, according to U.S. Mint officials, the U.S. Mint reduced plant overhead by 7 percent and general and administrative costs by 18 percent from 2009 to 2014; reduced employee shifts—from 3 to 2 shifts per day—at the two Mints that produce circulating coins; and streamlined its die-manufacturing process. Although these estimates can provide an understanding of the general magnitude of potential government savings, our analysis found that the U.S.
Mint’s cost-estimating process and resulting analyses are limited because they did not fully align with best practices for estimating costs, as outlined in the Cost Guide. Without following best practices, the U.S. Mint’s estimates may not be reliable. The Cost Guide includes a 12-step process to develop a reliable cost estimate. These best practices are the basis for developing high-quality, reliable cost estimates and help ensure that the cost estimates are comprehensive, well documented, accurate, and credible. For example, following these practices should result in cost estimates that can, among other things, be replicated and updated. According to the Cost Guide, these best practices can guide government managers as they assess the credibility of a cost estimate for decision-making purposes for a range of programs. Of the 12 steps in the Cost Guide, our analysis found that the U.S. Mint’s cost-estimating processes fully met 1, partially met 7, minimally met 3, and did not meet 1 of these 12 steps. More detailed information describing how the U.S. Mint’s cost estimating process aligned with the Cost Guide can be found in appendix II. In summary: Fully met: The U.S. Mint fully met one step. This step was to brief its management as part of its review process and obtain and document management’s approval of the estimate. Partially met: The U.S. Mint partially met 7 steps of the cost- estimating process. These steps generally occurred during the cost assessment portion of the cost-estimating process. To its credit, the U.S. Mint partially met the steps of (1) defining the estimate’s purpose, (2) developing the estimating plan using technical staff, (3) defining the program characteristics, (4) determining the estimating structure from the cost of raw materials to overhead, (5) obtaining the data from the U.S. Mint’s cost-accounting system, (6) developing the point estimate and comparing it to an independent cost estimate, and (7) updating the estimate. For example, the U.S. Mint partially met defining the purpose of the savings estimate because the U.S. Mint defined the scope of the estimate but did not fully consider all costs. Specifically, the U.S. Mint did not include about $5.7 million in one- time expenses to conduct research, development, and testing of viable metals and did not consider expenses to dispose of equipment that may no longer be needed if a decision is made to produce steel- based coins. Minimally met: The U.S. Mint minimally met 3 steps that generally occurred during the analysis of the cost-estimating process. These steps include (1) conducting a sensitivity analysis, (2) documenting the estimate, and (3) identifying ground rules and assumptions. While U.S. Mint officials discussed a sensitivity analysis, it was not conducted. Without a fully documented sensitivity analysis, the U.S. Mint cannot determine how a change in inputs—such as the price of metal—would affect the potential for savings because the cost of metal is an important factor in the U.S. Mint’s overall costs to produce coins. Also, when documenting estimates, the U.S. Mint used 2014 metal prices to determine its estimates. However, metal prices change over time. For example, from 2011 to 2015, the price of copper ranged from $2.24 per pound to $4.58 per pound. Such a change in metal prices impacts the U.S. Mint’s costs and therefore can significantly impact its savings estimates. Finally, certain assumptions were not made or documented since the U.S. 
Mint’s savings estimate did not project savings into the future, but rather all analyses were based on one year. Not met: The U.S. Mint did not meet the step that required it to conduct a cost-risk and uncertainty analysis. These analyses examine a broad range of factors, such as unforeseen technical problems or changes in staff availability and expertise, that could possibly occur and would affect the estimate. The U.S. Mint’s cost-estimating process did not fully align with the best practices described in the Cost Guide and therefore the estimates may not be reliable as a precise indication of government savings. However, the efforts taken by the U.S. Mint nonetheless provide an understanding of the general magnitude of government savings. We discuss later in this report how the magnitude of estimated government savings compares to the magnitude of estimated industry costs to illustrate the scale and relationship between estimates of government savings and industry costs. Our review of other estimates of potential government savings found these estimates to be narrow in scope. For example, a 2012 Navigant study that was commissioned by a supplier of coin material estimated that the U.S. government could achieve savings of up to $207.5 million per year by changing the current coin compositions of the nickel, dime, and quarter to multi-ply plated steel, which is the same composition used in Canadian coins. However, by design, the estimate is not comprehensive because it does not account for other costs associated with making this change such as production, processing, transportation, and new equipment costs as well as licensing fees to use multi-ply plated steel technologies. In addition, this study was limited in scope because it was not designed to be a comprehensive cost-benefit analysis of government savings and industry costs. Generally, according to guidance from the Office of Management and Budget (OMB), changes in federal programs should be informed by an assessment of whether the benefits exceed the costs. Navigant determined how the use of multi-ply plated steel that is used by the Royal Canadian Mint could be applied to U.S. government coins to potentially achieve raw material savings. Navigant did not make comparisons to other seamless or other co-circulating metal-composition options and did not consider how a statutory requirement, such as minimizing industry conversion costs, might be applied. Another 2012 Navigant study explored potential government savings by eliminating the penny but did not consider how a change in metal composition other than multi-ply plated steel for the other coins could be made to reduce costs and achieve savings. This study concluded that eliminating the penny would not result in government savings, as more nickels may be required and the government also loses money on the production of nickels. The six selected industry associations that provided cost estimates to the U.S. Mint stated that there would be significant cost impacts ranging from $2.4 billion to $10 billion. These costs result from modifying an estimated 21.9-million coin acceptance machines as a result of potential changes to the metal composition of coins, as shown in table 2. Industry associations developed these cost estimates and provided them to the U.S. Mint in response to an April 2014 Federal Register notice. The U.S. Mint reprinted the estimates in its December 2014 Biennial Report to Congress. 
The cost estimates in table 2 presuppose a metal change from cupronickel to steel for the nickel, dime, and quarter as well as the need to accept both current and new coins co-circulating together. These estimates reflect a level of uncertainty about the dimensions (diameter and thickness) and other technical specifications of any new coins. For example, the vending industry estimated a cost impact of at least $700 million—a cost of at least $100 per machine—to update the software in 7-million modern, electronic coin machines to accept new metallic coins with different EMSs. The vending industry representative we interviewed said the low estimate assumes little or no change to the dimensions of new coins, so it does not include costs for mechanical hardware changes, but it does assume changes in the EMS or weight due to a metal change. Two of the six industry associations we contacted, as well as a coin machine manufacturer and the contractor for the U.S. Mint, told us that software modification costs are primarily driven by labor costs to update software, not the software cost itself. That is, businesses would have to hire a certified technician to update the software on every electronic coin machine in order for that machine to accept any new coins that do not have the same properties as the current coins that would remain in circulation. These representatives stated that updates would typically take less than an hour and each service call would cost up to $100. The vending industry's high cost estimate of $3.5 billion reflects changes to coin dimensions as well as EMS specifications. According to a vending industry representative, changes in coin dimensions would require the vending industry to update the software as well as remove the hardware associated with coins within the vending machine and replace it with redesigned and expanded hardware in order to accept both current and new coins with different dimensions. The situation for the amusement industry is similar to that of the vending industry. The amusement industry estimated costs ranging from $100 million to $500 million depending on the need to either update the software on the estimated 1-million amusement machines or remove and replace their coin machines with new machines designed to accept both current and new coins with different dimensions, which would be more costly. For the owners and operators of pool tables and coin laundries that use older mechanical (not electronic) coin machines, their cost estimates reflect a need to remove and replace their machines with new machines that could accept both current coins as well as steel-based coins, yet reject steel-based slugs. We reviewed industry estimates and identified factors that indicate that these estimates may be overstated: The published cost estimates do not account for the U.S. Mint's position not to alter the dimensions of coins. According to U.S. Mint officials, new coins would retain the same dimensions as current coins. Consequently, the high cost estimate of $10 billion may be overstated because it is based on the need for mechanical hardware changes in machines to accommodate new coins with different dimensions. The low cost estimate of $2.4 billion assumes no changes to coin dimensions. The cost estimates do not account for the U.S. Mint's position not to alter the quarter to a steel-based coin. Specifically, these published estimates assumed that the characteristics of the quarter would change, but U.S.
Mint officials have determined that it is not viable to produce a steel-based quarter. According to U.S. Mint officials, they are currently only exploring changing the cupronickel composition of the quarter, which would not require any modification to a machine that accepts only quarters. The number of coin machines needing modifications may be overstated. Industry costs to modify machines are proportional to the number of coin machines needing modifications. As the number of coin machines decreases, these costs would also decrease. Two examples illustrate that the number of coin machines may be overstated. First, the vending association reported in its 2014 written response to the U.S. Mint that there were about 7-million food, beverage, and product vending machines in the United States. However, a 2015 study developed by the vending association, in partnership with a food research and consulting firm, reported that there are now 4.5-million vending machines—a decrease of about 36 percent. A lower actual number of vending machines would translate to a decrease in estimated cost from $700 million to about $450 million—assuming the cost of $100 per machine. Second, in a 2014 written response to the U.S. Mint, the amusement park industry association—whose members include family entertainment centers and arcades, among others—stated that there are about 10-million coin-operated machines in the United States and that changes in the metallic content of coins would result in a cost impact ranging from $1 billion to $5 billion. However, our review found that its cost estimate may have double-counted coin machines from the larger amusement sector, which also represents family entertainment centers and arcades. The industry did not provide enough detail to determine the scope and breadth of its coin machine estimate. The parking industry is shifting from coin-operated to coinless parking meters. According to a parking industry representative, the number of parking meters is decreasing due to a trend from single-space, coin-operated parking meters to multi-space smart meters that allow payment by credit card or phone. Because of the many benefits associated with smart meters, the representative believes that within 15 years, nearly all parking meters will no longer accept coins. However, the parking industry estimated its costs by estimating that 2-million parking meters would need to be updated to accept new coins. According to a parking industry representative, this information was based on data collected from an informal phone survey in 2007 and does not reflect industry changes since then. U.S. Mint officials said that they did not independently verify industry cost estimates to help ensure that the estimates are reliable. Rather, officials said that they obtained and reported industries' written responses to the U.S. Mint's Federal Register notice, dated April 10, 2014, which requested estimates from industry within 60 days after the notice was published. Although we interviewed industry representatives to understand their cost estimates, we did not independently verify the estimates as this was outside the scope of our work. One foreign mint, which has changed the metal composition of its coins, found that actual industry costs were less than industry estimated. Specifically, a Royal Mint memorandum stated that initial vending industry estimates to accept new coins in the United Kingdom were about £40 million (about $60 million).
However, after the United Kingdom had completed its transition to steel coins, studies showed that actual conversion costs were about £17 million (about $26 million), or about 58 percent less than estimated. According to a Royal Canadian Mint official, the Royal Canadian Mint did not compare industry cost estimates to the actual costs incurred. In written responses to the U.S. Mint's Federal Register notice, two of the six industry associations we contacted explicitly said that they do not support changes to the metallic composition of any coins. Specifically, representatives for industries that handle or accept coins of all denominations—such as the banking, armored carrier, and vending industries—called for no changes to be made to the metallic content of coins because such a change would require these industries to spend money to update their coin machines. Representatives for industries that accept certain coin denominations—such as the parking, amusement, and coin laundry industries that rely primarily on the quarter—were not opposed to metallic changes as long as changes were not made to the quarter. These representatives told us that changing to a steel-based quarter would complicate their business operations because these industries tend to have mechanical, rather than electronic, coin machines that currently use magnets to reject steel-based materials, commonly called slugs. A coin machine manufacturer that updates or sells machines to other businesses was generally supportive of potential changes to circulating coins. Five industry associations we contacted said that if new metallic coins are introduced, the changes should be seamless to avoid any cost impact to industry. Three industry associations we spoke with stated there is little benefit to phasing in the introduction of new coins. Representatives from these associations explained that even if the U.S. Mint introduced new coins at a rate of 3 percent per year (thereby taking a number of years for a substantial percentage of new coins to be in circulation), their industries would take immediate action to modify their equipment because they would not want to lose any potential revenue from customers who could not use new coins in unmodified machines. They believed that those customers would be unlikely to return to machines that rejected their coins due to a perception that the machines were faulty. Representatives from the six associations that provided cost estimates to the U.S. Mint generally stood by their estimates and said that the assumptions on which their cost estimates are based are reasonable. Nonetheless, due to the Act's mandate that the conversion costs to industry be minimized, we discussed various potential changes in business practices with some industry representatives to determine if implementation of these practices could reduce costs. In general, they did not believe these practices would reduce costs, nor did they identify other practices that could. Like other industries that provided cost estimates, the vending industry estimated its potential cost by multiplying the total number of coin machines by the cost to update each machine. Because two or more vending machines (one food and one drink machine) are often at the same location, the total cost to update the software on the machines may be less if a technician were able to modify multiple vending machines for the cost of one service call.
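The vending estimate described above reduces to simple arithmetic: total cost is roughly the number of machines multiplied by the cost of a service call, with fewer calls needed if one visit can cover co-located machines. The short sketch below, in Python, illustrates that relationship using the $100-per-call figure and the machine counts discussed in this report; the function, and the assumption that every machine needs exactly one software update, are ours for illustration and do not represent a calculation performed by the U.S. Mint or industry.

```python
def modification_cost(machines, cost_per_call=100, machines_per_call=1):
    # Total conversion cost if every machine needs one software update and a
    # technician can cover "machines_per_call" co-located machines per visit.
    service_calls = -(-machines // machines_per_call)  # ceiling division
    return service_calls * cost_per_call

print(modification_cost(7_000_000))       # roughly $700 million at 7-million machines
print(modification_cost(4_500_000))       # roughly $450 million at 4.5-million machines
print(modification_cost(4_500_000, machines_per_call=2))  # lower if one call covers two machines
```

Under these assumptions, the total scales directly with the machine count and inversely with the number of machines covered per call, which is consistent with the representative's view, discussed below, that servicing co-located machines together would reduce, but not eliminate, the estimated cost.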
When we asked its representative whether it would be reasonable for businesses to update the software on multiple machines at the same location, the representative said that costs may be reduced by efficiencies in servicing machines at the same time, but cautioned that the overall cost reduction was not large. The representative was not able to provide other business practices that would result in a lower cost estimate. Representatives from three of the six industry associations we contacted said that it was not a viable business practice to update their machines during routine maintenance cycles, rather than scheduling special one-time service calls to accommodate new coins. Amusement industry representatives said that they rarely replace coin acceptance machines because their coin machines are built and designed to work for decades without routine maintenance. Similarly, the coin laundry representatives said there would be few opportunities for updating and replacing coin machines since washers and dryers tend to be in service without failure for a minimum of 12 to 15 years. The vending industry representative said that drivers who stock vending machines are not trained, nor is it in their skillset, to update software on vending machines. Representatives from these industries said that if and when a decision is made to change the metal composition of coins, they would take immediate actions to modify their machines because doing nothing would result in lost revenue. Given a potential cost to modify coin machines, we interviewed four selected industry associations to determine whether their costs would be lower if they moved to a coinless business model that accepts various forms of electronic payments, not coins. Representatives from the coin laundry and amusement machine industry associations told us that moving to a coinless business may increase, not decrease, costs because of the substantial capital investment needed to install the necessary infrastructure (i.e., payment mechanisms, Internet, modem, Wi-Fi, and associated wiring) to move to a coinless system. For example, amusement industry representatives said that a small number of entertainment businesses have switched to coinless card readers at large venues having 25 or more game machines, but it is not financially viable for some business owners with a small number of amusement machines in multiple locations to make this investment. Additionally, two representatives from the coin laundry industry stated that their industry serves individuals who are often "unbanked." These unbanked individuals, who do not have or use bank accounts, debit cards, or other banking services, may prefer using only coin-operated laundry machines rather than coinless-operated laundry systems. In lieu of modifying all of the coin machines to accept new coinage, we interviewed representatives from two selected industry associations to determine whether a viable business practice would be to install change machines that dispense current coins. Representatives from both industry associations told us that buying and installing change machines would not be a cost-effective alternative to modifying their existing coin machines due to procurement and installation costs as well as any maintenance and servicing costs associated with these change machines.
Although the estimates of potential government savings and industry cost may not be precise, the estimates provide enough information to show that metal compositions that increase the potential government savings may also increase the potential industry costs. As discussed previously, the U.S. Mint has determined that it is not viable to change the quarter to a steel-based coin. As a result, the potential cost impact to industry is greatly reduced. Specifically, industries that only accept the quarter—such as the coin laundry and amusement industries—would not incur any costs if the quarter did not change. Table 3 shows options for changing coin composition and the potential government savings and industry cost impact of each option. These options do not include the possibility of making no changes to the current coin composition, which would result in no government savings and no costs to industry. The first option calls for changing the nickel and dime to a steel-based coin (either multi-ply or nickel-plated steel). The U.S. Mint estimates that this option could save the government from $32 million to $39 million per year. Using U.S. Mint data, we estimate that if these savings were consistently realized over 10 years, the savings would be from $320 million to $390 million. However, coin machines for some industries, such as the vending industry, would require a one-time update to accept these new nickels and dimes because the properties of these coins would change. Industry costs are unknown for this option because industry estimates reflect the cost to change all coin acceptance machines for all denominations. Under this option, there may be significant costs for those industries that accept the nickel and the dime (about $1.1 billion to about $4.1 billion for the vending, parking, and armored car industries, as shown in table 2). However, the costs would likely be less than currently reported. The second option is to change only the nickel to a plated or stainless steel coin. The U.S. Mint estimates that this option could save the government from $25 million to $32 million per year. As with option 1, there may be significant costs for a few industries that accept the nickel, but overall industry costs would likely be less than currently reported (about $1.1 billion to about $4.1 billion for the vending, parking, and armored car industries, as shown in table 2). It is also unclear whether some industries would choose not to modify their coin acceptance machines if only the nickel changed. Some owners of coin acceptance machines may decide to no longer accept the nickel instead of updating their coin acceptance machines to accept the new nickel. The third option—the seamless option—would increase the amount of copper in the nickel, dime, and quarter. The U.S. Mint estimates that the government could save about $8 million per year. The U.S. Mint is conducting research to determine whether it can reduce the amount of nickel in each coin and increase the amount of copper while ensuring that the coins work in current coin acceptance machines. The U.S. Mint is expected to report on the results of this research in 2016. If the new coins worked seamlessly in current machines, it is expected that there would be no costs to industry.
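The 10-year figures above are straight multiplication: the annual savings estimate, held constant, times the number of years. The brief sketch below applies that arithmetic to all three options using the annual figures reported above; the report projects only option 1 this way, so the option 2 and option 3 totals are our extension of the same constant-savings assumption and are illustrative only.

```python
# Annual savings ranges reported above, in millions of dollars per year.
options = {
    "Option 1: steel nickel and dime": (32, 39),
    "Option 2: steel nickel only": (25, 32),
    "Option 3: seamless cupronickel": (8, 8),
}
years = 10  # assumes the annual savings are consistently realized each year

for name, (low, high) in options.items():
    if low == high:
        total = f"about ${low * years} million"
    else:
        total = f"${low * years} million to ${high * years} million"
    print(f"{name}: {total} over {years} years")
```

Any year in which the savings are not fully realized, for example because of a swing in metal prices or lower coin production, would lower these totals.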
As previously discussed, the Act requires that any new coins work without interrupting existing coin acceptance equipment "to the greatest extent practicable." In addition, it requires that any recommendation to change coin composition not involve "significant" changes to coin acceptance machines. The U.S. Mint has not yet determined how to quantify "significant" change and "to the greatest extent practicable" to determine what, if any, recommendations could be made to change the composition of coins that would be authorized under the Act. The Act does not set a time frame for providing recommendations to Congress, and the U.S. Mint has not established a time frame for making any recommendations. However, U.S. Mint officials told us that if and when the Department of the Treasury makes any recommendations to Congress, the U.S. Mint and Treasury officials will ensure that the recommendations are within the framework of the Act. We provided a draft of this report to the Secretary of the Treasury for review and comment. In its comments, the U.S. Mint expressed concerns regarding two issues: our use of the Cost Guide to assess the U.S. Mint's cost-estimating process and our lack of discussion regarding a 2012 report by Concurrent Technologies Corporation (CTC). The U.S. Mint also provided additional context regarding the Act's requirements that the U.S. Mint consider the effect on industry of any coin change. Regarding our use of the Cost Guide, the U.S. Mint took exception to our statement that the U.S. Mint's cost estimates may not be reliable because they did not consistently follow the best practices outlined in the Cost Guide. The U.S. Mint stated that there is no requirement for agencies to use the Cost Guide and that it is not intended for use on non-capital, operational changes such as manufacturing coinage. The U.S. Mint also stated that manufacturing coinage is covered under the Statement of Federal Financial Accounting Standard 4 (SFFAS 4), which the U.S. Mint used in developing its cost estimates. While we agree that there is no requirement for agencies to use the Cost Guide, the guide consists of best practices that can guide government managers as they assess the credibility of a cost estimate. In addition, SFFAS 4 primarily refers to fundamental elements of managerial cost accounting rather than cost analysis and estimating. In our view, the Cost Guide is the most appropriate criteria to assess the reliability of cost estimates. The Cost Guide includes best practices, from the private and public sectors, in cost estimating for capital assets. A coin is produced using capital equipment and is a physical asset. Based on all of these factors, we continue to assert that the Cost Guide is both sufficient and reasonable criteria for assessing the U.S. Mint's cost-estimating procedures. In its letter, the U.S. Mint also stated that our report should have discussed a 2012 report by CTC. We used this report to inform our assessment of the U.S. Mint's cost-estimating process and interviewed CTC representatives. However, the U.S. Mint's 2014 Biennial Report to Congress contained both updated cost estimates on viable metal alternatives and updated information from industry stakeholders. As a result, we obtained information on the U.S. Mint's cost estimates from that report and reflected that information in our report. Regarding the Act's requirement to consider the effect on industry of any coin change, the U.S.
Mint emphasized that it has analyzed the Act in its entirety, including the statutory provisions that require special considerations for the vending industry regarding any recommendations for new metal coin compositions. The U.S. Mint also stated that it will be important to provide a current analysis of the effect on industry when the U.S. Mint is ready to recommend new coinage materials to Congress because technology used by the industry is constantly changing. The U.S. Mint also provided technical comments, which we incorporated as appropriate. The U.S. Mint's comments are reproduced in appendix III. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees and the Secretary of the Treasury. In addition, the report will be available at no charge on GAO's website at http://www.gao.gov. If you or members of your staff have questions about this report, please contact me at (202) 512-2834 or rectanusl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. We addressed the following questions: (1) What is known about potential government savings from changes to the metal composition of coins? (2) What is known about potential industry costs from changes to the metal composition of coins? (3) How could potential coin composition options affect government savings and industry costs? To determine what is known about the potential government savings from changes to the metal composition of circulating coins, we reviewed savings estimates reported by the U.S. Mint in its December 2012 and 2014 biennial reports to Congress, which were required by the Coin Modernization, Oversight, and Continuity Act of 2010 (the Act). These estimates identified savings by coin denomination (nickel, dime, and quarter) and by alternative metals (cupronickel and steel-based alternatives) when compared to fiscal year 2014 actual costs. We also reviewed the Concurrent Technologies Corporation study on alternative metals conducted under contract with the U.S. Mint. We interviewed U.S. Mint officials to understand (a) the rationale for not considering metal alternatives to the penny; (b) the purpose, data sources, methodology, and assumptions used in developing savings estimates; and (c) the process followed in developing these cost estimates. We compared the U.S. Mint's cost-estimating process with best practices. Specifically, the GAO Cost Estimating and Assessment Guide (Cost Guide) identifies best practices that represent work across the federal government and are the basis for a high-quality, reliable cost estimate. We analyzed the extent to which the cost-estimating process used by the U.S. Mint to develop these cost and savings estimates followed the 12-step process described in cost-estimating best practices and assigned each step a rating of not met, minimally met, partially met, substantially met, or fully met. We also held detailed discussions with U.S. Mint officials and reviewed their documentation to identify key factors that could affect the potential costs and savings, such as changes in coin production or workforce and operational changes that may not have been included directly in the estimates.
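One such factor, discussed earlier in this report, is the price of metal. The sketch below illustrates the kind of sensitivity analysis the Cost Guide describes, applied to the seamless alternative: the per-coin metal saving is recomputed at the low and high ends of the 2011 to 2015 copper price range cited above. The coin mass, the nickel and manganese prices, and the assumption that manganese makes up the remainder of the new alloy are ours, chosen only to show the mechanics; this is not the U.S. Mint's analysis.

```python
# Illustrative sensitivity of the seamless alternative's per-coin metal saving to
# the copper price (current alloy 75% Cu / 25% Ni versus roughly 77% Cu / 20% Ni,
# with manganese assumed to make up the remainder).
GRAMS_PER_POUND = 453.6
COIN_MASS_G = 5.0        # assumed coin mass, grams
NICKEL_PRICE = 6.00      # assumed nickel price, dollars per pound
MANGANESE_PRICE = 1.00   # assumed manganese price, dollars per pound

def metal_cost_per_coin(cu_share, ni_share, mn_share, copper_price):
    cost_per_pound = (cu_share * copper_price +
                      ni_share * NICKEL_PRICE +
                      mn_share * MANGANESE_PRICE)
    return cost_per_pound * COIN_MASS_G / GRAMS_PER_POUND

for copper_price in (2.24, 4.58):  # 2011-2015 low and high prices per pound
    current = metal_cost_per_coin(0.75, 0.25, 0.00, copper_price)
    seamless = metal_cost_per_coin(0.77, 0.20, 0.03, copper_price)
    print(f"copper at ${copper_price:.2f}/lb: per-coin saving of about ${current - seamless:.4f}")
```

Documenting how the estimate moves as each input moves is the point of the step; under these illustrative prices the saving narrows somewhat as copper becomes more expensive, because the new alloy contains more copper and less nickel than the current one.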
We shared our Cost Guide, the criteria against which we evaluated the Mint's savings estimates, and our preliminary findings with U.S. Mint officials. When warranted, we updated our analyses based on the agency response and additional documentation provided to us. Finally, we corroborated our analyses in interviews with U.S. Mint officials responsible for developing the savings estimates. In addition, we reviewed two other reports prepared by Navigant Consulting—a global, independent consulting firm—under contract from Jarden Zinc, a material supplier to the Royal Canadian Mint. These reports contained cost-savings estimates based on (a) producing the nickel, dime, and quarter using multi-ply plated steel—the material currently used by the Royal Canadian Mint for its coin denominations—and (b) cost estimates should the U.S. Mint eliminate the penny. We interviewed the authors of these reports to better understand the purpose, scope, and methodology used in developing these estimates. We did not assess the reliability of the Navigant cost estimates using GAO best practices because this assessment was not within the scope of the review. To determine what is known about potential industry costs from changes to the metal composition of coins, we reviewed all 20 industry stakeholders' written responses that were reprinted in the December 2014 biennial report. The U.S. Mint obtained these responses through an April 10, 2014, Federal Register notice in which it solicited written responses from industry on the impacts of changing the metal composition of circulating coins. To focus our review, we selected a non-generalizable sample group of 11 industry stakeholders. See table 4. We made our selection using a variety of criteria—such as a mix of industries (manufacturing, logistics, and commerce); being specifically identified in the Act; industries that reported a sizeable cost impact; the size of the industry; and the mix of coin denominations, among others. This resulted in three stakeholders coming from the coin-machine manufacturing industry and a raw material supplier, two from the logistics industry, and six from the commerce industry. While information from our industry stakeholders is not generalizable, the diverse perspectives of the stakeholders gave us a better understanding of the impacts on industry costs should a change in coin composition occur. We then contacted these selected stakeholders to understand the data sources, methodology, and assumptions used to develop their cost estimates. We asked these stakeholders to identify the coin denominations that are of importance to them, the type of coin acceptance machines that are used in their industries, and the circumstances that would require software and/or hardware changes in order to accept new coins. Finally, we asked these stakeholders to comment on potential changes to business practices that we developed. These changes were designed to reduce the conversion costs to industry. We did not, nor did the U.S. Mint, validate any industry cost estimates. Finally, we interviewed officials from the Royal Canadian Mint and the Royal Mint of the United Kingdom about their experiences in transitioning to steel-based coins.
We also interviewed industry representatives, who generally called for no changes to the quarter (and in one case, no changes to coins at all). From this work, we independently identified some options for changing coin composition. This list of options is not exhaustive. For each option we identified, we described how each option could affect government savings and industry costs. We conducted this performance audit from March 2015 to December 2015, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We developed the GAO Cost Estimating and Assessment Guide in order to establish a consistent methodology that is based on best practices and that can be used across the federal government for developing, managing, and evaluating program cost estimates. We have identified 12 steps that, followed correctly, should result in reliable and valid cost estimates that management can use for making informed decisions. We assessed the U.S. Mint’s cost estimation process using the 12 steps associated with high-quality, reliable cost estimates. Table 5 provides a summary assessment on our comparison of the estimate to best practices. In addition to the contact name above, John W. Shumann (Assistant Director), Aisha Cabrer, Tim Guinane, Dave Hooper, Jennifer Leotta, Steve Martinez, Josh Ormond, Amy Rosewarne, and Elizabeth Wood made key contributions to this report. | The U.S. Mint, a bureau of the Treasury, produced about 13 billion coins in 2014. Since 2006, metal prices have risen to where the unit costs of a penny and nickel exceed their face value. The U.S. Mint was directed by statute to develop and evaluate the use of new metals that would reduce the costs of coin production while minimizing the impact on coin accepting equipment. Treasury is authorized to recommend coin changes to Congress based on the U.S. Mint's analysis and has not yet done so. GAO was asked to examine the U.S. Mint's efforts. This report examines (1) what is known about potential government savings from changes to the metal composition of coins; (2) what is known about potential industry costs from changes to the metal composition of coins; and (3) how potential coin composition options could affect government savings and industry costs. GAO reviewed legislative provisions and U.S. Mint estimates of government savings; compared the U.S. Mint's estimating process to best practices; and reviewed cost estimates from associations that represent selected businesses that submitted estimates to the U.S. Mint, such as the vending and laundry industries. GAO interviewed U.S. Mint officials and industry representatives to understand how their estimates were developed. GAO is not making recommendations in this report. In comments, the U.S. Mint questioned GAO's use of the Cost Guide to assess the U.S. Mint's estimates. GAO continues to believe it is appropriate to use the Cost Guide to assess the U.S. Mint's estimates. The U.S. Mint estimated that the government could potentially save between $8 million and $39 million per year by changing the metal composition of the nickel, dime, and quarter. 
The estimated savings of $8 million would come from slightly changing the current metal in coins, which would decrease metal costs and retain the characteristics of existing coins. The savings of $39 million would come from changing the nickel and dime to a plated steel coin, which would change the coin's weight and other characteristics. While the U.S. Mint previously estimated potential savings of $83 million per year by changing the nickel, dime, and quarter to a plated steel-based coin, the U.S. Mint determined that it was not viable to change the quarter because less-valuable foreign coins would have similar characteristics to a steel quarter and could be used as counterfeit quarters. GAO found that the U.S. Mint's cost-estimating process does not fully align with best practices outlined in the GAO Cost Estimating and Assessment Guide (Cost Guide) and as such may not result in precise estimates. For example, U.S. Mint officials discussed but did not conduct a sensitivity analysis—a best practice—that would have allowed them to know how savings estimates could be affected by changes in metal prices. However, the U.S. Mint's estimates can provide insight into the general magnitude of potential savings. Associations representing selected industries that use coin acceptance machines estimated a cost impact ranging from $2.4 billion to $10 billion to modify an estimated 22-million coin machines, such as vending machines, to accommodate steel-based coins. According to these associations, these costs would be incurred because coin machines would require modifications to accept new coins while continuing to accept current coins. However, GAO found that these estimates may be overstated for several reasons. First, the vending industry assumed 7-million vending machines would require modification, but a 2015 industry study estimated that there are 4.5-million vending machines in the United States. Second, the cost estimates assumed steel changes to all coins, but the U.S. Mint has determined it is not viable to change the quarter. Therefore, machines that only accept quarters (such as coin laundry machines) would not require modification. However, any change in coin composition that requires changes to coin acceptance machines will result in some industry costs. Although government savings and industry cost estimates may not be precise indicators of savings and costs, they nonetheless show that metal compositions that would increase government savings also increase industry costs. U.S. Mint estimates show that one change could result in no industry costs but savings of only $8 million annually. In contrast, changing the nickel and dime to multi-ply plated steel coins could save $39 million annually but result in substantial industry costs. The Coin Modernization, Oversight, and Continuity Act of 2010 requires that any new coins work in existing machines that accept coins "to the greatest extent practicable." U.S. Mint officials have not yet analyzed whether the options they are considering meet these criteria for making recommendations to Congress. U.S. Mint officials said that when and if the Department of the Treasury (Treasury) makes recommendations to Congress, they will ensure that recommendations are within the framework of the Act.
A long-standing challenge for federal agencies has been developing credible and effective performance management systems that can serve as a strategic tool to drive internal change and achieve results. According to OPM regulations, performance management is the systematic process by which an agency involves its employees, as individuals and members of a group, in improving organizational effectiveness in the accomplishment of agency mission and goals. Performance management includes such actions as setting expectations, continually monitoring performance, developing the capacity to perform, periodically rating performance in a summary fashion, and rewarding good performance. In December 2009, the LMR Council was created to establish a cooperative and productive form of executive branch labor-management relations in an effort to improve the productivity and effectiveness of the federal government. The LMR Council is co-chaired by the Director of the Office of Personnel Management (OPM) and the Deputy Director for Management of the Office of Management and Budget (OMB), with members representing executive branch departments and agencies, several labor unions, and other management associations. The LMR Council serves as an advisor to the President on matters involving executive branch labor-management relations; it also promotes partnership efforts between labor and management within the executive branch. For example, the order that created the Council states that management should discuss workplace challenges and problems with labor and endeavor to develop solutions jointly, rather than advise union representatives of predetermined solutions to problems and then engage in bargaining over the impact and implementation of the predetermined solutions. In April 2011, the LMR Council, in conjunction with the CHCO Council, set out to examine the federal government's performance management accountability framework and to make recommendations for improvement. The CHCO Council supports OPM in the strategic management of human capital at federal agencies and provides a forum for senior management officials to exchange human resources best practices. By May 2011, a work group was formed (with members representing various federal agencies, labor unions, and management organizations from both the LMR and CHCO Councils) to discuss ways of examining the existing system of employee performance management that the work group felt were different from previous attempts, which had focused on actions such as increasing employee accountability through changes to regulations. The work group developed the GEAR report, issued in 2011, which identified challenges with the federal performance management system, such as a lack of evidence that the mechanical aspects of public and private sector performance management systems, such as rating levels and awards, do a good job of improving employee and organizational performance; and a disconnect among the various functions responsible for organizational performance improvement and employee performance improvement. For example, there were few formally structured opportunities for the individuals responsible for these functions to interact and communicate.
As a result of identifying these challenges, according to the GEAR report, the work group agreed to focus on the relational elements of organizational and employee performance management, such as how to set clear expectations in work plans through frequent formal and informal feedback between supervisors and employees; and how to engage employees and agency managers, through their labor unions and CHCOs, and identify successful practices that have improved the selection and training of supervisors and the engagement between employees and supervisors, such as two-way communication and continual informal feedback. The work group constructed the GEAR framework, as presented in the GEAR report, with the following recommendations: 1. Articulate a high performance culture. 2. Align employee and organizational performance management. 3. Implement accountability at all levels. 4. Create a culture of engagement. 5. Improve supervisor assessment, selection, development, and training. OPM officials told us that the practices in the GEAR report are well known and credible but not universally applied in federal agencies. According to an OPM senior official, the GEAR recommendations are "common sense but not common practice." The same official noted that the GEAR recommendations align with current regulations addressing federal performance management, but that GEAR adds specific strategic and technical guidance and communication strategies. Agency officials said that combining the recommendations into one framework can aid officials in maintaining focus and attention on improving performance management in the context of competing priorities. The GEAR recommendations and framework were presented to and accepted by the LMR Council in November 2011. By December 2011, five agencies had volunteered to implement the GEAR framework. Of these, three agencies – DOE, HUD, and OPM – implemented GEAR agency-wide with few exceptions. Two agencies – Coast Guard and NCA – implemented GEAR in single units. The agencies all created GEAR implementation plans and periodically met as a group to discuss strategies and challenges, and to share successes and additional training opportunities throughout implementation. The pilot agencies also periodically reported their progress to both the LMR and CHCO Councils. No additional agencies have volunteered to pilot GEAR. However, in the Analytical Perspectives: Performance and Management section of the President's Budget for Fiscal Year 2014, there is a stated goal of broader application of the GEAR framework across the federal government, with no stated time frame for completion. We have previously identified a set of key practices for modern, effective performance management, which appear with summary descriptions in table 1. Our current review found that the GEAR framework generally addressed seven of these key practices, but it does not give clear attention to two other practices. As shown in table 1, two practices—connecting performance expectations to crosscutting goals, and linking pay to organizational performance—do not receive clear attention in the GEAR framework. LMR work group members who helped to draft the GEAR report told us they did not specifically discuss agency crosscutting goals during the drafting and instead focused on aligning employee and organizational performance management.
However, they also told us that GEAR does not deter agencies from connecting performance expectations to crosscutting goals, and that they felt that crosscutting goals would be subsumed under individual and agency performance goals, as appropriate. OPM officials agreed that connecting performance expectations to crosscutting goals is essential to improving performance across the federal government. In addition, these same LMR work group members said that they excluded linking pay to performance because Congress authorizes pay. Congress is responsible for authorizing pay-for-performance plans, and tools to reward good performance with pay are increasingly limited in the federal government. They said that the GEAR report suggests appropriate action for both good and poor performance and that use of pay-for-performance could be understood as such an action. According to the GEAR report, effective and productive relationships between managers and employees are necessary for performance improvements. The greater an employee's level of engagement, the more likely he or she is to go beyond the minimum required. GEAR emphasizes promoting employee-supervisor engagement through continual informal feedback and frequent (up to four times per year) formal feedback. Beyond the establishment of competencies and aligned goals, GEAR recommends frequent interaction and accountability—both for supervisors and employees—based on measurable goals. The GEAR report also recommends that agencies improve the assessment, selection, development, and training of supervisors, emphasizing that agencies should select and assess supervisors based on supervisory and leadership proficiencies, rather than technical competencies, and should hold them accountable for performance of supervisory responsibilities. In addition, GEAR emphasizes the need to hold supervisors accountable for providing feedback, documenting performance discussions, and holding poorly performing employees accountable. According to the GEAR report, focusing on accountability of managers and supervisors helps to ensure continual, effective communication and to make performance management a daily feature of work. Senior officials at three of the pilot agencies reported that supervisors sometimes say there is not enough time for supervisory responsibilities. For example, officials at DOE and HUD said that supervisors sometimes see their supervisory responsibilities as secondary to programmatic responsibilities and may not allocate sufficient time to set clear expectations or provide frequent feedback to employees.
We have previously concluded that integrating human capital planning with broader organizational strategic planning is essential for ensuring that agencies have the talent and skill mix needed to cost-effectively execute their mission and program goals. While OPM and the CHCO Council have been supportive of the pilot effort to date, neither entity has identified its specific roles or responsibilities for achieving the current administration's goal of implementing GEAR more broadly across the federal government going forward. Clearly defined roles and responsibilities will be important for maintaining the momentum of the pilot, sharing the pilot effort's lessons learned, and encouraging additional agencies to implement the GEAR framework. OPM has taken several steps to help the pilot agencies implement GEAR. For example, OPM officials have facilitated conversations among the pilot agencies to share information and lessons learned, and have made web-based training on performance management available to federal agencies at no cost. According to OPM officials, OPM, as co-chair of the CHCO Council, will provide leadership and direction for the government-wide implementation of GEAR, but the future of GEAR will be determined by the CHCO Council as a whole and not by OPM alone. They noted that the GEAR framework is intended to be flexible and thus did not want to mandate how agencies should implement it. CHCO Council officials agreed that the framework provides flexibility to the agencies; however, the same CHCO officials also cited the need for some standardization of GEAR metrics to ensure accountability among agencies. As noted in the Analytical Perspectives: Performance and Management section of the President's Budget for Fiscal Year 2014, "the CHCO Council is currently reviewing the progress of GEAR and lessons learned in these agencies and identifying other leading practices across the Federal sector and private sector with the goal of broader application of the GEAR framework across the Federal Government." For its part, the CHCO Council is reviewing the pilot agencies' implementation of GEAR with an eye toward identifying promising practices. Specifically, CHCO Council officials told us that an internal work group is identifying promising practices and metrics that, when combined, may provide additional guidance in the form of a diagnostic toolkit to assist agencies in implementing the GEAR framework. According to CHCO Council work group members, a toolkit is needed because the GEAR recommendations were high-level in nature, and the report itself did not include practical guidance. For example, an agency may want to implement GEAR's recommendation to create a culture of engagement, but the GEAR report provided limited guidance on how to do so. According to a CHCO Council work group member, the diagnostic toolkit will enable agencies to identify their level of "maturity" on a scale by analyzing the results of the Federal Employee Viewpoint Survey (FEVS), the Human Capital Framework (HCF), the non-SES Performance Appraisal Assessment Tool (PAAT), and the SES certification process. Federal agencies can use these tools to measure whether, and to what extent, conditions characterizing successful organizations are present in federal agencies (FEVS); to assist officials in achieving results in personnel management programs (HCF); and to assess agencies' performance appraisal systems and develop plans and strategies for making improvements (non-SES PAAT and SES certification process).
Agencies can then use the results to identify a range of promising practices that the agency may choose from, based on identified resources and needs. According to the same CHCO Council work group members, these promising practices align with the five GEAR recommendations and reflect a range of initiatives (compiled from both public and private sector efforts, as well as pilot agencies' experiences) that provide options for agencies based on their specific needs. The CHCO Council's subgroups plan to complete the diagnostic toolkit by August 2013 and to present the final product to the CHCO Council by the end of September 2013. However, beyond its completion in September 2013, the CHCO Council has no stated plans to update the toolkit, such as collecting lessons learned on an ongoing basis from the pilot agencies, or including additional promising practices as needed. Without gathering information from the pilot agencies as they continue their implementation process, valuable information that could help agencies implement GEAR in the future may be lost. While the toolkit shows promise for refining the future implementation of GEAR, its utility in this regard could be limited because neither the CHCO Council nor OPM has stated plans to broadly disseminate it. CHCO Council officials told us that the completed diagnostic toolkit will be presented to the entire CHCO Council but that there is no plan to share that information on a broader basis, even though broader sharing would be consistent with the Administration's goal to more broadly implement GEAR government-wide. CHCO Council officials stated that it was OPM's responsibility to determine the future of the diagnostic toolkit. One vehicle for disseminating the toolkit's information more broadly could be the OMB Max Federal Community Website, where OPM officials have already posted informational material on GEAR and where pilot agencies can add their own information and experiences. Without making such information more widely available, front-line managers, supervisors, representatives of labor unions, and other federal human capital stakeholders may not have access to information that could improve employee engagement. Moving forward, it will be important for both OPM and the CHCO Council to agree upon their specific roles and responsibilities if the goal of broader application of the GEAR framework government-wide is to be realized. Once specific roles and responsibilities have been established going forward, plans may be made to update the GEAR framework and diagnostic toolkit as needed and to broadly disseminate the information so that stakeholders government-wide who are already implementing GEAR (or may implement in the future) will have access to the most recent guidance available. The five pilot agencies adopted various approaches to implementing all five of the GEAR recommendations based on their needs and available resources. The GEAR report provides agencies with a framework focused on feedback, employee-supervisor engagement, and improving supervision; it gives agencies the flexibility to implement as they see fit. Thus, three of the five agencies – DOE, HUD, and OPM – implemented GEAR agency-wide, while two agencies – Coast Guard and NCA – established pilots in single units. DOE, HUD, and OPM officials all said that the project had the best chance of changing the organizational culture if they implemented it agency-wide.
For example, HUD officials said they felt that implementing GEAR agency-wide would permit it to be better embedded in the performance culture. An OPM official said that GEAR focused attention on previously identified human capital needs, such as supervisor assessment, that had not been addressed due to limited resources and competing priorities. VA is implementing GEAR within NCA as a single-unit pilot in its Memorial Service Network (MSN) II in the southeastern region of the country. According to VA officials, VA chose the NCA because it has a reputation as a high-performing organization and an existing focus on metrics and feedback. NCA officials said they chose to implement GEAR as a single unit, rather than agency-wide, because doing so allowed them to provide closer attention and support. The Coast Guard is implementing GEAR at Base Boston, which, according to officials, was chosen as the pilot site for several reasons: the presence of both general schedule (GS) and wage grade (WG) employees (who happen to be supervised by both military and civilian personnel), a wide variety of occupations, and the representation of a single union – the American Federation of Government Employees (AFGE) – rather than the multiple unions that are present at other Coast Guard bases. Coast Guard officials said they chose to pilot GEAR at a single unit because agency-wide implementation would bring challenges such as dealing with a large military population (approximately 42,000 military personnel who rotate bases approximately every two to three years and operate on a different performance management plan than the approximately 9,000 civilian employees) and negotiating with approximately nine unions. According to one Coast Guard official, developing more than one site would create a capacity issue that would make planning and implementation very difficult. Table 2 shows how many employees will be covered by GEAR at each agency, as well as each agency's time frame for implementation. Additional information on the actions taken by each agency is included in appendices V-IX.

Each of the agencies said they had engaged labor unions during GEAR implementation. Among the agencies that described the results of this engagement, DOE officials said that, prior to launching the pilot, the Office of the Chief Human Capital Officer briefed the local unions that cover DOE employees but received no feedback. In contrast, NCA officials said that the two unions that cover their employees, AFGE and the National Association of Government Employees, have been involved in GEAR implementation and made contributions to NCA's approach to GEAR. Coast Guard officials said the agency met with union representatives for several extensive discussions involving facilitators and brainstorming sessions in order to include union input in the final GEAR implementation plan. As a result of these discussions, Coast Guard officials told us that the agency has the approval of the union and that both parties are mutually responsible for the successful implementation of GEAR. Representatives from labor unions that we interviewed generally agreed that the GEAR pilot agencies had briefed them during GEAR implementation. Finally, although the five pilot agencies have defined different benefits that they hope to achieve through implementing GEAR, some common benefits include improved performance management overall, better engagement between employees and supervisors, and improved communication throughout the agencies.
The GEAR pilot agencies have demonstrated leadership over their own implementation efforts. Our work with the pilot agencies identified lessons learned both from implementation within their own agencies and from the experiences of others, including the following:

Strong agency leadership support from the beginning helped provide needed attention and focus on implementation efforts. For example, DOE's Secretary sent a memo to all DOE employees stating his personal support for the LMR Council GEAR report and the agency's commitment to the pilot effort.

Early stakeholder involvement, including employees, management, the executive team, and unions, resulted in greater transparency and fewer obstacles during implementation. For example, NCA officials told us that two labor unions that have been involved in GEAR implementation contributed valuable input that enhanced the agency's approach to GEAR, such as suggesting that exceptional performance be defined in performance plans.

Timing GEAR implementation to coincide with the beginning of the annual performance cycle made it easier to track changes resulting from GEAR. For example, Coast Guard officials told us that they decided to implement GEAR at the start of their annual performance cycle, based on the experience of another pilot agency that implemented GEAR within a cycle and faced greater difficulty because of its timing.

Administering employee surveys prior to implementation helped identify the greatest needs and establish a baseline to better track results. For example, Coast Guard administered a survey to its Base Boston employees to solicit their thoughts, experiences, and assessment of the current state of the employee performance management systems.

Leveraging shared training opportunities across agencies helped conserve limited resources. For example, OPM officials told us that they worked with the Office of the Director of National Intelligence to make a suite of performance management web courses available to all agencies free of charge.

Incorporating supervision into an agency's performance expectations helped hold supervisors accountable. For example, according to DOE officials, they developed a mandatory supervisory element which holds supervisors accountable for each phase of the performance process.

Allowing additional time for supervisors to meet their requirements supported communication and employee feedback. For example, HUD instituted quarterly in-service days for managers for the purpose of focusing on communication and employee feedback.

Treating GEAR as a long-term culture change commitment helped ensure continued momentum for implementation. For example, HUD officials told us that agency leadership is committed to cultural transformation and the time required to accomplish it.

As the pilot agencies continue with their implementation plans, additional lessons may be learned. As we stated earlier in this report, OPM and CHCO Council officials have not yet identified a plan to capture additional lessons learned after the diagnostic toolkit is completed. As we concluded in May 2012, a well-developed and documented project plan can help ensure that agencies are able to gauge progress, identify and resolve potential problems, and promote accountability at all levels of the project, increasing the likelihood of successful implementation. Project planning is the basis for controlling and managing project performance, including managing the relationship between cost and time.
As we have previously concluded, preparing a project plan encourages agency managers and stakeholders to systematically consider what is to be done, when and how it will be done, what skills will be needed, and how to gauge progress and results. Agency approaches to such planning can vary with each agency's particular needs and mission. Nevertheless, existing best practices stress the importance of accountability and sound planning for any project. Inherent in such planning is the development and use of a project management plan that describes, among other factors, the project's objectives, implementation actions, lines of responsibility, estimated schedule for development and implementation, and performance measures. Having accurate and transparent project cost and schedule information is also essential to effective oversight.

Each of the pilot agencies developed a GEAR project plan that outlined specific actions. Other elements (such as schedules and roles and responsibilities) were either included to varying degrees or, in some cases, not included at all. Best practices emphasize the importance of establishing a complete description that ties together all project activities and evolves over time to continuously reflect the current status and desired outcome of the project. For example, DOE's GEAR project plan was the most thoroughly documented of the five plans we assessed and included project planning best practices for all five elements we assessed. The other four agency plans did not include all project planning best practices. Not including such best practices could limit the plans' effectiveness in guiding further implementation of the GEAR framework. As a result, the information included in the plans may not be specific enough to improve engagement, provide accountability, or measure progress in the agencies.

We identified project planning best practices from Standards for Internal Control in the Federal Government, our definitions of performance measurement, and our guide for leading practices in project planning. We assessed the information contained in the agencies' GEAR plans against the inclusion of the following best practices: (1) plan objectives describing a goal; (2) specific actions needed to attain that goal; (3) roles and responsibilities identified and assigned to project stakeholders; (4) schedules containing logically related project elements leading to the goal; and (5) valid performance measures that permit comparison between desired outcomes and actual results. Since the volunteer pilot agencies are in the process of implementing GEAR, we assessed them on the contents of the plan and not against the progress made towards implementation of the plan. The GEAR report did not include guidance on developing project plans. Table 3 identifies the extent to which the five agency GEAR project plans address each of the project planning best practices.

We found that the Coast Guard's GEAR plan includes information on objectives, specific actions, and roles and responsibilities, as shown in table 4. According to the plan, the Coast Guard selected its pilot location at Base Boston, with both military and civilian personnel, as a result of discussions between management and labor unions. The plan clearly states that a primary project manager in the Coast Guard's human resources office and a project manager at Base Boston are in charge of GEAR implementation.
The Coast Guard’s plan has a high-level schedule that includes nine different actions and dates, such as deploying a communication plan to the workforce in February 2013, with the last date being the start of the pilot in April 2013. This is the last date in the plan associated with a specific action. Elsewhere in the plan, the Coast Guard lists pending and ongoing actions under each of its objectives but does not include dates. According to Coast Guard officials, the pilot will run the length of a performance cycle (from April 1, 2013 to March 31, 2014). However, based on the plan, we are unable to determine whether the Coast Guard has additional actions planned for implementing GEAR that extend beyond preparing for and starting the project itself. Best practices note that planning and scheduling are continual processes throughout the life of a project. They state that planning may be done in stages throughout the project as stakeholders learn more details. Because GEAR-related work continued at Coast Guard after April 1, 2013 the information included in the plan is insufficient to gauge progress. In addition, the plan lacks detail in defining performance measures that include clearly stated performance targets that are aligned to the plan’s objectives and enable the agency to monitor and measure progress. According to the plan, the Coast Guard disseminated a pre-pilot survey in March 2013 at Base Boston with plans for additional surveys mid- and post-pilot to create a baseline. In addition, the Coast Guard created a data repository to gather and analyze performance management responsibilities. However, for both examples, there is no statement within the plan specifying whether these are performance measures or identifying what types of data are being collected. According to internal control standards, performance measures should be established to evaluate the success or failure of their activities and programs. Performance measurement involves identifying performance goals and measures, establishing performance baselines by tracking performance over time, identifying targets for improving performance, and measuring progress against those targets. Without additional information on performance measures, the Coast Guard is missing opportunities to gauge progress and measure the success of their implementation efforts and, as a result, the agency may not have all available data for decision- making. Specifically, agency officials told us that the success of GEAR’s implementation at Base Boston is one of the factors that will determine whether or not the Coast Guard exports the GEAR framework to its remaining bases and employees. The GEAR framework presents an opportunity for federal agencies to increase employee engagement and improve performance management. Indeed, even though the GEAR pilot has only been in place a short period of time, agency officials have already described such benefits as improved engagement and communication between employees and supervisors. Although the recommendations outlined in the GEAR report do not represent new concepts, implementing them in practice may present challenges unique to each agency, as demonstrated by the experience of the five pilot agencies. OPM has facilitated conversations among participating GEAR agencies to identify lessons learned and the CHCO Council is developing a GEAR toolkit to help agencies improve performance management and increase employee engagement. 
However, going forward, the roles and responsibilities of OPM, the CHCO Council, and participating federal agencies have not been fully defined, including how future promising practices will be identified and how information on the government-wide implementation of GEAR will be updated and disseminated. Clearly defined roles and responsibilities will be important for capitalizing on the improvements made at the five pilot agencies, as well as for sustaining and achieving the current Administration's goal of implementing GEAR more broadly. Without these next steps, the momentum and lessons learned from the GEAR pilot may be lost, and desired performance management reforms in federal agencies may not be fully realized.

The five pilot agencies deserve significant credit for taking steps to implement the GEAR recommendations and improve performance management within their organizations in a relatively short amount of time. Although it is too early to tell whether GEAR is changing organizational culture within the pilot agencies, leading practices identified during the pilot should help inform GEAR implementation efforts at other agencies. However, as part of implementing the GEAR model, most of the pilot agencies did not develop complete project plans that would help guide their GEAR-related efforts into the future. Updating these plans to fully reflect the nature and scope of their efforts will help ensure that these agencies meet the goals of their respective GEAR efforts and serve as a model for other agencies to follow.

Recognizing that moving toward a more performance-oriented culture within federal agencies is likely to be a continuous effort, and to ensure that the opportunity the GEAR recommendations offer to improve performance management is not lost, we recommend that the Acting Director of OPM, in collaboration with the Chief Human Capital Officers Council, define the roles and responsibilities of OPM, the CHCO Council, and participating federal agencies going forward as the GEAR framework is implemented government-wide. In doing so, OPM, in collaboration with the CHCO Council, could define roles and responsibilities for such tasks as: supplementing the GEAR report and updating the diagnostic toolkit as needed to reflect additional promising practices, lessons learned (such as those we identified), and guidance on using metrics, including consideration of whether connecting performance expectations to crosscutting goals should be part of the GEAR framework; and disseminating the information contained in the GEAR report and diagnostic toolkit through multiple venues and various government-wide websites, such as OMB Max, to ensure that federal managers, supervisors, and other stakeholders have access to promising practices and additional guidance on improving performance management.
In addition, to improve agencies' GEAR implementation plans, we recommend that: the Secretary of Homeland Security direct the Commandant of the Coast Guard to take the following two actions to update the agency's GEAR implementation plan to include (1) performance measures that permit comparison between desired outcomes and actual results and (2) additional information on schedules that are linked to specific actions; the Secretary of Housing and Urban Development take the following two actions to update the agency's GEAR implementation plan to (1) include objectives describing the goals the agency plans to achieve and (2) identify roles and responsibilities for specific actions and stakeholders; the Secretary of Veterans Affairs direct the Under Secretary for Memorial Affairs to take the following three actions to update the National Cemetery Administration's GEAR implementation plan to include (1) performance measures that permit comparison between desired outcomes and actual results, (2) additional information on roles and responsibilities for specific actions and stakeholders, and (3) additional information on schedules that are linked to specific actions; and the Acting Director of OPM take the following two actions to update the agency's GEAR implementation plan to (1) include objectives describing the goals the agency plans to achieve and (2) identify roles and responsibilities for specific actions and stakeholders.

We provided a draft of this report to the Acting Director of OPM and the Secretaries of Homeland Security (Coast Guard), Housing and Urban Development, Energy, and Veterans Affairs (National Cemetery Administration). The Associate Director for Employee Services at OPM, the Chief of Staff at Veterans Affairs, and the Director of the Departmental GAO-Office of Inspector General Liaison Office at DHS provided written comments on a draft of the report, which are reprinted in appendixes II, III, and IV, respectively. In their written responses, OPM, DHS, and VA officials agreed with our recommendations. In an e-mail exchange, HUD officials also stated that they agreed with our recommendation to HUD. We modified our assessment of VA's GEAR implementation plan, changing schedules from "not included" to "partially included," based on additional information the agency provided after the report was sent for comment. DOE, DHS, and VA also suggested technical changes to the report, which we incorporated where appropriate.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretaries of Energy, Homeland Security, Housing and Urban Development, and Veterans Affairs, and the Acting Director of the U.S. Office of Personnel Management, as well as the appropriate congressional committees and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-6806 or goldenkoffr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix XII.
This report (1) analyzes how the Goals-Engagement-Accountability-Results (GEAR) framework addresses key practices for effective performance management and identifies opportunities, if any, to improve GEAR implementation government-wide; (2) describes the status of each pilot agency's GEAR implementation and any lessons learned to date; and (3) assesses the extent to which each agency's GEAR implementation plan includes selected best practices for project planning. The GEAR framework is currently being piloted in five departments and agencies – the Departments of Energy (DOE), Homeland Security/United States Coast Guard (DHS/Coast Guard), Housing and Urban Development (HUD), Veterans Affairs/National Cemetery Administration (VA/NCA), and the Office of Personnel Management (OPM). For purposes of this report, we are referring to both departments and agencies as "agencies."

To analyze how the GEAR framework addresses key practices for effective performance management and identify opportunities, if any, to improve GEAR implementation government-wide, we verified that key practices we previously identified were applicable for purposes of this engagement by conducting a literature review of leading performance management practices and consulting with internal subject matter experts. In addition, OPM officials reviewed these criteria and agreed that many of the practices are very important contributors to organizational success. We systematically reviewed the GEAR report from the National Council on Federal Labor-Management Relations (LMR Council) using a methodology to determine whether the goals, recommendations, and actions described in the document qualitatively reflected the presence or absence of the key practices. We also reviewed LMR Council meeting minutes and other materials from LMR Council meetings. Finally, we interviewed OPM officials and Chief Human Capital Officers Council officials, some of whom were members of the LMR Council work group that drafted the report. We also interviewed labor union representatives about any problems or concerns with implementing the GEAR framework.

We interviewed OPM officials responsible for assisting the pilot agencies in GEAR implementation, as well as OPM officials who had contributed to developing the GEAR framework in their capacity as LMR Council members, for perspectives on lessons learned. In addition, we conducted a site visit at the Coast Guard's Base Boston in May 2013, where we observed initial GEAR training sessions and interviewed supervisors and employees about their reactions to the training and GEAR itself. We chose this site because the Coast Guard was the most recent of the five pilot agencies to implement GEAR and we were able to visit soon after the start of its implementation efforts.

To assess the extent to which each agency's GEAR implementation plan includes selected best practices for project planning, we identified project planning best practices primarily from Standards for Internal Control in the Federal Government, GAO's definitions of performance measurement, and GAO's guide for best practices in project schedules, and assessed the information contained in the agencies' GEAR plans against those best practices.
We focused on the following best practices because they were relevant to GEAR: (1) plan objective(s) describing a goal; (2) specific actions needed to attain that goal; (3) roles and responsibilities identified and assigned to project stakeholders; (4) schedules containing logically related project elements leading to the goal; and (5) valid performance measures that permit comparison between desired outcomes and actual results. Since the pilot agencies are in the process of implementing GEAR, we assessed them on the contents of the plan and not against the progress made towards implementation of the plan. Two analysts independently assessed each agency's GEAR plan to determine the extent to which the information included met the five best practices, rated each plan using a three-level scale (included, partially included, or did not include), and reached a level of inter-rater agreement greater than 80 percent. In addition, we noted strengths and areas for further attention in each plan. We assessed a best practice as partially included if the agency made at least one reference to it within the plan but did not provide the information consistently enough to assess it as fully included.

We conducted this performance audit from February 2013 to September 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

DHS was involved in the early development of the GEAR framework as part of the CHCO Council and subsequently volunteered the Coast Guard to pilot the GEAR framework, with consideration given to expanding implementation after achieving success through the limited pilot effort. The Coast Guard is implementing all five of GEAR's recommendations in a single-site pilot at Base Boston, affecting approximately 200 employees, both GS and WG. The Coast Guard chose Base Boston for several reasons: the presence of both GS and WG employees (who happen to be supervised by both military and civilian personnel), a wide variety of occupations, and the representation of a single union rather than the multiple unions that are present at other bases. Coast Guard officials said they chose to implement GEAR as a pilot because agency-wide implementation would bring challenges such as dealing with a large military population – approximately 42,000 military personnel who rotate bases approximately every two to three years and operate on a different performance management plan than the approximately 9,000 civilian employees – and negotiating with approximately nine unions. According to one senior Coast Guard official, developing more than one site would create a capacity issue that would make it difficult to plan and implement GEAR. The Coast Guard began planning for GEAR in March 2012 and launched the pilot in April 2013 to coincide with the start of the annual performance process. According to Coast Guard officials, the planning process was in place, but implementation was delayed because Base Boston personnel were deployed to assist in Hurricane Sandy recovery efforts. The pilot is expected to run for the length of one annual performance cycle and to conclude on March 31, 2014.
According to Coast Guard officials, agency officials met with union representatives for extensive discussions involving facilitators and brainstorming sessions in order to include union input in the final GEAR implementation plan. As a result of these discussions, Coast Guard officials told us that the agency has the approval of the union and that both parties are mutually responsible for the successful implementation of GEAR. The Coast Guard currently requires every civilian employee to have a documented performance appraisal at the end of the rating cycle, and under GEAR implementation, employees and supervisors are to receive increased training to improve accountability and engagement between employees and supervisors. Coast Guard officials noted that the agency needs to improve its culture of engagement. They said part of the problem is that supervisors do not have enough time to provide feedback, and employees do not know how to prepare adequately for their assessments or feedback sessions. By September 2013, all new supervisors (both civilian and military) will have to take 40 hours of mandated supervisor training so they will have a base knowledge of how to provide effective feedback. For more information on steps the Coast Guard has taken to implement GEAR, see table 9. The Coast Guard has identified the following benefits that may be realized at Base Boston through GEAR implementation: improved performance management accountability; improved employee-supervisor engagement through training and additional educational resources; increased communications to improve awareness of the importance of the performance management process; improved overall labor-management relations through involvement of the labor unions; and a projected reduction in grievances over performance appraisals at the end of the pilot.

DOE officials said the agency is implementing the GEAR framework in part because it aligned with existing human capital priorities and supported DOE's vision of improving its performance-based culture. DOE is implementing GEAR agency-wide, with the exception of the Bonneville Power Administration and the National Nuclear Security Administration, affecting approximately 8,400 employees. DOE officials told us that a holistic approach to GEAR implementation provided the best opportunity for success and that only by engaging the entire organization could the agency hope to improve organizational culture. According to DOE officials, implementing GEAR is a two-year effort and the agency is in the second year of implementation; however, DOE is implementing the GEAR framework as part of a larger continuous effort to improve its performance-based culture. DOE employees are covered by local union collective bargaining agreements, and according to DOE officials, each union local was engaged in GEAR implementation where appropriate. DOE does not have any unions with department-level recognition or collective bargaining rights. Before launching the pilot, the DOE Office of the CHCO briefed local union representatives on the GEAR framework and asked for their input on rollout. Agency officials said that they received no feedback from the unions. As a first step towards implementation of GEAR, DOE officials surveyed supervisors and employees to establish a baseline. Additionally, DOE officials said they held employee focus groups about the agency's performance management system to help inform some of the key actions of the GEAR implementation plan.
Agency officials said that they expected, and found, that employees did not view the effective execution of the performance management process as positively as supervisors; in other words, performance discussions are occurring but they are not perceived by employees as adding value. Based on data showing that timely completion of the reviews declined over the previous year, DOE officials have begun to look for ways to streamline the review process. This includes shifting to a system that allows supervisors to indicate satisfactory performance without the need for extensive documentation. Officials also said that a new electronic performance assessment system, ePerformance, has facilitated completion of quarterly reviews. According to DOE officials, ePerformance allows management to monitor feedback sessions centrally and ensure that supervisors complete them in a timely fashion, something that was impossible under the agency's old paper-based system for conducting performance reviews. According to DOE officials, it is still not possible to ensure that all reviews are of good quality, only that they are timely. DOE officials feel that improved communication between managers and employees will be a major benefit. To reinforce the importance DOE places on supervision, the agency developed a mandatory supervisory critical element, which holds supervisors accountable for each phase of the performance process, according to officials. DOE also issued a mandatory training framework for supervisors and employees to support ongoing improvements in the agency's performance-based culture. For more information on steps DOE has taken to implement GEAR, see table 10. According to DOE officials, the agency will realize additional benefits from implementing the GEAR framework, including incremental improvements to its organizational culture on a yearly basis, demonstrated through employee feedback and other human capital metrics, such as ePerformance metrics; continued focus on human capital efforts at senior levels of agency management; and reinforcement of performance management as a year-round priority. In addition, DOE officials stated that benefits resulting from GEAR implementation have already been realized. For example, DOE officials stated that their use of human capital performance metrics, supported by GEAR's recommendation to align performance management with organizational performance management, has resulted in DOE reducing its reliance on targeted support services.

According to agency officials, HUD's CHCO was an active participant in the CHCO Council meetings that first developed the GEAR concepts. The agency volunteered to participate because GEAR is consistent with the Secretary's Transformation Initiative, which aims to improve the quality of performance plans and increase the quality and timeliness of feedback. While HUD was already pursuing several objectives that it now views as part of GEAR, GEAR provided a framework of guiding principles that helped HUD officials organize their efforts. HUD is implementing GEAR agency-wide (with the exception of its Office of the Inspector General), affecting approximately 8,100 employees and four labor unions. HUD has established the Labor-Management Advisory Committee (LMAC), which includes representatives from each of HUD's four unions as well as managers. HUD officials did not think they could be successful if they implemented GEAR within a single unit.
They felt that implementing it agency-wide would permit it to be better embedded in the performance culture. HUD plans for GEAR to be fully implemented in September 2013. At the beginning of implementation, HUD decided not to pursue GEAR's fifth goal – improving the assessment, selection, development, and training of supervisors – based on staff capacity at that time, according to HUD officials. Since then, the agency has recruited a Chief Learning Officer and has taken several actions to implement the fifth goal, such as distributing guidance on holding crucial conversations with employees and adding supervisory training courses. As a result of GEAR, HUD has acquired a new performance management system that supports strategic alignment to individual performance objectives and will support feedback on performance objectives, including feedback from peers and subordinates. HUD officials said that they procured a new performance management system in October 2012 and completed union negotiations in June 2013 to adopt a new performance management policy and framework. Senior executives have already moved to the new performance management system, which is now being rolled out to the rest of the agency. For more information on steps HUD has taken to implement GEAR, see table 11. HUD has identified the following benefits that may be realized as a result of GEAR implementation: increased alignment between HUD's work and mission through performance objectives outlined in the department's strategic plan; a more engaged workforce through increased quality and timeliness of feedback; more accurate performance ratings as performance plans better reflect the work staff are doing; help with retention of employees; and improved performance management overall at HUD.

Unlike officials at the other pilot agencies, VA officials had no official interaction with the LMR Council in the development of the GEAR framework. VA began implementing GEAR in October 2012 to coincide with the VA performance management cycle. VA officials project the implementation to be complete by 2016. VA is implementing GEAR within NCA as a single-unit pilot in MSN II, encompassing national cemeteries located in the southeastern area of the country. NCA officials expect to expand GEAR to the remaining four networks beginning in 2014. VA chose the NCA because it has a reputation as a high-performing organization and an existing focus on metrics and feedback. VA officials said they chose to implement GEAR as a single unit, rather than agency-wide, because doing so allowed them to provide closer attention and support. NCA is implementing the GEAR framework for seven mission-critical occupations totaling approximately 240 employees. NCA plans to include 10 additional occupations in MSN II during fiscal year 2014, and to also apply the GEAR framework to the original seven mission-critical occupations in the remaining four networks beginning in fiscal year 2014. NCA plans to continue to phase in occupations during fiscal years 2014-2016. According to NCA officials, representatives from two unions have been involved in GEAR implementation and contributed valuable input that has enhanced NCA's approach to GEAR. Although NCA officials told us they are committed to all five of GEAR's recommendations, the agency is currently implementing four of the recommendations and plans to implement the fifth recommendation after an evaluation of current approaches to assessing, selecting, and developing supervisors has been completed.
According to NCA officials, NCA has increased feedback to employees on individual, unit, and agency goals and performance by implementing a three-step feedback process with meetings every four months (in February, June, and October), as well as by providing tools to assist supervisors in preparing for and conducting those one-on-one meetings. Supervisors and employees also receive a monthly performance scorecard with information on several of NCA's performance targets, including agency-, region-, and cemetery-level performance in key operational areas such as client satisfaction and employee safety. As part of GEAR, NCA defined "exceptional" performance, in addition to the "fully successful" level, in new performance plans for all seven of the targeted occupations. NCA officials said that officials representing labor unions recommended this approach so that employees have a clear target. NCA officials stated that this practice will continue as new performance plans are created. NCA also plans to refine its approach to rewarding and recognizing exceptional performers to correspond with what employees value. For example, officials are considering possible generational preferences in the nature of rewards, as well as preferences associated with the stages of employees' careers. For more information on steps NCA has taken to implement GEAR, see table 12. NCA has identified the following benefits that may result from GEAR implementation: by aligning performance goals with organizational goals, employees may better understand the connection between their daily activities and the organization's success; employees can better identify performance successes, gaps, and opportunities to improve performance at their cemetery and within their network; and involving employees and their labor representatives in efforts to improve performance management systems increases their understanding and ownership of these systems and organizational goals and increases their belief in the fairness of the systems, including the systems' ability to reinforce accountability.

OPM's Director, as Chair of the CHCO Council and co-chair of the LMR Council, helped propose the idea of examining the federal government's performance management accountability framework and making recommendations for its improvement, an effort that resulted in the GEAR framework. In addition, OPM officials reported that the agency collaborated with and supported all pilot agencies in the initial development of their processes and implementation efforts. As a result of its role in developing the framework, OPM is implementing GEAR agency-wide, affecting approximately 5,800 employees, including employees represented by two unions. According to OPM officials, it made sense to implement GEAR agency-wide from the beginning. OPM had previously identified human capital needs, such as improving supervisor assessment, that had not been addressed due to limited resources and competing priorities. However, an official said that GEAR focused attention on some of these issues and provided a framework to move forward. Initially, OPM planned to implement four of the five GEAR goals agency-wide, according to OPM officials, and did not take steps to implement the first goal of articulating a high-performance culture.
However, according to OPM officials, the agency is currently developing a performance culture statement, in collaboration with its employees and elected labor representatives, which officials stated would meet GEAR's first goal of articulating a high-performance culture when complete. OPM officials said that they decided to focus on the assessment and selection of supervisors. Currently, OPM is identifying supervisory competencies to be balanced with the technical requirements for supervisors. For example, an official said that OPM is expanding its use of situational judgment assessments, an emerging approach from the law enforcement and security community in which the supervisor would be assessed on his or her leadership qualities in addition to the ability to communicate effectively. In addition, OPM is implementing quarterly progress reviews using previously developed forms for supervisors and employees. An OPM official said that the simplicity of the new forms will make them easier for supervisors to use. For more information on steps OPM has taken to implement GEAR, see table 13. OPM has identified the following benefits that may be realized as a result of GEAR implementation: employees will understand the agency mission and goals and be engaged in the process of achieving those goals; employees will receive feedback on their performance at least quarterly; supervisors will be hired based on their leadership potential and demonstrated supervisory competencies; and supervisors will be held accountable for practicing good performance management.

Robert Goldenkoff, (202) 512-6806 or goldenkoffr@gao.gov.

In addition to the contact named above, Tom Gilbert, Assistant Director; Dewi Djunaidy; Karin Fangman; Robert Gebhart; Erik Kjeldgaard; and Cindy Saunders made key contributions to this report.
The toolkit is intended to help additional agencies implement the GEAR framework; CHCO Council representatives expect the toolkit to be complete by the end of September 2013. However, beyond the toolkit, neither the CHCO Council nor OPM has identified next steps to implement GEAR government-wide, such as identifying roles and responsibilities. Further, neither OPM nor the CHCO Council has plans to regularly update the GEAR framework or toolkit to include additional lessons learned, or to make such information available more broadly to key stakeholders, such as human resource professionals who may be responsible for future implementation. Without taking these steps, agencies that have already begun implementing GEAR risk losing their momentum; in addition, it may be challenging to implement GEAR government-wide.

The five pilot agencies adopted various approaches to implementing GEAR – DOE, HUD, and OPM implemented GEAR agency-wide, while Coast Guard and NCA adopted GEAR in single units – based on agency needs and available resources. GAO also identified lessons learned to date. For example, early stakeholder involvement, including engagement between those representing labor and management, resulted in greater transparency and fewer obstacles. In addition, administering employee surveys to identify the greatest needs before implementing GEAR helped establish a baseline to better track results. Each of the pilot agencies developed a GEAR project plan that outlined specific actions. DOE's GEAR plan was the most thoroughly documented. The other four agency plans did not include all project planning best practices. Without these elements, agencies may be limited in their ability to determine what needs to be done, when it should be done, who should do it, and how to measure progress towards achieving objectives.

As GEAR is adopted government-wide, GAO recommends that the Director of OPM, in collaboration with the CHCO Council, define roles and responsibilities for OPM, the CHCO Council, and individual agencies, in such areas as updating the toolkit (as needed) and disseminating information on GEAR more broadly. GAO also recommends that OPM, Coast Guard, HUD, and NCA update their GEAR project plans to be consistent with best practices for project planning. OPM, DHS, HUD, and VA agreed with the recommendations.
We reviewed the Department of the Treasury's report, interviewed senior IRS officials responsible for the actions being taken to correct the management and technical weaknesses, and reviewed documentation. On June 4, 1996, we briefed senior Treasury and IRS officials, including the Deputy Secretary of the Treasury and the Commissioner of the IRS, on the results of our review. We performed our work at IRS headquarters in Washington, D.C., between May 9, 1996, and June 4, 1996, in accordance with generally accepted government auditing standards. The Department of the Treasury and IRS provided comments on a draft of this report, which are discussed in the "Agency Comments and Our Evaluation" section and are reprinted in appendix I.

IRS envisions a modernized tax processing environment that is virtually paper-free and in which taxpayer information is readily available to IRS employees to update taxpayer accounts and respond to taxpayer inquiries. In our July 1995 report, we emphasized the need for IRS to have in place sound management and technical practices to increase the likelihood that TSM's objectives would be cost-effectively and expeditiously met. A 1996 National Research Council report on TSM had a similar message. Its recommendations parallel the more than a dozen recommendations we made in July 1995 to improve IRS' (1) business strategy to reduce reliance on paper, (2) strategic information management practices, (3) software development capabilities, (4) technical infrastructures, and (5) organizational controls.

In the July 1995 report, we described our methodology for analyzing IRS' strategic information management practices, drawing heavily from our research on the best practices of private and public sector organizations that have been successful in improving their performance through strategic information management and technology. These fundamental best practices are discussed in our report Executive Guide: Improving Mission Performance Through Strategic Information Management and Technology (GAO/AIMD-94-115, May 1994), and our Strategic Information Management (SIM) Self-Assessment Toolkit (GAO/Version 1.0, October 28, 1994, exposure draft). To evaluate IRS' software development capability, we validated IRS' September 1993 assessment of its software development maturity based on the Capability Maturity Model (CMM) developed by Carnegie Mellon University's Software Engineering Institute, a nationally recognized authority in the area. This model establishes standards in key software development process areas (i.e., requirements management, project planning, project tracking and oversight, configuration management, quality assurance, and subcontractor management) and provides a framework to evaluate a software organization's capability to consistently and predictably produce high-quality products.

When we briefed the IRS Commissioner in April 1995 and issued our report documenting IRS' weaknesses in July 1995, IRS agreed with our recommendations to make corrections expeditiously. At that time, we considered IRS' response to be a commitment to correct its management and technical weaknesses. In September 1995, IRS submitted an action plan to the Congress explaining how it planned to address our recommendations.
In our March 1996 testimony to the House Appropriations Committee's Subcommittee on Treasury, Postal Service, and General Government, we noted that this plan, follow-up meetings with senior IRS officials, and other draft and "preliminary draft" documents received through early March 1996 provided little tangible evidence that actions being taken would correct the pervasive management and technical weaknesses that continued to place TSM, and the huge investment it represents, at risk. This interim status report on IRS' efforts to respond to our July 1995 recommendations noted that IRS had initiated a number of activities and made some progress in addressing our recommendations to improve management of information systems; enhance its software development capability; and better define, perform, and manage TSM's technical activities. However, we reported that none of these steps had fully satisfied any of our recommendations. Consequently, IRS was not in an appreciably better position in March 1996 than it was in April 1995 to assure the Congress that it would spend its fiscal year 1996 and future TSM appropriations judiciously and effectively. In subsequent testimony before the Senate Committee on Governmental Affairs, we reiterated our concerns that IRS' effort to modernize tax processing was jeopardized by persistent and pervasive management and technical weaknesses, and that ongoing efforts did not include milestones or provide enough evidence to conclude that the weaknesses would soon be corrected. We also addressed analogous technical weaknesses in an electronic filing system project called Cyberfile, which substantiated our concerns that IRS was continuing to risk millions of dollars in undisciplined systems development in fiscal year 1996. In addition, we identified physical security risks at the planned Cyberfile data center.

The Department of the Treasury, in its May 1996 report to the Senate and House Appropriations Committees, provides a candid assessment of TSM progress and future redirection, and a description of ongoing and planned actions intended to respond to our recommendations to correct management and technical weaknesses. It finds that, despite some qualified successes, IRS has not made progress on TSM as planned because systems development efforts have taken longer than expected, cost more than originally estimated, and delivered less functionality than originally envisioned. It concludes that significant changes are needed in IRS' management approach, and that it is beyond the scope of IRS' current ability to develop and integrate TSM without expanded use of external expertise. The report notes that work has been done to rethink, scale back, and change the direction of TSM. Additional changes are still in progress, with actions underway to restructure the management of TSM and expand the use of contractors. Agreeing that our July 1995 recommendations are valid, the report notes that more work has to be done to respond to them. It states that progress in IRS' management and technical areas can only be achieved by institutionalizing improved practices and monitoring projects for conformance to mandated standards and practices. The report does not address the basic problem of continuing to invest hundreds of millions of dollars in TSM before the requisite management and technical disciplines are in place.
Neither does it address the risk inherent in shifting hundreds of millions of dollars to additional contractual efforts when the evidence is clear that IRS does not have the disciplined processes in place to manage all of its current contractual efforts (e.g., Cyberfile) effectively. IRS has initiated a number of actions to address management and technical weaknesses that continue to impede successful systems modernization. However, ongoing efforts do not correct the weaknesses and do not provide enough evidence to determine when the weaknesses will be corrected and what steps, if any, are being taken in the interim to mitigate the risks associated with ongoing TSM spending.

IRS has identified increasing electronic filings as critical to achieving its modernization vision. We noted that IRS did not have a comprehensive business strategy to reach or exceed its electronic filing goal, which was 80 million electronic filings by 2001. IRS' estimates and projections for individual and business returns suggested that, by 2001, as few as 39 million returns might be submitted electronically, less than half of IRS' goal and only about 17 percent of all returns expected to be filed. We reported that IRS' business strategy would not maximize electronic filings because it primarily targeted taxpayers who use a third party to prepare and/or transmit simple returns, are willing to pay a fee to file their returns electronically, and are expecting refunds. Focusing on this limited taxpaying population overlooked most taxpayers, including those who prepare their own tax returns using personal computers, have more complicated returns, owe tax balances, and/or are unwilling to pay a fee to a third party to file a return electronically. We concluded that, without a strategy that also targets these taxpayers, IRS would not meet its electronic filing goals. In addition, if, in the future, taxpayers file more paper returns than IRS expects, added stress will be placed on IRS' paper-based systems.

Accordingly, we recommended that IRS refocus its electronic filing business strategy to target, through aggressive marketing and education, those sectors of the taxpaying population that can file electronically most cost-beneficially. IRS agreed with this recommendation and said that it had convened a working group to develop a detailed, comprehensive strategy to broaden public access to electronic filing, while also providing more incentives for practitioners and the public to file electronically. It said that the strategy would include approaches for taxpayers who are unwilling to pay for tax preparer and transmitter services, who owe IRS for balances due, and/or who file complex tax returns. IRS said further that the strategy would address that segment of the taxpaying population that would prefer to file from home, using personal computers. To date, IRS has performed an electronic filing marketing analysis at local levels; developed a marketing plan to promote electronic filing; consolidated 21 electronic filing initiatives into its Electronic Filing Strategies portfolio; and initiated a reengineering project with a goal to reduce paper tax return filings to 20 percent or less of the total volume by the year 2000. It plans to complete its electronic filing strategy in August 1996. These initiatives could result in future progress toward increasing electronic filings.
However, our review found that these initiatives are not far enough along to determine whether they will culminate in a comprehensive strategy that identifies how IRS plans to target those sectors of the taxpaying population that can file electronically most cost-beneficially. It also is not clear how the reengineering project will impact the strategy or how these initiatives will impact TSM systems that are being developed.

We reported that IRS did not have strategic information management practices in place. We found, for example, that, despite the billions of dollars at stake, information systems were not managed as investments. To overcome this, and provide the Congress with insight needed to assess IRS' priorities and rationale for TSM projects, we recommended that the IRS Commissioner take immediate action to implement a complete process for selecting, prioritizing, controlling, and evaluating the progress and performance of all major information systems investments, both new and ongoing, including explicit decision criteria, and, using these criteria, to review all planned and ongoing systems investments by June 30, 1995. In agreeing with these recommendations, IRS said it would take a number of actions to provide the underpinning it needs for strategic information management. IRS said, for example, that it was developing and implementing a process to select, prioritize, control, and evaluate information technology investments to achieve reengineered program missions.

Our assessment found that IRS has taken steps towards putting into place a process for managing its extensive investments in information systems. The following are examples of these steps. IRS created the executive-level Investment Review Board, chaired by the Associate Commissioner for Modernization, for selecting, controlling, and evaluating all of IRS' information technology investments. IRS developed initial and revised sets of decision criteria used last summer and again in November 1995 as part of its Resource Allocation and Investment Review to make additional changes in information technology resource allocations for remaining fiscal year 1996 funds and planned fiscal year 1997 spending. This review included only TSM projects under development. It did not address operational systems, infrastructure, or management and technical support activities. The Treasury Department created a Modernization Management Board to review and validate high-risk, high-cost TSM investments and to set policy and strategy for IRS' modernization effort. IRS is considering the use of a "project readiness review" as an additional Investment Review Board control mechanism for gauging project readiness to proceed with spending. IRS developed the Business Case Handbook, which includes decision criteria on costs, benefits, and risks, and is using the handbook to reassess the business cases that were developed for the TSM projects. Eleven cases are scheduled for completion in June 1996, and IRS plans to have the remaining cases completed by September 1996. The results are to be presented to the Investment Review Board to assist in making funding decisions for fiscal year 1997. IRS has developed the Investment Evaluation Review Handbook, designed to assess projected costs and benefits against actual results. The handbook has been used on four TSM projects, and five additional reviews are scheduled to be completed within the next year. The completed reviews contain explicit descriptions of problems encountered in developing these systems.
The reviews make specific recommendations for management and technical process changes to improve future results. Specific recommendations pertain to strengthening project direction and decision-making. Many reflect concerns that we have raised in past reviews. The investment evaluation reviews were presented to the Investment Review Board and disseminated to other IRS managers. IRS is defining roles, responsibilities, and processes for incorporating Investment Evaluation Review recommendations at the project and process levels. These are positive steps and indicate a willingness to address many of the weaknesses raised in our past reports and testimonies. But, as noted in Treasury’s report on TSM, the investment process is not yet complete. According to Treasury, it is missing (1) specific operating procedures, (2) defined reporting relationships between different management boards and committees, and (3) updated business cases for major TSM technology investments. These concerns coincide with two central criticisms we have repeatedly made about TSM. Because of the sheer size, scope, and complexity of TSM, it is imperative that IRS institutionalize a repeatable process for selecting, controlling, and evaluating its technology investments, and that it make informed investment decisions based on reliable qualitative and quantitative assessments of costs, benefits, and risks. Although IRS is planning and in the initial stages of implementing parts of such a process, a complete, fully-integrated process does not yet exist. Specifically, IRS has not provided us evidence to justify its claims that its decisions were supported by acceptable data on project costs, benefits, and risks. For example: Our review found no evidence to suggest that IRS established minimal data requirements for the decisions made as part of the TSM Resource Allocation and Investment Review or the rescope process in December 1995. For example, because IRS lacks the basic capabilities for disciplined software development, it cannot convincingly estimate systems development costs, schedule, or performance. Subsequent to its rescope analysis, IRS developed minimal data quality requirements for cost-benefit and risk studies, proposed return on investment calculations, and return on investment thresholds, or comparisons of expected performance improvements with results to date. However, to date, few, if any projects have met these criteria. In deciding whether to accelerate, delay, or cancel specific TSM projects, IRS did not use validated data on actual versus projected costs, benefits, or risks as set forth by the Office of Management and Budget (OMB). Instead, IRS continues to make its decisions based on spending whatever budgeted funding ceiling amounts can be obtained through its annual budget and appropriations cycles. As a result, IRS cannot convincingly justify its TSM spending decisions. All projects (i.e., proposed projects, projects under development, operational systems, infrastructure, and management and technical support activities) were not included in a single systems investment portfolio. Instead, only TSM projects under development were ranked. As a result, there is no compelling rationale for determining how much to invest in these projects compared to other projects, such as operational systems, infrastructure, etc. 
There is no defined process with prescribed roles and responsibilities to ensure that the results of investment evaluation reviews are being used to (1) modify project direction and funding when appropriate and (2) assess and improve existing investment selection and control processes and procedures. As a result, there is no evidence that changes are occurring based on the valuable lessons learned, such as those from the recently completed post-implementation review of the Service Center Recognition/Image Processing System. For example, IRS found that because system requirements were not adequately defined or documented, the system could not be properly and quantifiably tested, which adversely affected the implementation of the system. Moreover, with only four investment evaluation reviews completed to date and five planned for the upcoming year, these reviews cover only a small fraction of IRS' total annual investment in TSM. More must be done to confirm actual results achieved from TSM expenditures. We noted in our July 1995 report that IRS' reengineering efforts were not linked to its systems development efforts. As shown in our work with leading organizations, information system development projects that are not driven by a critical reexamination and redesign of business processes achieve only a fraction of their potential to improve performance, reduce costs, and enhance quality. Since our July report, IRS' reengineering efforts have undergone a redirection. Three reengineering projects—processing returns, responding to taxpayers, and enforcement actions—were halted because IRS decided to focus instead on an enterprise-level view of reengineering. Its new effort, entitled Tax Settlement Reengineering, was begun in March 1996 and involves a comprehensive review of all the major processes and activities that enable taxpayers to settle their tax obligations, from educational activities through final settlement of accounts. The reengineering project team, working with IRS' Executive Committee, has identified 16 major processes involved in tax settlement and is about to begin reengineering four of them. High-level designs of the new processes are scheduled to be defined by September 30, 1996, with work on detailed designs to start early in fiscal year 1997, if approved by the Executive Committee. Reengineering efforts on as many as eight other tax settlement processes could be underway by the end of fiscal year 1997. Although this effort could have substantial impact, IRS still faces the same problem we reported on a year ago. Reengineering lags well behind the development of TSM projects, whereas it should be ahead of that development—defining and directing the technology investments needed to support new, more efficient business processes. Until the reengineering effort is mature enough to drive TSM projects, there is no assurance that ongoing systems development efforts will support IRS' future business needs and objectives. The reengineering team believes that by September 1996 it will have a general idea of how the first four tax settlement reengineering projects may impact current system development efforts. If additional reengineering projects are started as planned in 1997, it could be another year or more before most of the information and systems requirements stemming from these projects are defined. Meanwhile, investment continues in many TSM projects that may not support the requirements resulting from these reengineering efforts. 
IRS acknowledges that integration of reengineering and TSM must occur, and has assigned responsibility for it to the Associate Commissioner for Modernization, but has not yet specified how or when the requisite integration will occur. We reported that unless IRS improves its software development capability, it is unlikely to build TSM in a timely or economical manner, and systems are unlikely to perform as intended. To assess its software capability, in September 1993, IRS rated itself using the Software Engineering Institute's CMM. IRS placed its software development capability at the lowest level, described as ad hoc and sometimes chaotic, indicating significant weaknesses in its software development capability. Our review confirmed that IRS' software development capability was immature and was weak in key process areas. For instance, a disciplined process to manage system requirements was not being applied to TSM systems; a software tool for planning and tracking development projects was not being used; software quality assurance functions were not well defined or consistently performed; systems and acceptance testing were neither well defined nor required; and software configuration management was incomplete. To address IRS' software development weaknesses and upgrade IRS' software development capabilities, we recommended that the IRS Commissioner (1) immediately require that all future contractors who develop software for the agency have a software development capability rating of at least CMM Level 2; (2) before December 31, 1995, define, implement, and enforce a consistent set of requirements management procedures for all TSM projects that goes beyond IRS' current request for information services process, and procedures for software quality assurance, software configuration management, and project planning and tracking; and (3) define and implement a set of software development metrics to measure software attributes related to business goals. IRS agreed with these recommendations and said that it was committed to developing consistent procedures addressing requirements management, software quality assurance, software configuration management, and project planning and tracking. It also said that it was developing a comprehensive measurement plan to link process outputs to external requirements, corporate goals, and recognized industry standards. Specifically regarding the first recommendation, IRS has (1) developed standard wording for use in new and existing contracts that have a significant software development component, requiring that all software development be done by an organization that is at CMM Level 2, (2) developed a plan for achieving CMM Level 2 capability on all of its contracts, and (3) started to implement a plan to monitor contractors' capabilities, which may include the use of CMM-based software capability evaluations. The Department of the Treasury report also noted that a schedule for conducting software capability evaluations was developed. However, we found that IRS does not yet have the disciplined processes in place to ensure that all contractors are performing at CMM Level 2. For example, contractors developing the Cyberfile electronic filing system were not using CMM Level 2 processes, subsequent to our July 1995 recommendation. Further, no schedule for conducting software capability evaluations has yet been developed. With respect to the second recommendation, IRS is updating its systems life cycle (SLC) methodology. 
The SLC is planned to have details for systems engineering and software development processes, including all CMM key process areas. IRS has updated its systems engineering process to include guidance for defining and analyzing systems requirements and for preparing work packages. Furthermore, IRS has drafted handbooks providing guidance to audit and verify developmental processes. In addition, IRS has developed a configuration management plan template, updated its requirements management request for information services document, and developed and implemented a requirements management course. The Department of the Treasury also reported that IRS is testing the SLC on two TSM efforts, Integrated Case Processing (ICP) and Corporate Accounts Processing System (CAPS). IRS also has a CMM process improvement plan, and work is being done across various IRS organizations to define processes to meet CMM Level 2. Finally, IRS is assessing its capabilities to manage contractors using the CMM goals. However, the procedures for requirements management, software quality assurance, software configuration management, and project planning and tracking are still not complete. A software development life cycle implementation project, which is to include these procedures, is not scheduled for completion until September 30, 1996. In addition, software quality assurance and configuration management plans for two ICP projects were not being used, and the groups developing software for CAPS do not have a software configuration management plan or a schedule for its development. Furthermore, ICP and CAPS development is continuing without the guidelines and procedures for other process areas (e.g., requirements management, project planning, and project tracking and oversight) required by CMM Level 2. Regarding the third recommendation, IRS has a three-phase process to (1) identify data sources for metrics, (2) define metrics to be used, and (3) implement the metrics. A partial set of metrics is currently being identified. These metrics—populated with real data and in a preliminary format—are scheduled for initial use on a set of identified projects beginning on June 30, 1996. Data sources for these metrics have been identified, and weaknesses (such as difficulties in retrieving the data and inconsistencies in the data) are being documented to provide feedback to various systems' owners. However, this initial set of metrics is incomplete. It focuses on areas such as time reporting, project sizing, and defect tracing and analysis, but does not include measures for determining customer satisfaction or for cost estimation. Such measures are needed to adequately track needed functionality and its associated costs throughout systems development. Further, there is no schedule for completing the definition of metrics or for institutionalizing the processes needed to ensure their use. Finally, there is no mechanism in place to correct identified data and data collection weaknesses. In summary, although IRS has begun to act on our recommendations, these actions are not yet complete or institutionalized, and, as a result, systems are still being developed without the disciplined practices and metrics needed to give management assurance that they will perform as intended. We reported that IRS' systems architectures, integration planning, and system testing and test planning were incomplete. 
To address IRS' technical infrastructure weaknesses, we recommended that the IRS Commissioner, before December 31, 1995, (1) complete an integrated systems architecture, including security, telecommunications, network management, and data management; (2) institutionalize formal configuration management for all newly approved projects and upgrades and develop a plan to bring ongoing projects under formal configuration management; (3) develop security concept of operations, disaster recovery, and contingency plans for the modernization vision and ensure that these requirements are addressed when developing information system projects; (4) develop a testing and evaluation master plan for the modernization; (5) establish an integration testing and control facility; and (6) complete the modernization integration plan and ensure that projects are monitored for compliance with modernization architectures. IRS agreed with these recommendations and said that it was identifying the necessary actions to define and enforce systems development standards and architectures agencywide. IRS' current efforts in this area follow. In April 1996, IRS completed a descriptive overview of its integrated three-tier, distributed systems architecture to provide management with a high-level view of TSM's infrastructure and supporting systems. IRS has tasked the integration support contractor to develop the data and security architectures. IRS has adopted an accepted industry standard for configuration management. It developed and distributed its Configuration Management Plan template, which identifies the elements needed when constructing a configuration management plan. In April 1996, enterprisewide configuration management policies and procedures were established. IRS also plans to obtain contractor support to develop, implement, and maintain a vigorous configuration management program. IRS has prepared a security concept of operations and a disaster recovery and contingency plan. IRS has developed a test and evaluation master plan for TSM. IRS plans to develop implementation and enforcement policies for the plan. IRS has established an interim integration testing and control facility, which is currently being used to test new software releases. It is also planning a permanent integration testing and control facility, scheduled to be completed by December 1996. IRS has completed drafts of its TSM Release Definition Document, which is planned to provide definitions for new versions of TSM software from 1997 to 1999, and Modernization Integration Plan, which is planned to define IRS' process for integrating current and future TSM initiatives. However, the disaster recovery and contingency plan does not detail the actions that centers need to take to absorb the workload of a center that suffers a disaster. The test and evaluation master plan provides the guidance needed to ensure sufficient developmental and operational testing of TSM. However, it does not describe what security testing should be performed, or how these tests should be conducted. Further, it does not specify the responsibilities and processes for documenting, monitoring, and correcting testing and integration errors. IRS is still working on plans for its integration testing and control facility. In the interim, it has established a temporary facility, which is being used for limited testing. The permanent facility is not currently being planned to simulate the complete production environment, and will not, for example, include mainframe computers. 
Instead, IRS plans to continue to test mainframe computer software and systems which interface with the mainframes in its production environment. To ensure that IRS does not put operations and service to taxpayers at risk, IRS should prepare a thorough assessment of its solution, including an analysis of alternative testing approaches and their costs, benefits, and risks. IRS’ draft TSM Release Definition Document and draft Modernization Integration Plan (1) do not reflect TSM rescoping and the information systems reorganization under the Associate Commissioner, (2) do not provide clear and concise links to other key documents (e.g., its integrated systems architecture, business master plan, concept of operations, and budget), and (3) assume that IRS has critical processes in place that are not implemented (e.g., effective quality assurance and disciplined configuration management). In summary, although IRS has taken actions to prepare a systems architecture and improve its integration and system testing and test planning, these efforts are not yet complete or institutionalized, and, as a result, TSM systems continue to be developed without the detailed architectures and discipline needed to ensure success. We reported that IRS had not established an effective organizational structure to consistently manage and control systems modernization organizationwide. The accountability and responsibility for IRS’ systems development was spread among IRS’ Modernization Executive, Chief Information Officer, and research and development division. To help address this concern, in May 1995, the Modernization Executive was named Associate Commissioner. The Associate Commissioner was to manage and control systems development efforts previously conducted by the Modernization Executive and the Chief Information Officer. In September 1995, the Associate Commissioner for Modernization assumed responsibility for the formulation, allocation, and management of all information systems resources for both TSM and non-TSM expenditures. In February 1996, IRS issued a Memorandum of Understanding providing guidance for initiating and conducting technology research and for transitioning technology research initiatives into system development projects. It is important that IRS maintain an organizationwide focus to manage and control all new modernization systems and all upgrades and replacements of operational systems throughout IRS. To do so, we recommended that the IRS Commissioner give the Associate Commissioner management and control responsibility for all systems development activities, including those of IRS’ research and development division. Steps are being taken by the Associate Commissioner to establish effective management and control of systems development activities throughout IRS. For example, its SLC methodology is required for information systems development, and information technology entities throughout the agency have been directed to submit documentation on all information technology projects for review. However, there is no defined and effective mechanism for enforcing the standards or ensuring that organizational entities cannot conduct systems development activities outside the control of the Associate Commissioner. Further, no timeframes have been established for defining and implementing such control mechanisms. 
As a result, systems development conducted by the research and development division has now been redefined as technology research, keeping it from the control of the Associate Commissioner. In summary, although improvements have been made in consolidating management control over systems development, the Associate Commissioner still does not yet have control over all IRS’ systems development activities. IRS plans to increase its reliance on the private sector by (1) preparing an acquisition plan and statement of work to conduct an expedited competitive selection for a prime development and integration contractor; (2) transferring responsibility for systems engineering, design, prototyping, and integration for core elements of TSM to its integration support contractor; and (3) making greater use of software development contractors, including those available under the Treasury Information Processing Support Services (TIPSS), to develop and deliver major elements of production TSM systems. By increasing its reliance on contractors, IRS expects to improve the accountability for and probability of TSM success. IRS plans to increase the use of private-sector integration and development expertise by expanding the use of contractors to support TSM. It outlined a three-track approach for transitioning over a period of 2 years to the use of a prime contractor that would have, according to IRS, overall authority and responsibility for the development, delivery, and deployment of modernized information systems. To facilitate this strategy, IRS reported it would consolidate the management of all TSM resources, including key TSM contractors, in its Government Program Management Office (GPMO). Under the direct control of the Chief Information Officer, GPMO will be delegated authority for the management and control of the IRS staff and contractors that plan, design, develop, test, and implement TSM components. IRS plans to have GPMO fully staffed and operational by October 1, 1996. IRS representatives told us the agency was currently developing a detailed contract management plan and a statement of work for acquiring its prime contractor, and believed it could award a contract in about 2 years. IRS’ approach to expanding the use of contractors to build TSM is still in the early planning stages. Because of this, IRS was unable to provide us with formal plans, charters, schedules, or the definitions of shared responsibilities between GPMO and the existing program and project management staff. At this point, it is unclear what these IRS planned actions entail, or how they will work. For example, IRS has not specified how and when it plans to transfer its development activities to contractors, and to what extent contractors could be held responsible for existing problems in these government-initiated systems. This is particularly important because if IRS continues as planned, the principal TSM systems will be in development and/or deployed before IRS plans to select a prime contractor in about 2 years. Moreover, it is not clear how the prime contractor would direct potential competitors that are already under contract with IRS. Without further explanation of and a schedule for transitioning specific responsibilities from IRS to contractors, we cannot fully understand or assess IRS’ plans. Further, plans to use additional contractors will succeed if, and only if, IRS has the in-house capabilities to manage these contractors effectively. 
In this regard, there is clear evidence that IRS’ capability to manage contractors has weaknesses. In August 1995, IRS acquired the services of the Department of Commerce’s National Technical Information Service (NTIS) to act as IRS’ prime contractor in developing Cyberfile. However, Cyberfile was not developed using disciplined management and technical practices. As a result, this project exhibited many of the same problems we have repeatedly identified in other TSM systems, and, after providing $17 million to NTIS, it was not ready for planned testing during the 1996 tax filing season. Similarly, IRS contracted in 1994 to build the Document Processing System. After expending over a quarter of a billion dollars on the project, IRS has now suspended the effort and is reexamining some of its basic requirements, including which and how many forms should be processed, and which and how much data should be read from the documents. We recently initiated an assignment to evaluate in detail IRS’ software acquisition capabilities using the Software Engineering Institute’s Software Acquisition CMM. This assignment is scheduled to be completed later this year. It is clear that unless IRS has mature, disciplined processes for acquiring software systems through contractors, it will be no more successful in buying software than it has been in building software. IRS has initiated a number of actions and is making some progress in addressing our recommendations to correct its pervasive management and technical weaknesses. However, none of these actions, either individually or in the aggregate, fully satisfy any of our July 1995 recommendations and it is not clear when these actions will result in disciplined systems development. As a result, IRS continues to spend hundreds of millions of dollars on TSM through fiscal year 1997, while fundamental weaknesses jeopardize the investment. Recognizing its internal weaknesses, IRS plans to use a prime contractor and increase use of software development contractors to develop TSM. However, in this area, its plans and schedules are not well defined, and, therefore, cannot be completely understood or assessed. Further, as the experience with Cyberfile and the Document Processing System projects makes clear, IRS does not have the mature processes needed to acquire software and manage contractors effectively. Because IRS still does not have (1) effective strategic information management practices needed to manage TSM as an investment, (2) mature and disciplined software development processes needed to assure that systems built will perform as intended, (3) a completed systems architecture that is detailed enough to guide and control systems development, and (4) a schedule for accomplishing any of the above, the Congress could consider limiting TSM spending to only cost-effective modernization efforts that (1) support ongoing operations and maintenance, (2) correct IRS’ pervasive management and technical weaknesses, (3) are small, represent low technical risk, and can be delivered in a relatively short time frame, and (4) involve deploying already developed systems, only if these systems have been fully tested, are not premature given the lack of a completed architecture, and produce a proven, verifiable business value. As the Congress gains confidence in IRS’ ability to successfully develop these smaller, cheaper, quicker projects, it could consider approving larger, more complex, more expensive projects in future years. 
Because IRS does not manage all of its current contractual efforts effectively, and because its plans to use a "prime" contractor and transition much of its systems development to additional contractors are not well defined, the Congress could consider requiring that IRS institute disciplined systems acquisitions processes and develop detailed plans and schedules before permitting IRS to increase its reliance on contractors. On June 6, 1996, we met with Treasury and IRS officials to discuss a draft of this report and we incorporated their comments as appropriate in finalizing it. In addition, on June 6, 1996, we received written comments from Treasury. In his letter, the Deputy Secretary of the Treasury reiterates Treasury's commitment to significantly increased oversight of TSM and to making a sharp turn in the way TSM is managed. He also makes clear Treasury's and IRS' understanding that additional improvements are necessary to fully correct the management and technical weaknesses delineated in our report. The Deputy Secretary of the Treasury also says that he is reducing the fiscal year 1997 budget request for TSM from $850 million to $664 million and will need to ensure, at all times, solid stewardship for the dollars appropriated and clear accountability for the investments undertaken. Achieving sound management for the TSM program will require that IRS (1) institutionalize effective strategic information management practices, (2) institutionalize mature and disciplined software development processes, and (3) complete systems, data, and security architectures and use them to guide and control systems development, before making major investments in TSM systems development. Until these disciplined processes are in place and the requisite architectures completed, the Congress could consider limiting IRS TSM spending to only cost-effective modernization efforts that meet the criteria outlined in our Matters for Congressional Consideration. We are sending copies of this report to the Chairmen and the Ranking Minority Members of (1) the Senate and House Committees on the Budget, (2) the Subcommittee on Taxation and IRS Oversight, Senate Committee on Finance, (3) the Senate Committee on Governmental Affairs, (4) the Subcommittee on Oversight, House Committee on Ways and Means, and (5) the House Committee on Government Reform and Oversight. We are also sending copies to the Secretary of the Treasury, Commissioner of the Internal Revenue Service, and Director of the Office of Management and Budget. Copies will be available to others upon request. This work was performed under the direction of Dr. Rona B. Stillman, Chief Scientist for Computers and Telecommunications, who can be reached at (202) 512-6412. Other major contributors are listed in appendix II. Sherrie Russ, Senior Evaluator; Christopher E. Hess, Evaluator. 
Primarily out of concern that noise from air tours over national park units could impair visitors’ experiences and park unit resources, the Congress passed the National Parks Air Tour Management Act of 2000 to regulate air tours conducted over national park units. The act mandates new responsibilities for FAA and the Park Service, including developing ATMPs for all national park units where air tour operators apply for authority to conduct air tours. The purpose of an ATMP is to develop acceptable and effective measures to mitigate or prevent the significant adverse impacts, if any, from air tours on the natural and cultural resources and visitor experiences at national park units, and on abutting tribal lands. To implement the act, FAA and the Park Service must, among other things: establish an advisory group to provide continuing advice and counsel on air tours over and near national park units; establish an ATMP at any national park unit whenever an air tour operator applies for authority to conduct an air tour over the park unit; grant interim operating authority to existing air tour operators to provide annual authorizations to operators until 180 days after an ATMP is developed at the relevant park unit; develop an open, competitive process for air tour operators interested in providing air tours over a park unit whenever an ATMP limits the number of air tours during a specified time frame; include incentives in an ATMP for air tour operators to adopt technology that makes aircraft quieter for tours over park units; and submit a report to Congress by April 5, 2002, on the effectiveness of the act in providing incentives for the development and use of quiet aircraft technology. The act requires FAA and the Park Service to prepare each ATMP in accordance with the National Environmental Policy Act of 1969 (NEPA). NEPA requires each federal agency to prepare an environmental impact statement to assess proposed actions that will have a significant impact on the environment. If the agency is unsure whether the proposed action will have a significant impact, it prepares a briefer document called an environmental assessment. If the assessment concludes the action will have a significant impact, the agency must then prepare an environmental impact statement—otherwise it issues a “finding of no significant impact.” The act requires both FAA and the Park Service to approve the NEPA decision document associated with each ATMP; agency officials believe they will need to prepare an environmental assessment for most ATMPs. The act defines an “air tour” as any flight conducted for compensation or hire in an aircraft where a purpose of the flight is sightseeing over a national park unit or within one-half mile outside the boundary of any national park unit—the agencies further defined air tours to include only flights below 5,000 feet above ground level. The act defines two types of air tour operators: existing and new entrant operators. Existing operators are those that were providing air tours over a national park unit at any time during the 12-month period ending April 5, 2000. A new entrant is an air tour operator that applies for operating authority but did not provide air tours over a national park unit during the same 12-month period. Before an ATMP is developed for a park unit, the act instructs FAA to grant interim operating authority to any existing air tour operator that applies for operating authority, which lasts until 180 days after an ATMP is developed. 
Interim operating authority provides existing air tour operators with an annual number of tours that can be conducted over a park unit. The number of tours authorized is equal to the number of air tours conducted by the operator during the 12-month period prior to the act’s passage on April 5, 2000, or the average number of air tours per 12-month period conducted by the operator for the 36 months before the act—whichever is greater. The act allows FAA and the Park Service to grant increases in interim operating authority to existing operators, and to grant interim operating authority to new entrant operators, under certain circumstances, but the agencies have chosen not to do so. For safety reasons, the act requires certain air tour operators—known as Part 91 operators because they operate under safety rules in Part 91 of Title 14 of the Code of Federal Regulations—to apply for the more stringent operational and safety rules outlined in Part 135. Some of the more stringent safety rules under Part 135 include passing an annual flight check, passing an instrument proficiency check every 6 months, maintaining copies of the aircraft’s maintenance log, and flying no more than 8 hours in a 24-hour period. The act’s ATMP and safety certification requirements do not apply to Part 91 operators that obtain a letter of agreement from FAA and the relevant park unit’s superintendent describing the conditions under which their air tours will be conducted. The act limits air tours by Part 91 operators under this provision to no more than five air tours per park unit (not per operator) in any 30-day period. To implement the act, FAA directed its Western Pacific Region to work with the Park Service’s Natural Sounds Program. FAA’s mission is to provide the safest, most efficient aerospace system in the world, which now includes managing air tours over national park units. It is pursuing its mission with an annual budget of over $14 billion for fiscal year 2005, approximately $8 million of which was budgeted for air tour management; FAA has provided a total of about $29 million for implementing the act for fiscal years 2001 through 2005. The Volpe Center, a fee-for-service organization in the Department of Transportation, performs work primarily for the department, as well as other entities, and covers such issues as safety, mobility, security, and noise pollution. FAA contracted with the Volpe Center to perform, among other things, sound monitoring, environmental analyses, and economic analysis in support of creating ATMPs; FAA has allocated $27 million of its $29 million in funding to the Volpe Center for these activities. The National Park Service is responsible for conserving the scenery, the natural and historic objects, and the wildlife in national park units, and for providing for the enjoyment of national park units in ways that leave them unimpaired for future generations. To accomplish its mission, the Park Service received a budget from Congress of about $1.7 billion for fiscal year 2006, $1.4 million of which is allocated to the Natural Sounds Program, according to a program official. The Park Service established the Soundscapes Program Center in 2000 (now the Natural Sounds Program) primarily to work with FAA’s Western Pacific Region Manager to develop ATMPs, though the Natural Sounds Program’s mission is not limited to creating ATMPs. 
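As an illustration of the interim operating authority calculation described above, the following sketch restates the arithmetic in Python. The function name, variable names, rounding behavior, and example figures are assumptions made purely for illustration; they are not drawn from the act or from FAA data. In practice, FAA computed the authorized number from the activity each operator reported in its application.

def interim_operating_authority(tours_final_12_months, tours_prior_36_months):
    # The act grants the greater of (a) the tours flown in the 12 months
    # ending April 5, 2000, and (b) the average tours per 12-month period
    # over the 36 months before the act (the 36-month total divided by 3).
    # Rounding a fractional average is an assumption made here for
    # illustration; the act does not specify how to round.
    average_per_12_months = tours_prior_36_months / 3
    return max(tours_final_12_months, round(average_per_12_months))

# Hypothetical example: an operator that flew 1,200 tours in the final
# 12 months but 4,500 tours over the prior 36 months (an average of
# 1,500 per 12-month period) would be authorized 1,500 annual tours.
print(interim_operating_authority(1200, 4500))  # prints 1500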
The Natural Sounds Program works to protect, maintain, or restore natural sounds in the national park units by working in partnership with park units to increase scientific and public understanding of the value and character of sounds that are appropriate for a park unit and to eliminate or minimize noise intrusions. The program provides technical assistance to park units in managing sounds and assessing impacts from noise, and performing outreach and education on sounds. FAA and the Park Service have taken steps to implement the act, but implementation has been slow and some of the act’s key requirements have not been addressed. Implementation of the act has been slow, in part, due to disagreements between FAA and the Park Service over the procedures necessary to implement the act in compliance with NEPA. While agency officials expect to develop ATMPs more quickly in the future now that they have drafted an implementation plan, they acknowledge that issues must still be addressed before the first ATMP is completed. FAA and the Park Service have taken several actions to implement the act and addressed some of its requirements. Specifically: As the act requires, the agencies established the National Parks Overflights Advisory Group (advisory group) in April 2001 to provide, among other things, continuing advice and counsel about air tours over and near national park units. The advisory group is composed of a balanced group of representatives of general aviation, air tour operators, environmental concerns, and Indian tribes. Since 2001, the advisory group has met periodically to discuss issues related to implementation, such as interim operating authority, increases in operating authority, and noise monitoring in the park units. In October 2002, FAA issued a final rule that completed the definition of an air tour and informed air tour operators that they must file an application for operating authority over national park units by January 23, 2003, in order to avoid a break in conducting such tours. Operators that conducted tours prior to April 5, 2000, may still apply for operating authority and may be granted interim operating authority. Owing in part to some confusion over application requirements on the part of air tour operators, in January 2005 FAA notified air tour operators that they could self-correct the information they provided in their applications for operating authority. In June 2005, FAA published in the Federal Register for public comment the list of air tour operators and the number of annual air tours each operator received under interim operating authority; the comment period closed October 31, 2005. To meet the act’s requirement for developing ATMPs, the agencies began developing plans in January 2003 for nine national park units—reduced to six in July 2005—for which air tour operators had applied for operating authority, with the goal of completing ATMPs at all park units by approximately 2010. Despite the progress made, officials at both agencies said that implementation has been slow because of (1) other priorities that took precedence over ATMP development and (2) disagreements between FAA and the Park Service over how to implement NEPA in assessing the impact of noise on park units. According to FAA officials, the agency temporarily suspended work on many rulemaking projects, including air tour management, and refocused its resources to address air passenger safety and security in the wake of the terrorist attacks on September 11, 2001. 
In addition, FAA and Park Service officials said the agencies’ differing missions and environmental policies have delayed some actions to implement the act. For example, FAA and the Park Service have some procedural differences over how to implement NEPA in assessing noise impacts on park units, including the appropriate noise model to use and the noise metrics that are applicable. While NEPA requires the preparation of an environmental impact statement for actions having significant environmental impacts, the agencies could not agree on the criteria to use in determining when noise from aircraft would have a significant impact on the environment and park unit resources. To resolve their differences, the agencies developed new methodologies to assess the potential adverse impacts of air tour noise on park unit resources and visitors’ experiences and agreed to a combination of FAA and Park Service guidance and practices to implement NEPA. Agency officials explained that the slow implementation was also due in part to the unexpected complexities of meeting some of the act’s requirements. Thus, it took time to resolve these and other issues between the agencies. For example, FAA and the Park Service developed new procedures to calculate emissions for aircraft that conduct air tours. FAA and the Park Service have taken steps to mitigate their disagreements and increase the efficiency of ATMP development. To establish a framework for cooperation and participation in implementation, the agencies signed a memorandum of understanding that addresses, among other things, the scope of work, financial terms, the process for developing and approving ATMPs, and outlines a dispute resolution process. Officials are to resolve their disagreements at the program level and elevate them to higher levels if they cannot resolve them. At the time of this report, the agencies had elevated one disagreement and were still working to resolve their differences over the language used in the environmental compliance documents that describes the purpose and need for an ATMP at a park unit. The lack of resolution on this issue could cause further delays, according to agency officials. Furthermore, the agencies drafted an implementation plan in September 2005 that will guide them through the process of developing ATMPs for the remaining park units that need them. The draft implementation plan establishes criteria for determining the order in which park units will develop ATMPs, the roles and responsibilities of FAA and the Park Service in developing ATMPs, how to develop ATMPs and the supporting environmental compliance documents, how ATMPs will be implemented and enforced once completed, and how the plans can be changed. As a result, agency officials expect the implementation plan, once finished, to help them develop ATMPs at the remaining park units in less time than it is taking to develop ATMPs for the first set of park units. However, according to FAA officials, it may take 5 more years before all of the 94 national park units begin developing ATMPs. Figure 1 summarizes the steps taken to implement the act, as of November 2005. Despite the progress made, some of the act’s key requirements have not been fully implemented. First, at the time of our review, it was unclear whether any of the six park units currently developing ATMPs will limit the number of air tours authorized, and if so, how competitive bidding will be handled. 
As agency officials acknowledge, they will eventually need to address this issue in the implementation plan and in ATMPs. Second, the agencies have not identified incentives for air tour operators to adopt technology that makes aircraft quieter, such as enclosed tail rotors on helicopters. In general, air tour operators and FAA officials said that incentives are needed because of the high cost of retrofitting old aircraft with quiet technology or purchasing new aircraft already equipped with quiet technology. However, according to FAA officials, before incentives can be provided in an ATMP, noise studies must be conducted to determine what impact, if any, quiet technologies would have on park unit resources. Although the draft implementation plan broadly addresses quiet technology incentives, it acknowledges that such incentives are yet to be devised for the first ATMPs underway. FAA and Park Service officials said they expect to develop a competitive bidding process and quiet technology incentives on a park-unit-by-unit basis and will address these issues before the first ATMP is drafted in fiscal year 2007. FAA and the Park Service's implementation of the act has limited the ability of air tour operators to make major decisions, such as expanding or selling their businesses. FAA, in cooperation with the Park Service, has not granted increases in interim operating authority to existing operators, nor interim operating authority to new entrants. Furthermore, air tour operators face uncertainty about whether they can legally transfer their authority to conduct air tours. As a result of the slow implementation of the act and the current time frame for developing ATMPs, these adverse effects on operators have been prolonged. In addition, as a safety requirement of the act, Part 91 operators must apply for more stringent safety certification from FAA in order to initiate or continue conducting air tours. In contrast to these effects on operators, the implementation of the act has so far had little effect on the 112 national park units we surveyed in July 2005. Specifically, more than half of the park units reported that the act's implementation had no positive or negative effect on their park unit, and about 30 percent were uncertain or did not know. FAA and the Park Service's implementation of the act has limited the ability of air tour operators to make major business decisions because FAA, in cooperation with the Park Service, has not granted (1) increases in interim operating authority to existing operators who applied for such increases or (2) interim operating authority to new entrants. Furthermore, air tour operators face uncertainty about whether they can legally transfer their authority to conduct air tours—which they believe should be valued in the marketplace—both during the interim operating period and once an ATMP is established. In addition to these uncertainties, in order to maintain the level of air tour activity they held prior to the act, Part 91 operators have generally decided to invest resources in their operations to meet the more stringent level of safety certification required by the act. According to FAA data, four existing operators have applied to FAA for a total increase of 7,860 air tours during interim operating authority at two park units (see table 1). Although the act allows FAA, in cooperation with the Park Service, to grant increases if certain conditions are met, the agencies have not done so. 
As a result, existing operators have not been able to expand their air tour businesses over national park units. For example, one air tour operator told us it had expanded its air tour business over Hawaii Volcanoes National Park at an average annual rate of 22 percent in the 4 years before it applied for operating authority in 2003. FAA granted the operator interim operating authority based on its activity during the year prior to the act, and the operator then requested an increase in the annual number of tours authorized. Since no increases have been granted, the operator has had to reduce the number of tours in order to remain compliant with the act. Existing operators have faced uncertainty about their potential to expand their air tour businesses for a longer period of time than expected: nearly 6 years after the act’s passage, no ATMPs have been completed, and the process for developing the vast majority of plans has not even begun. FAA officials told us they would like to grant increases in interim operating authority, and Park Service officials told us they would consider such increases in order to minimize the negative effects of the act’s slow implementation on air tour operators. The act allows increases in interim operating authority only if it is agreed to by both agencies, and promotes safe air tour operations and the protection of national park unit resources. Park Service officials said that because such increases could have significant environmental impacts, the agencies would either have to prepare an environmental assessment or collect additional data on where operators propose conducting more tours—such as flight paths, frequency of tours, and times of day—to evaluate those impacts prior to approving the requested increase in air tours. The two agencies have not reached an agreement on how they will handle increases during interim operating authority; due to the time and cost involved with an assessment, the agencies may not be able to determine whether increases can be granted until the assessment associated with the relevant park’s ATMP is developed. The agencies also have not issued interim operating authority to 16 new entrants that have applied for authority to conduct tours over a combined 46 national park units (see table 2). As a result, 10 operators have not been able to begin flying tours over any national park units, and 6 existing operators have not been able to expand their businesses to include additional park units. Among the former, for example, are two new entrant companies in Hawaii that told us they had refrained from flying within a half-mile of park units, despite customer demand and their own desire to grow their businesses. These two new entrants also said they were still waiting for official responses from FAA about the applications they filed in January 2003. The longer it takes to fully implement the act in developing ATMPs, the longer these and other new entrants may be impaired. The act gives FAA, in cooperation with the Park Service, the authority to grant interim operating authority to new entrants if certain conditions are met: (1) FAA determines the authority is necessary to ensure competition, (2) the authority would not create a safety or noise problem, and (3) the ATMP has not been developed within 2 years of the act’s passage. 
Although the third condition has been met in all cases, and the first condition might be met in some cases, Park Service officials told us they interpret the clause regarding noise as triggering the same environmental analysis that is needed for an ATMP—unless new entrants provide more data about when and where they propose flying. Because the agencies have not reached an agreement on how they will handle new entrants during interim operating authority, they may not be able to determine whether new entrants can be accommodated until they develop the relevant ATMPs. In the meantime, new entrants—like existing operators seeking increases in interim authority—are not able to make important decisions regarding their air tour businesses. Air tour operators are uncertain about whether they can legally transfer their flight allocations under the act, both during the period of interim operating authority and once an ATMP is completed, because FAA has not adequately communicated its opinion on this subject to operators and its field offices. As a result, operators have not been able to make major business decisions such as retiring or selling their businesses—or have made inappropriate decisions to transfer, sell, or buy air tour allocations. FAA’s opinion is that operating authority is not a property right or interest. However, as both FAA and aviation members of the advisory group stated at their June 2005 meeting, operators believe they should be able to transfer their air tour allocations if, for instance, an operator wants to go out of business. Now that air tours over national park units are regulated, they said, an existing operator’s value lies in its pre-2000 level of activity, as well as in its pilots and personnel, equipment, reputation, and other business assets. If an operator is not able to transfer its air tour allocations, advisory group members said there might not be any business to sell. FAA officials told us that operators’ uncertainty about their ability to transfer their air tour allocations stems from the fact that the act, regulations, and formal FAA guidance to its flight standards district offices and air tour operators have not addressed this subject. The air tour community is aware that while regulations for air tours over Grand Canyon National Park specify that operators have no property interest in air tour allocations, the regulations nevertheless allow such transfers subject to FAA control. And in practice, operators have successfully bought and sold allocations for that park unit with FAA’s knowledge. In addition, air tour operators have learned that some operators have successfully transferred or sold air tour allocations to other operators for other park units, causing further confusion about their permissibility. For example, we found one air tour operator had paid another operator in January 2003 for 10,911 air tours the latter conducted over 11 national park units prior to the act, including 7 park units over which the purchasing operator had not previously flown. Subsequently, the purchasing operator included all but three of these tours in its application for operating authority, with the FAA district office’s knowledge. FAA granted the operator interim operating authority to conduct a number of tours based on its own historical activity, plus the tours purchased from the other operator. 
FAA senior officials and attorneys told us that transfers of air tour allocations, whether during the interim period or once an ATMP is established, are generally not allowed but should be handled on a case-by-case basis. In a letter to one air tour operator in January 2003, FAA described the limited circumstances under which an operator could obtain the existing operator status of another operator for the purposes of applying for operating authority. Specifically, FAA said both the operator and the purchased entity would have to be corporations, and the purchased corporation would have to continue to exist as its own legal entity, in order for the new owner to obtain the existing operator status. However, this letter did not address the broader issue of transferring air tour allocations once interim or final operating authority is granted. FAA headquarters officials told us they had not widely communicated this letter or the agency's broader position to its district offices or to air tour operators because the confusion had not been brought to their attention until our review. As a result of this lack of communication, the district offices have addressed this issue inconsistently, and there may be deviations from headquarters' position. For example, one operator in South Dakota acquired an existing air tour operator in 2001 and applied for operating authority, considering its predecessor's history of air tour activity to be an acquired asset. The FAA district office and headquarters officials initially considered this operator to be a new entrant and denied it interim operating authority, causing it to pursue alternative sources of income for 2-1/2 years. Then, in mid-2005, FAA headquarters investigated the case and granted the purchasing operator interim operating authority because it had met the agency's requirements for obtaining existing operator status. But in the example mentioned earlier in which one air tour operator paid another operator for 10,911 air tours, another district office was aware of the transfer before it issued operating authority. Finally, under the jurisdiction of a third district office, we found that one operator who owns two companies has allowed one of them to use the other's air tour allocations over several park units—effectively making a transfer that is inappropriate under FAA's position. FAA officials told us that in order to continue their historic levels of air tour activity and income, Part 91 operators have to meet the higher standards of Part 135 regulations. According to FAA data, 68 of the 77 existing operators have been allowed to maintain their pre-2000 level of air tour activity over national park units because they were already certified under Part 135 regulations. Eight of the remaining nine operators that were Part 91 operators have applied for Part 135 certification, and one operator chose to remain a Part 91 operator. These eight operators have to invest resources in additional pilot training and check rides, aircraft inspections, safety manuals, record-keeping, and other activities to meet the Part 135 safety requirements that go beyond those required under Part 91 regulations. Although the amount of investment required to meet Part 135 standards varies from one operator to another, FAA officials told us the initial cost can vary from hundreds to tens of thousands of dollars. 
Alternatively, if Part 91 operators had decided not to apply for the Part 135 standards required by the act, and they had a high level of air tours prior to 2000, their loss of air tour business could have been significant. For example, one air tour operator applied for operating authority under Part 135 and reported that it had conducted 5,200 tours over Mount Rushmore National Memorial strictly under Part 91 regulations in the year before the act. FAA then issued interim operating authority to this operator for 5,200 tours annually over that park unit because the operator applied for Part 135 certification. But if this operator had not applied for authority under Part 135, it would have been restricted to five or fewer tours per month over that park unit—if it had acquired a letter of agreement signed by both agencies. At the time of our review, FAA data showed that only one Part 91 operator had not applied for Part 135 certification and had received a letter of agreement from FAA for up to five flights per month at one park unit. However, Park Service officials said this operator had not obtained a letter of agreement from the Park Service.

A majority of the 112 park units we surveyed in July 2005, as well as Park Service officials we spoke with, reported that FAA and the Park Service's implementation of the act has had neither a positive nor a negative effect on the park units, yet many still want an ATMP. Specifically, 62 (55 percent) of the park units reported that the act's implementation had no positive effect on their park unit, and 64 (57 percent) reported no negative effect, while only 17 (15 percent) reported a positive effect to some extent and 14 (13 percent) reported a negative effect to some extent. Another 33 (29 percent) and 34 (30 percent) park units responded that they were uncertain or did not know the effect, positive or negative, respectively. In written comments in the survey, 14 of the 112 park units attributed this lack of effect to their unit not having air tours. The act's implementation may result in positive, negative, or no effects on park units because the level of air tour activity has been held constant over park units under interim operating authority. For those park units wanting to reduce air tours, interim operating authority has preserved a level of activity that those park units already consider too high. On the other hand, for those park units where a low level of air tour activity currently exists (as defined by those park units), freezing the level of air tours under interim operating authority has prevented the growth of air tours. Finally, some park unit officials told us there has been no effect and that the status quo is acceptable.

Regarding the act's requirement that FAA and the Park Service establish ATMPs at all park units where applications for operating authority are made, 53 (47 percent) park units responded in the survey that they need an ATMP to mitigate or prevent potential adverse impacts on park unit resources, visitor experiences, and air safety. Fifty-nine (53 percent) of the park units stated that they had air tours over their park units and identified how air tours are affecting park unit resources. For example, as shown in table 3, 23 (39 percent) of the 59 park units reported a negative effect on visitors' experiences, and 3 (5 percent) reported a positive effect.
Because the agencies plan to first develop ATMPs at park units on the basis of the presence of new entrants, higher levels of air tour activity, and other priorities, some park units are not likely to have a complete ATMP until 2012. Some park units we visited told us this delay does no harm because they are not substantially affected by air tours, and they are therefore satisfied with the act’s limited implementation and the time frame for developing ATMPs. We identified four key issues to be addressed by the Congress and the agencies to improve implementation of the act: (1) a lack of flexibility for determining which park units should develop ATMPs, (2) an absence of Park Service funding for its share of ATMP development costs, (3) limited ability to verify and enforce the number of air tours, and (4) FAA’s inadequate guidance concerning the act’s safety requirements. The act requires an ATMP to be developed for any park unit where an application for operating authority is made, regardless of the size of the park unit, the number of air tours or operators at a park unit, or their impact. Thus FAA and the Park Service are not authorized to exclude any park units from the ATMP process. The number of air tours authorized at park units under interim operating authority ranges from more than 35,000 to 5 air tours per year. Of the 94 park units currently expected to develop ATMPs, 49 park units (52 percent) have 61 or more authorized air tours per year under interim operating authority. Of the remainder, 36 park units (38 percent) have 60 or fewer authorized air tours per year, and 9 park units (10 percent) have no authorized air tours because only new entrants applied for authority at those park units. According to the 112 park units we surveyed, more park units have other types of aviation than air tours, and more park units cited those other types of aviation as their biggest aviation concern. Specifically, more national park units reported having military, general aviation, and high-altitude commercial flights than air tours over their park units: 89 park units (79 percent) had military overflights, 97 park units (87 percent) had general aviation overflights, and 95 park units (85 percent) had high-altitude commercial overflights. In contrast, 59 park units (53 percent) reported having air tours, and 59 park units had other types of overflights, such as pesticide spraying, search and rescue, and Park Service research flights. Furthermore, when asked to identify the one or two types of overflights that had the most negative effect on their park unit, 56 park units cited general aviation and 44 park units reported military flights, compared with the 33 park units that cited air tours. More than half of the 112 park units we surveyed responded either that they did not need an ATMP or they were unsure if they needed one, despite the act’s current requirement. Specifically, 43 park units (38 percent) reported not needing an ATMP, and 16 park units (14 percent) did not know or were uncertain about the need for a plan. In our discussions with Park Service and FAA officials and air tour operators, we found that voluntary agreements at some park units, such as Haleakala and Badlands National Parks and the Statue of Liberty National Monument, had successfully established air tour routes and elevation levels to minimize impacts from air tours on park unit resources and visitors on the ground. 
These voluntary agreements, some of which were adopted prior to the act, are good management practices that could be replicated at other park units. For example, Haleakala officials started working with air tour operators in the early 1990s in response to visitors’ complaints about noise from helicopter tours over the park unit’s crater, while recognizing that such tours provide an alternative means for visitors to enjoy the park unit. In 1998, the Haleakala officials and operators signed an agreement establishing routes that keep air tours outside the crater and away from visitor centers but still allow air visitors to enjoy the crater from 500 feet above ground level within one mile of its southern boundary. In addition, the helicopter companies agreed to take punitive action against pilots if there are complaints of violations, and Park Service officials and helicopter companies meet once a month to discuss issues. As a result of this agreement, Haleakala officials and air tour operators told us complaints have decreased substantially, and operators have been able to maintain their tour businesses. If some park units find that they do not need ATMPs, then the agencies will save federal dollars if they have the option of not developing such plans. FAA estimates spending an average of $405,000—ranging from $257,000 to $681,000—on the environmental analyses required at each of the first nine park units that started developing ATMPs, compared with the agency’s original estimate of an average $300,000 per park unit. Based on the current cost and the number of park units currently expected to develop ATMPs, the development of such plans could cost the federal agencies an estimated $38 million in current dollars. Officials from both agencies and members of the federal advisory group have expressed concern about the cost and time required to fully implement the act by developing ATMPs at all 94 park units where applications for operating authority were still active as of November 2005. In particular, officials at both agencies have questioned whether it is cost-effective to develop ATMPs for park units where there is a low level of air tour activity or where there is greater concern about other environmental impacts, such as vehicular traffic or other types of aviation. Within the confines of the act, officials from both agencies said they are considering alternatives to the ATMP development approach used at the first nine park units—alternatives that these officials believe would fulfill the act’s requirements but potentially save the agencies time and money in developing the plans. Under NEPA, agencies may adopt procedures to determine which actions usually do not have any significant impact on the environment and therefore need not be the subject of an environmental assessment or impact statement; these actions are referred to as categorical exclusions. In cases where the stakeholders agree there are no significant impacts from air tours, agency officials said they may be able to issue ATMPs using their respective categorical exclusion procedures or they could issue an abbreviated environmental assessment. With this in mind, FAA has proposed creating an aviation rulemaking committee to pursue an expedited ATMP process at park units where there is low air tour activity and little public controversy. 
FAA envisions that this rulemaking committee would be chaired by a park unit's superintendent and would comprise stakeholders from both agencies, the aviation community, environmental groups, the nearby residential community, and any other appropriate interest groups. The committee would hold a public hearing and create an ATMP that would be published for comment and then issued as a final rule. As precedent for this proposal, FAA officials pointed to the success of a rulemaking committee it convened in 1999 to address issues surrounding the regulation of operations conducted by fractional owners and managers. That committee, composed of 27 representatives of the aviation community and relevant federal agencies, drafted proposed regulations and provided the necessary funding to conduct environmental and economic analyses of the proposed regulations. The two agencies are also considering three other approaches to expedite the ATMP process: (1) group several park units under one plan, (2) perform one environmental analysis to support multiple ATMPs, or (3) develop an environmental impact statement that could be used for as many ATMPs as possible nationwide. However, at the time of our review, the agencies had not committed to any of these approaches or agreed on when and where they might be applied; thus it is too early to know what results may come from these efforts. Furthermore, even if they are successful, the agencies will still have the responsibility of developing an ATMP for each park unit where an operator proposes to conduct air tours, then monitoring air tour operators' compliance with an ATMP and enforcing the ATMP's requirements. These responsibilities require resources beyond the creation of the ATMP. As a result, FAA and the Park Service have discussed the benefit of legislative changes to the act in order to give the agencies authority to determine which park units need an ATMP. FAA officials expressed concern that if the agencies were to recommend a legislative change to the Congress, it might trigger an environmental review under NEPA that is similar to what is already being done to develop the ATMPs, which is both costly and time-consuming. This would effectively diminish the benefits of seeking such a change. FAA and Park Service officials concurred that this is an issue the Congress should handle without a formal legislative proposal from the agencies.

The Park Service has not funded its share of the cost of developing ATMPs, despite its agreement with FAA to fund 40 percent of this effort. In a memorandum of understanding between FAA and the Park Service, the agencies agreed that FAA would fund 60 percent and the Park Service would fund 40 percent of the cost of developing ATMPs. The agreement describes the qualifying costs as external contractor costs required to produce ATMPs. These qualifying costs exclude staff salaries, benefits, and travel for agency personnel; agency equipment and supplies; and any costs for in-house contractors hired by either agency. From fiscal year 2001 through fiscal year 2005, FAA funded 100 percent of the initial ATMPs' development—$27 million through distinct budget appropriations—while the Park Service did not request or receive any dedicated funding for the program until fiscal year 2006, when Congress provided $500,000 toward air tour management.
Although the Park Service has also contributed staff time to work with FAA on the development of ATMPs, the cost of doing so does not count toward its 40 percent obligation, according to the memorandum of understanding. At the current estimate of an average of $405,000 per park unit for ATMP development, FAA officials estimate the agency will need an additional $13 million for fiscal years 2006 through 2010, for a total program cost since 2001 of about $38 million for 94 ATMPs. At that level, FAA has already received about 67 percent of the total ATMP cost—if the Park Service receives funds for the remaining $13 million that FAA estimates is needed, that will be just 33 percent of the total cost. If the Park Service does not meet its obligation within the next 2 years, according to FAA and Park Service officials, implementation may be hindered. However, officials from both agencies said adoption of alternative approaches to the ATMP process could lower costs.

Neither FAA nor existing laws and regulations require operators to record and report the number of air tours they conduct over national park units. Consequently, FAA and the Park Service lack a mechanism to verify the number of air tours conducted over national park units, both historically and under interim operating authority. Of the 25 existing operators we interviewed, 23 told us they had used a variety of documents, such as flight logs and ticket sales receipts, to estimate their pre-2000 air tour activity in their applications for operating authority. However, FAA officials and operators said the quality of these data varied since there is no record-keeping requirement, and two operators told us they had no records of their pre-2000 activity. In addition, we found two operators had deliberately inflated their estimates to ensure some growth in future years—even though that action ran counter to the act's intent. Specifically:

According to one air tour operator, because of the act's passage in 2000, the operator started keeping track of its air tours and deliberately inflated the number reported in its application for operating authority in 2003 to allow for future expansion. Without the documentation to verify this information, FAA issued interim operating authority to this operator for the inflated amount, and this operator has not had to limit its tours in recent years as it might have if it had reported actual numbers.

Another operator with one pilot applied for operating authority totaling more than 1,500 air tours annually at dozens of park units spanning 6 Western states. FAA and Park Service officials said it was unlikely this operator could have conducted that many tours, and in their view, the company had inflated the number of tours it reported. However, without reliable data to prove or disprove the operator's claim, FAA granted the company interim operating authority for the reported activity.
For example, one owner told us his company was exceeding its interim operating authority by more than 3,000 tours per year over a major national park unit, and was conducting tours over two other park units for which it had no authority. According to FAA officials, it is in an operator’s best interest to keep records of its tours over national park units to verify the number of tours conducted, and some operators are doing so as a good business practice. Without a requirement for operators to maintain and report such records, however, the agencies cannot take appropriate action to enforce the act or deter violations. Consequently, those operators who deliberately inflated their pre-2000 flight activity in their applications enjoy higher levels of activity under interim operating authority than the act intended, and thus may have a competitive advantage over operators who provided more accurate data. To address this problem, FAA told us, legislation or rulemaking is needed to require operators to maintain and report records during the interim operating period. Once an ATMP for a park unit is completed, agency officials believe each ATMP should include reporting requirements in order to make the act enforceable. However, at the time of our review, FAA, as the agency responsible for regulating air tour operators, had not decided how it would implement a reporting requirement. FAA has not instructed its flight standards district offices or air tour operators on how to interpret and enforce the act’s requirements for Part 91 operators, which are now required to meet the safety standards of Part 135 regulations. Under an exemption in the act, Part 91 operators may continue to be regulated under Part 91 if they obtain a letter of agreement from FAA and the relevant park unit’s superintendent and are limited to a combined total of five air tours per park unit per month. We found that 3 of the 29 companies we interviewed had not taken steps for all their pilots and aircraft to meet Part 135 standards, had not obtained letters of agreement from the two agencies, and were exceeding the five-tour limit using pilots and/or aircraft qualified only for Part 91 operations. Furthermore, we found that the number of tours conducted by one operator’s Part 91 pilots and aircraft exceeded its interim operating authority. Specifically, this operator employed a single pilot and a single helicopter qualified for Part 135 operations, and seven pilots and three helicopters under Part 91 regulations, to conduct tours within a half-mile of a major national park unit. The operator’s manager estimated those Part 91 pilots and aircraft had given hundreds of tours within one half-mile of a national park unit in the previous year. The manager believed the interim operating authority applied only to himself and his one Part 135-certified helicopter, and did not apply to the other pilots and aircraft. Officials in the two FAA flight standards district offices overseeing the three operators mentioned above were either not aware of these circumstances or believed the operators were in compliance with the act because they had at least a single pilot and single aircraft that were Part 135-certified. According to officials at the district office who were aware of the circumstances, the fact that those operators used additional pilots and aircraft qualified for only Part 91 operations was immaterial and not a violation of the act. 
FAA attorneys and other agency managers disagreed, indicating the operators mentioned above were in violation. They interpreted the act to mean that unless an operator chose to operate under the Part 91 restrictions, all pilots and aircraft conducting tours over national park units should meet Part 135 standards in order to increase safety. FAA officials said the issue had not been brought to their attention until our review, and they agreed that interpretation of the act by some FAA district offices seemed to be inconsistent. We found that the guidance FAA headquarters provided to district offices and air tour operators regarding the requirements for operating authority applications was not clear about this issue. For instance, the guidance did not require the companies to identify the number of pilots they employed, or what specific certification level those pilots and their aircraft were qualified for. In the three cases where we found the companies were exceeding the five-tour limit using Part 91 pilots and aircraft, their applications for operating authority did not disclose their additional pilots, did not specify the level of certification their aircraft met, or both. Furthermore, both the act and FAA guidance routinely use the term "operator," which is broadly defined to refer to companies, corporations, individuals, and other entities. Agency officials said the scope of the act's intentions was not clear on this matter, and the common use and interpretation of the term "operator" as a business—not an individual—could have caused confusion within the aviation community.

The National Parks Air Tour Management Act provided FAA and the Park Service with new authority to regulate air tours over national park units to ensure that the noise from such tours does not impair visitors' experiences or damage park unit resources. However, some of the act's requirements, and FAA's and the Park Service's slow implementation, have had unintended consequences for air tour operators and relevant park units. The level of air tours over the park units has been held constant under interim operating authority at pre-2000 levels for nearly 6 years because no ATMPs have yet been completed. FAA and the Park Service, air tour operators, and some members of Congress did not envision that so many park units and air tour operators would be operating under interim operating authority for so long. Maintaining the level of air tour activity for those park units that were adversely affected by air tours may be justified while the agencies try to assess their impacts. However, according to the 112 park units we surveyed, many of the park units currently scheduled to get an ATMP may not need one for the foreseeable future—but the act does not provide the agencies with any flexibility to determine which park units do not need ATMPs. While the agencies are currently considering more cost-effective methods for developing ATMPs within the confines of the act, it is too early to know what results may come from those efforts. Amending the act to authorize the agencies to determine which park units should develop ATMPs would go a long way toward addressing the unintended consequences of the act at a number of park units, and could save federal dollars by not requiring the development of ATMPs for some park units.
Park units identified as needing an ATMP would continue to be regulated under the act as they are now, while the other park units that do not currently need an ATMP would become unregulated, thus allowing existing operators at those park units to grow their businesses and new entrants to begin operating. At any time in the future, should the level of air tour activity at an unregulated park unit expand to a level that warrants the development of an ATMP, the agencies would have the necessary authority to begin regulating the air tours at that park unit. This flexibility would also encourage park units and air tour operators, under the threat of becoming regulated, to negotiate and comply with voluntary agreements to mitigate the impacts of air tours. In amending the act, the Congress could consider different processes and criteria for the agencies to determine which park units will develop ATMPs. For example, Congress, in consultation with the agencies, could establish a process with specific criteria that the agencies use to determine which park units should have ATMPs. Other options Congress could consider are a nomination process with approval by the National Parks Overflights Advisory Group, or a process whereby the agencies could assess the need for an ATMP based on the likelihood of potential significant adverse impacts.

FAA has determined, but not effectively communicated or consistently enforced, the circumstances under which air tour operators may or may not transfer or sell their air tour allocations. As a result, air tour operators do not know if they should plan to expand, reduce, or even sell their businesses. Since some air tour operators have assumed that their air tour allocations can be transferred or sold, they have been doing so, with the knowledge and approval of their local FAA flight standards district offices, contrary to the position of FAA headquarters.

For consistent enforcement of the number of air tours authorized under interim operating authority, and ultimately under ATMPs, FAA and the Park Service must be able to verify the number of air tours conducted over a national park unit by each authorized operator. Since air tour operators are currently not required to maintain and report information on their air tours, FAA and the Park Service are unable to enforce the air tour allocations. By not enforcing the air tour allocations, the agencies are allowing operators that are exceeding their allocations to have an unfair business advantage over those operators that are complying with the act; this excess activity may also have adverse impacts on visitors' experiences and park unit resources. Finally, consistent enforcement of air tour operators' allocations over national park units, both under interim operating authority and once an ATMP is established, is vital to controlling the impacts from air tour noise on the national park units and to ensuring a level playing field among all the air tour operators.

Nearly 6 years after the passage of the act, a great deal of confusion remains regarding the act's safety requirements. FAA has not provided definitive guidance to its flight standards district offices on how they should interpret and enforce the act's safety requirements and exemption for Part 91 operators. As a result, this provision has generally not been enforced, and some air tour operators are not in compliance with the act.
To allow more cost-effective implementation of the National Parks Air Tour Management Act, Congress may wish to consider amending the act to authorize the agencies to determine which park units should develop ATMPs.

To improve compliance, enforcement, and implementation of the National Parks Air Tour Management Act, we recommend that the Secretary of Transportation direct the Administrator of FAA to take the following three actions:

Communicate the agency's position to its district offices on whether operating authority is transferable or sellable under both interim and final operating authority, and if so, under what conditions.

Establish a procedure for air tour operators to record and report to FAA and the Park Service the number of air tours they conduct over national park units, under both interim and final operating authority.

Clearly communicate to FAA district offices how to interpret, and thus enforce, the act's requirements for Part 91 air tour operators.

We provided the Departments of Transportation and the Interior with a draft of this report for review and comment. The Department of Transportation offered technical comments and otherwise generally agreed with the findings of this report and agreed to consider our recommendations as they move forward with the program. The Department of the Interior provided written comments that are included in appendix VI, along with our specific response. Interior generally agreed with our findings and recommendations, but it questioned whether Congress needs to amend the act to give the agencies greater flexibility. Interior commented that it was concerned that amending the act could "unnecessarily or unwittingly" jeopardize the protection of park resources and visitor enjoyment by excluding some park units from the ATMP process solely based on their level of air tour activity. Furthermore, Interior commented that there are several administrative remedies available to the agencies that might be best used to address those park units with low air tour activity. We agree that any amendments to the act should preserve the Park Service's authority to develop an ATMP at any park unit it deems necessary and that park units should not be arbitrarily excluded from the process solely based on their level of air tour activity. However, we disagree that existing administrative remedies would provide the flexibility that is needed to achieve the most effective and efficient implementation of the act. The purpose of providing flexibility in the act is not to exclude park units that need ATMPs, but rather to provide the agencies the flexibility not to develop ATMPs for park units where the agencies deem them to be unnecessary. In its comments, Interior suggested support for this approach by stating that it "… would agree to a general grant of authority which would provide the agencies discretion to make such determinations based on agency developed criteria that goes beyond simply the level of air tour activity." Interior also provided technical comments and editorial suggestions that we have incorporated throughout the report, as appropriate.

We are sending copies of this report to the Secretaries of Transportation and the Interior, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.
If you or your staff have questions about this report, please contact me at (202) 512-3841 or nazzaror@gao.gov. Key contributions to this report are listed in appendix VII.

We identified and analyzed applicable laws, regulations, policies, and procedures to determine what actions the Federal Aviation Administration (FAA) and the National Park Service (Park Service) have taken to implement the act and what remains to be addressed. Specifically, we used the act to identify what actions the agencies were required to take. To learn what actions FAA and the Park Service have taken to implement the act's requirements, we reviewed notices and regulations published in the Federal Register and agency documents, and we interviewed agency officials at FAA and Park Service headquarters, FAA district offices, and national park units, as well as air tour operators. We selected a nonprobability sample of 12 park units to visit because 9 of the 12 were the first park units FAA and the Park Service chose to develop air tour management plans (ATMPs), and the other 3 faced circumstances that differed from the first 9: one had a military flight restriction, one was a potential candidate for an alternative method for developing an ATMP, and one believed an ATMP was not needed because it had so few air tours. The 12 park units are: Great Smoky Mountains, Hawaii Volcanoes, Haleakala, and Badlands National Parks; Kalaupapa, Pu'uhonua o Honaunau, and Kaloko-Honokohau National Historical Parks; Pu'ukohola Heiau National Historic Site; the USS Arizona Memorial; Lake Mead National Recreation Area; and Mount Rushmore National Memorial.

We assessed budget data describing the Park Service's requests for annual appropriations and FAA's funding dedicated to developing ATMPs, including those funds obligated to the Volpe Center. This budget information covered fiscal years 2001 through 2006 and was obtained from budget appropriation reports, the agencies' budget requests, and budget summaries provided by FAA and the Volpe Center. We determined that these data were sufficiently reliable for the purposes of this report. We also assessed data describing the number of air tours by operators over various national park units. Of interest were data on numbers of air tour operators, including existing operators, new entrants, and total applicants, as well as numbers of annual authorized air tours and new or increased authority requested. We interviewed agency officials regarding a series of data reliability questions addressing areas such as data entry, data access, quality control procedures, and data accuracy and completeness. We asked follow-up questions whenever necessary. We determined that these data were sufficiently reliable for the purposes of this report.

To assess how air tour operators have been affected by the implementation of the act, we reviewed FAA and Park Service documents and documents provided by air tour operators, and we interviewed a sample of 29 operators at the 12 different park units we visited. Where appropriate, we used documents provided by the operators, FAA, and the Park Service to corroborate information we collected from interviews with the air tour operators. To choose the operators to interview, we divided the operators at each park into 3 groups based on the number of annual air tours they were granted under interim operating authority—small (3,000 or fewer air tours), medium (between 3,001 and 10,000 air tours), and large (more than 10,000 air tours).
Next, we randomly selected operators in each group and met with at least three in each group; we also met with all new entrant operators regardless of the number at each park unit. The air tour operators we interviewed are: Adventure Air, LC.; Air Grand Canyon, Inc.; American Aviation, Inc.; Aris, Inc.; Aviation Ventures, Inc.; Badger Helicopters, Inc.; Big Island Air, Inc.; Black Hills Aerial Adventures, Inc.; Call Air, Inc.; Eagle Aviation, Inc.; Grand Canyon Airlines, Inc.; Great Smoky Mountains Helicopter, Inc.; Helicopter Consultants of Maui, Inc.; K & S Helicopters, Inc.; King Airelines, Inc.; Manuiwa Airways, Inc.; Maverick Helicopters, Inc.; Mauiscape Helicopters, Inc.; Mokulele Flight Service, Inc.; Rainbow Pacific Helicopters, Ltd.; Rambo Helicopter Charter, Inc.; Rushmore Helicopters, Inc.; Safari Aviation, Inc.; Scenic Airlines, Inc.; Schuman Aviation, Co. Ltd.; Vista Helicopter Services, Inc.; Skycraft Air Maintenance, Ltd.; Sunshine Helicopters, Inc.; and Windrock Aviation, LLC. To assess how the national park units have been affected by implementation of the act, we reviewed FAA and Park Service documents, conducted a survey of all 112 national park units identified to develop ATMPs as of July 2005, interviewed representatives of the 12 park units we visited, and also interviewed FAA and Park Service officials at their headquarters offices. We surveyed all national park units where air tour operators had applied for operating authority, which includes existing and new entrant operators. Ten tribal lands within or abutting national park units were also identified as needing to be part of the ATMPs developed at the relevant park units, but our review is limited to the implementation of the act at national park units. Since our survey was conducted, some air tour operators withdrew their applications and other corrections were made by FAA, which resulted in a reduced number of park units that have been identified to develop ATMPs—as of November 2005, there are 94 park units identified to develop ATMPs, which are listed in appendix III. We designed our survey with the assistance of a GAO methodologist. During its design, we reviewed a similar survey conducted by the National Parks Conservation Association and also obtained input from FAA and Park Service officials. Even though we surveyed all 112 national park units identified to develop ATMPs as of July 2005, the practical difficulties of conducting any survey may introduce other types of errors, commonly referred to as “nonsampling error.” For example, differences in how a particular question is interpreted or the sources of information available to respondents can introduce unwanted variability into the survey results. We included steps in both the data collection and data analysis stages for purposes of minimizing such nonsampling errors. We pretested the content and format of the survey with three national park units. We also had the survey independently reviewed by a GAO survey specialist. Based on the results of these pretests and reviews, we revised the survey instrument as appropriate. All returned surveys were reviewed, and we called respondents to obtain follow-up information when questions were not answered or clarification was needed. Our survey response rate was 100 percent. All survey data were keypunched, and then an additional sample of the data were verified as an added check for accuracy of the information. The data were then summarized and tabulated, and the aggregate results are included in appendix II. 
It is worth noting that not all park units surveyed responded to question 6. In this question, park units were asked to identify what effect, if any, air tours over the park unit have had on the following resources: visitors' experience, cultural/historical resources, natural resources/wildlife, number of visitors to the park unit, and other resources. Fifty-one park units indicated that they do not have air tours, which would imply that the remaining 61 of the 112 park units surveyed do have air tours. Yet, 59 park units responded that they do have air tours and 2 park units did not respond because they were unsure whether the flights over their park units were air tours or some other type of flight. We conducted our work from January 2005 through January 2006 in accordance with generally accepted government auditing standards.

Thank you for taking the time to complete the following brief survey, which the U.S. Government Accountability Office (GAO) is using to assess the effect of the National Parks Air Tour Management Act of 2000 on National Park Units nationwide. Senators Daniel K. Akaka (D-HI), Lamar Alexander (R-TN), Jeff Bingaman (D-NM), Daniel Inouye (D-HI), and John McCain (R-AZ) have requested GAO, as an independent Congressional agency, to review the implementation of the Act by the Federal Aviation Administration (FAA) and the National Park Service (NPS). This important survey will help inform our review of the Act and its implementation to date. The following survey is designed to gauge the effects, both positive and negative, of commercial air tours on the National Park Units. For example, sound from such tours might have a negative effect on visitors' experiences, wildlife, or other Park Unit resources. On the other hand, air tours might have a positive effect by allowing elderly, disabled, and other persons to experience the Park Unit. Thus the effects could be positive and/or negative; please respond to the following questions with all effects that apply. This survey should take about 15 minutes or less to complete.

Please read the following instructions carefully before starting the survey. This survey can be completed on your computer. To do this, first save the MSWord file containing the survey to your computer. You may then enter your responses directly to that file. Completing the survey is very simple. There are only a few rules to follow. Please use your mouse to navigate by clicking on the field or check box you wish to answer. To select a check box, simply click on the center of the box and an 'X' will appear. To change or deselect a check box response, simply click on the checked box and the 'X' will disappear. To respond to a question that requires that you enter an answer or write a comment, click on the answer box and begin typing. You may type as much as you wish; the box will expand to accommodate your answer. Please identify the individual at your particular Park Unit who is most knowledgeable of this subject area, if not yourself, to complete this survey by Friday, July 22, 2005; we would like only 1 response per Park Unit. Also, please answer these questions from your perspective as a National Park Service representative, rather than expressing your personal opinion as a private individual. If you need additional room for written comments, please use the last page and/or submit additional text, and be sure to indicate which question(s) your additional comments address.
Please email your completed survey to XXXX at XXXX, or fax it to: XXXX with Attn: XXXX on the cover sheet. Please complete and return this survey by Friday, July 22, 2005. We understand there are great demands on your time; however, your response is crucial to provide important information to Congress. Thank you in advance for your cooperation. If your response will be delayed, or if you have any questions, please call or e-mail: Please enter the following information in the event we need to clarify a response.

Definition: For the following survey questions, the word effect refers to the positive and/or negative impacts of commercial air tours on visitors' experiences in the Park Unit, on Park Unit resources, and/or on the number of visitors to the Park Unit.

1. Which of the following types of overflights are there over your Park Unit? (Please check all that apply.)
N=89 Military
N=97 General Aviation (private, non-commercial flights)
N=59 Commercial air tours (sightseeing, fixed-wing aircraft or helicopters that fly less than 5,000 feet above ground level and within a National Park Unit or within a ½ mile outside its boundary)
N=95 High-elevation commercial (jets used for transportation; fly at least 5,000 feet above ground level)
N=59 Other (please specify)
N= 2 None (Please go to question 4)
N= 2 Uncertain/don't know (Please go to question 4)
* The number of responses for Question 1 is greater than 112 because the respondent could select more than one answer. The 59 respondents that answered "other" specified that other types of aviation include agricultural pesticide spraying, Park Service research, fire management, search and rescue, medical evacuation, law enforcement, hot air balloon, ultralight aircraft, and other federal agency flights.

2. Of the types of overflights that you checked in Question 1, which one or two types of overflights have had the most negative effect on your Park Unit? (You may identify the same types of overflights as having both positive and negative effects.)
* The number of responses for Question 2 is greater than 112 because the respondent could provide up to two answers. The responses included Military – 44, General Aviation – 56, Commercial Air Tours – 33, High-Elevation Commercial – 17, Other – 22, None – 6.

3. Of the types of overflights that you checked in Question 1, which one or two types of overflights have had the most positive effect on your Park Unit? (You may identify the same types of overflights as having both positive and negative effects.)
* The number of responses for Question 3 is greater than 112 because the respondent could provide up to two answers. The responses included Military – 4, General Aviation – 11, Commercial Air Tours – 9, High-Elevation Commercial – 4, Other – 30, None – 29, Uncertain/Don't Know – 2.

4. Since April 2000, has your Park Unit received written complaints from visitors about any type of overflights?
N=26 Yes (Please go to question 5.)
N=77 No (Please go to question 6.)
N= 9 Uncertain/don't know (Please go to question 6.)

5. Please enter the number of written complaints for each type of applicable overflights that your Park Unit has received. (You may provide estimates; however, please indicate if you are doing so by noting (E) after the number.) Other (please specify)
* Question 5 was not reported because the data were unreliable.

6. If there are commercial air tours over your Park Unit, what effect, if any, have they had on the following?
(Please check all that apply):
Visitors' experience: N=3
Cultural/Historical resources: N=2
Natural resources/Wildlife: N=1
Number of visitors to the Park Unit: N=3
If there are commercial air tours over your Park Unit, please explain your answer: N=52 (Number of respondents who provided comments)
N=51 There are no commercial air tours over the Park Unit (Please go to question 8)
* Fifty-one park units indicated that they do not have commercial air tours, which would imply that the remaining 61 of the 112 park units surveyed do have air tours. Yet, only 59 park units responded. The remaining two park units were unsure as to whether the flights over their park units were air tours or some other type of flight. As a result, these two park units did not respond to question 6.

7. In your opinion as a NPS representative, does your Park Unit have a level of commercial air tour activity to warrant the cost of collecting business-use fees from air tour operators?

8. The National Parks Air Tour Management Act of 2000 requires FAA and NPS to jointly develop an Air Tour Management Plan for every Park Unit with commercial air tours. Regardless of this requirement, do you think your Park Unit needs an Air Tour Management Plan to mitigate or prevent potential impacts to Park resources, visitor use, and air safety?
Yes: N=53
No: N=43
Uncertain/don't know: N=16
Please explain your answer: N=83 (Number of respondents who provided comments)

9. Has your Park Unit entered into any agreements, formal or informal, with commercial air tour operators in recent years?
Yes: N=12
No: N=99
Uncertain/don't know: N=1
If yes, please briefly describe these agreements: N=15 (Number of respondents who provided comments)

10. Does your Park Unit collect fees upon entry from commercial bus tours using an Incidental Business Permit?
N=22 Yes (Please go to question 11.)
N=86 No (Please go to question 12.)
Uncertain/don't know (Please go to question 12.)

11. How many commercial bus tours entered your Park Unit in fiscal year 2004, and how much did you collect in commercial bus tour fees? (You may provide estimates; however, please indicate if you are doing so by noting (E) after the number.)
Number of commercial bus tours in FY 2004: _______*___________
Amount collected in commercial bus tour fees in FY 2004: _____*_____________
* Question 11 was not reported because the data were unreliable.

12. Since the National Parks Air Tour Management Act was passed in 2000, to what extent has the implementation of the Act had a positive effect on your Park Unit?
To no extent (no effect): N=62
To a small extent: N=9
To a moderate extent: N=7
To a great extent: N=1
To a very great extent: N=0
Uncertain/don't know: N=33
Please explain your answer: N=60 (Number of respondents who provided comments)

13. Since the National Parks Air Tour Management Act was passed in 2000, to what extent has the implementation of the Act had a negative effect on your Park Unit?
To no extent (no effect): N=64
To a small extent: N=7
To a moderate extent: N=5
To a great extent: N=1
To a very great extent: N=1
Uncertain/don't know: N=34
Please explain your answer: N=51 (Number of respondents who provided comments)

14. No questionnaire of this type can cover every relevant topic.
If you wish to expand on your answers or comment on any other topic related to the National Parks Air Tour Management Act of 2000 or its implementation, please use the space below and/or submit additional text. N=56 (Number of respondents who provided comments)

Thank you for your time and participation in this important survey – we greatly appreciate your input!

Golden Gate National Recreation Area (includes Alcatraz Island, Muir Woods National Monument, Presidio of San Francisco, and Fort Point National Historic Site)
Great Sand Dunes National Park and Preserve
Hagerman Fossil Beds National Monument
Hubbell Trading Post National Historic Site
Lake Mead National Recreation Area (includes part of Parashant National Monument)
North Cascades National Park (includes Lake Chelan National Recreation Area)
Salinas Pueblo Missions National Monument
San Francisco Maritime National Historical Park
Statue of Liberty National Monument (includes Ellis Island National Monument)
Sunset Crater Volcano National Monument

Recreation Area; Gateway National Recreation Area; Lava Beds National Monument; Lower East Side Tenement Museum National Historic Site; Manhattan Sites (which includes Castle Clinton National Monument, Saint Paul's Church National Historic Site, Federal Hall National Memorial, General Grant National Memorial, Hamilton Grange National Memorial, and Theodore Roosevelt Birthplace National Historic Site); Manzanar National Historic Site; Pinnacles National Monument; Santa Monica Mountains National Recreation Area; Tonto National Monument; and Whiskeytown National Recreation Area. In addition, FAA withdrew interim operating authority in October 2005 for one operator at three park units where there were no other applicants. Those three park units are: John Muir National Historic Site, Redwood National and State Parks, and Rosie the Riveter/World War II Home Front National Historical Park. This information reflects applications for operating authority that were still active as of November 2005.

Lake Mead National Recreation Area (includes part of Parashant National Monument)
Statue of Liberty National Monument (includes Ellis Island National Monument)
Glen Canyon National Recreation Area
Golden Gate National Recreation Area (includes Alcatraz Island, Muir Woods National Monument, Presidio of San Francisco, and Fort Point National Historic Site)
San Francisco Maritime National Historical Park
Great Smoky Mountains National Park
North Cascades National Park (includes Lake Chelan National Recreation Area)
Canyon de Chelly National Monument
Chaco Culture National Historical Park
Sunset Crater Volcano National Monument
Hubbell Trading Post National Historic Site
Gila Cliff Dwellings National Monument
San Juan Island National Historical Park

The following are GAO's comments on the Department of the Interior's (Interior) letter dated January 5, 2006.

1. While it is true that FAA and the Park Service could not agree on what constitutes a significant impact, we did not include specifics of the disagreement because, according to agency officials and the implementation plan, the problem has been resolved. Furthermore, the agencies' slow implementation of the act has been discussed in past congressional hearings and we concluded that we could not provide any new information.

2. We agree that Congress, in amending the act, should preserve the Park Service's authority to develop an ATMP for any park unit it deems necessary.
Our intent for suggesting that Congress consider amending the act was to provide the agencies with the flexibility not to develop ATMPs for park units where the Park Service deems them to be unnecessary. That flexibility currently does not exist in the act. As we discuss in the report, an ATMP may be unnecessary for a specific park unit for a variety of reasons, including (1) a low level of air tour activity, (2) an existing voluntary agreement governing air tour activities, or (3) more significant effects by other types of overflights at the park unit. We agree that whether or not an ATMP is necessary for a specific park unit could depend on a number of factors and not just solely on the level of air tour activity. Furthermore, we agree that in amending the act Congress should not arbitrarily exclude park units from the ATMP process solely on the basis of their level of air tour activity. We believe that such an exclusion would be an oversimplification of a complex issue, and we did not imply such a solution in this report. 3. We disagree that programmatic approaches taken by the agencies could provide the same flexibility as a legislative amendment. As Interior acknowledges in its comments, the act does not give the agencies the authority to exclude any park unit from the ATMP process. In the matter for congressional consideration, we offer the point that Congress may wish to give the agencies such authority. 4. We disagree with Interior’s characterization of our survey results as “anecdotal.” We surveyed all 112 park units that were expected to develop ATMPs as of July 2005, and we received a 100 percent response rate. Per our instructions, the survey was to be completed by Park Service officials with the most knowledge of the subject area. We believe that our methodology was sound and that we gathered the most authoritative data currently available from Park Service officials with first-hand knowledge of the different causes of noise at their respective park units. However, since only a small number of park units have completed sound monitoring studies as part of the development of an ATMP, we acknowledge that the responses provided were generally based on the survey respondents’ first-hand knowledge and years of experience rather than scientific data. Nevertheless, the survey results clearly indicate that there are a number of park units currently scheduled to develop ATMPs that may not need them. We agree with Interior’s comments that the survey results should not be “used to imply that low levels of air tour activity alone equates to little or no adverse effects.” 5. We agree with this comment and we have revised the discussion of this issue in the final report accordingly. 6. We agree, and note in the report, that the Park Service needs additional information in order to grant increases in air tour allocations under interim operating authority. FAA confirmed that it does have authority to collect this data from air tour operators, as it is the agency with jurisdiction over operators and could require operators to provide the necessary information if an operator wants to increase its air tours under interim operating authority. The agencies would have to evaluate the information and determine whether an environmental analysis is necessary. In addition to the individual named above, Jeffery D. Malcolm, Assistant Director, Josey Ballenger, Alisha Chugh, Richard Johnson, Cathy Hurley, Wyatt R. 
Hundrup, Judy Pagano, Carol Herrnstadt Shulman, and Monica Wolford made key contributions to this report. Also contributing to the report were Roy Judy and Steve Martin. | Primarily because of concerns that noise from air tours over national parks could impair visitors' experiences and park resources, Congress passed the National Parks Air Tour Management Act of 2000 to regulate air tours. The act requires the Federal Aviation Administration (FAA) and the National Park Service to develop air tour management plans for all parks where air tour operators apply to conduct tours. A plan may establish controls over tours, such as routes, altitudes, time of day restrictions, and/or a maximum number of flights for a given period; or ban all air tours. GAO was asked to (1) determine the status of FAA and the Park Service's implementation of the act; (2) assess how the air tour operators and national parks have been affected by implementation; and (3) identify what issues, if any, need to be addressed to improve implementation. FAA and the Park Service have taken some steps to implement the National Parks Air Tour Management Act, but almost 6 years after its passage, the required air tour management plans have not been completed. FAA issued regulations implementing the act and the agencies began developing plans at nine parks. But implementation has been slow, in part, because FAA needed to address airline security after the September 11, 2001, attacks and because the two agencies disagreed over how to comply with environmental laws. Agency officials expect that future plans will be developed more quickly since they have drafted an implementation plan to guide their development. Nevertheless, because no plans have been completed, it is unclear how some of the act's key requirements will be addressed, such as creating incentives for air tour operators to adopt quiet aircraft technology. FAA and the Park Service's slow implementation of the act has limited the ability of air tour operators to make major decisions, such as expanding or selling their businesses, while it has had little effect on the parks. For example, operators have been unable to increase their number of air tours beyond their pre-2000 levels or expand to additional parks. Also, air tour operators face uncertainty about whether they can legally transfer their authority to conduct air tours. In contrast, the implementation of the act has so far had little effect on the 112 national parks we surveyed. Most of the parks responded that they had not experienced any positive or negative effect of the implementation of the act, or that they were uncertain or did not know the extent of the effect. Nonetheless, 47 percent responded that their park could benefit by having a plan to mitigate or prevent potential adverse impacts on park resources, visitor experiences, and air safety. GAO identified four key issues that need to be addressed to improve implementation of the act. Lack of flexibility for determining which parks should develop plans: Not all parks required to develop a plan may need one because they have few air tours or are more affected by other types of flights. Yet, the act does not provide the agencies with any flexibility to exclude some parks. Absence of Park Service funding for its share of plan development costs: The Park Service has not requested nor received funding for its share of the costs of developing plans. 
Limited ability to verify and enforce the number of air tours: Air tour operators are not required to report the number of tours they conduct. As a result, the agencies are limited in their ability to enforce the act. Based on information provided by operators, GAO found some operators had inappropriately exceeded their number of authorized tours. FAA's inadequate guidance concerning the act's safety requirements: FAA has not instructed its district offices or air tour operators on how to interpret the act's requirement that operators meet a specified level of safety certification.
Gangs, which operate in all 50 states and the District of Columbia, vary in size, ethnic composition, membership, and organizational structure. Gangs range from groups that have regional or national status and operate in a number of states throughout the country to local groups that are associated with a particular neighborhood or street. Most gangs nationwide are local neighborhood or street groups. Assessments by DOJ and other organizations have identified gang crime problems nationwide from large cities to rural communities. In many communities, criminal gangs commit as much as 80 percent of crime, according to law enforcement officials. See appendix I for information on the extent to which communities of various sizes experience gang crime problems, recent gang crime trends observed, descriptions of major national-level gang organizations, and impacts of national and local street gangs in the localities we visited. Gang crime problems are not unique to the United States. In other countries, urban youth gangs often operate in association with adult organized-crime organizations. For example, gang activity has been reported in Great Britain, Germany, the Netherlands, France, Africa, and Asia, as well as in Russia and the countries of eastern and central Europe following the dissolution of the Soviet Union. Some of these international gangs can be linked to gangs in the United States. We have ongoing work examining efforts to combat gangs with transnational connections and plan to report on this issue later this year. DOJ and DHS are the departments with key roles in federal enforcement efforts to investigate and prosecute gang-related crimes (see fig. 1). DOJ's involvement is primarily through its Criminal Division; the 93 U.S. Attorneys in 94 judicial districts across the nation that operate with administrative and operational support from the Executive Office of U.S. Attorneys (EOUSA); and three law enforcement agencies: the Federal Bureau of Investigation (FBI); the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF); and the Drug Enforcement Administration (DEA). In addition, the U.S. Marshals Service (USMS) fugitive task force program assists law enforcement agencies in apprehending dangerous fugitives, including gang members, who are not arrested after having been criminally charged. U.S. Immigration and Customs Enforcement (ICE) is the DHS agency with the largest role in investigating gang-related crimes, and U.S. Customs and Border Protection (CBP) is responsible for interdicting gang-related illicit activities that cross United States borders. Within DOJ, the Criminal Division, along with U.S. Attorneys, is charged with enforcing most federal criminal laws. The Criminal Division can prosecute a wide range of criminal matters, including many of those involving gangs and gang members. Criminal Division attorneys are to prosecute nationally significant cases and formulate and implement criminal enforcement policy, among other responsibilities. The Criminal Division oversees the investigation and prosecution of gang-related crimes under five Deputy Assistant Attorneys General. Each one supervises three or four sections dealing with specific violations of criminal law. The work of each of the sections is driven by the type of criminal matter under investigation (e.g., organized crime and racketeering, or narcotics and dangerous drugs), not whether gangs and/or gang members are involved in the crimes.
As a result, according to a DOJ Criminal Division official, all five Criminal Division deputy assistant attorneys general oversee sections that deal with crimes and criminal matters that involve gangs and gang members. The Criminal Division has also established the Gang Unit to help coordinate multi-jurisdictional gang investigations and prosecutions. The 93 U.S. Attorneys prosecute the majority of criminal cases handled by DOJ, as well as civil litigation. In 2005, as part of a DOJ initiative to combat gangs, the Attorney General instructed each U.S. Attorney to name an anti-gang coordinator to work in consultation with federal, state, and local agencies to develop a comprehensive anti-gang strategy focusing on prevention and enforcement. EOUSA provides general executive assistance and supervision to U.S. Attorneys' Offices (USAOs) and has a national gang coordinator who helps act as a liaison between the USAOs and other DOJ components involved in gang prosecution efforts. EOUSA provides operational support for information technology, training, and other functions, and prepares an annual statistical report on the activities of the U.S. Attorneys, among other functions. The FBI has jurisdiction to investigate a broad range of violations of federal law, including organized crime and violent crime that can involve gangs and gang members. ATF, as part of its mission, conducts investigations to reduce violent crimes involving firearms and explosives, which frequently involve gangs and gang members. DEA is the nation's single-mission drug enforcement agency with responsibility for enforcing controlled substance laws and regulations. Some DEA priority investigations target gangs involved in narcotics trafficking. USMS is the enforcement arm of the federal courts with responsibilities including apprehending fugitives from federal justice, protecting federal judges, transporting federal prisoners, operating the witness security program, and seizing property acquired by criminals through illegal activities. The USMS fugitive task force program and other initiatives target fugitive gang members who have been criminally charged. Within DHS, ICE, the largest investigative arm of the department, has responsibility for investigating a range of issues that may threaten national security, including financial and immigration fraud violations, as well as for targeting street gangs with connections to international criminal activities. Within ICE's Office of Investigations, the National Gang Unit manages and coordinates national efforts to combat the growth and proliferation of transnational criminal street gangs. Gang members who have prior criminal convictions, are involved in crimes with a nexus to the border, or are foreign-born and are in the United States illegally may be subject to ICE's dual criminal and administrative authorities, which are used to disrupt and dismantle transnational gang activities through criminal prosecutions and deportation. In addition, CBP, the DHS component that protects U.S. borders against terrorism, illegal immigration, and drug smuggling, among other threats, participates in a national gang intelligence group. DOJ supports community gang prevention, intervention, and enforcement activities through grant funding of demonstration projects managed by its Office of Justice Programs (OJP).
The mission of OJP is to increase public safety and improve the fair administration of justice across America through innovative leadership and programs that include demonstration programs to assist state and local governments to reduce crime, as well as crime and criminal justice research and evaluation, training, and technical assistance. OJP's Office of Juvenile Justice and Delinquency Prevention (OJJDP) and Bureau of Justice Assistance (BJA) administer the demonstration programs we have identified as being directly focused on anti-gang efforts, while its National Institute of Justice (NIJ) is responsible for evaluating some program results and generating research-based knowledge to help inform policy, develop strategies, and deploy resources. DOJ and DHS component agencies have different roles and responsibilities for combating gang crime and focus on different aspects of gang enforcement. At the headquarters level, DOJ and FBI have established several coordinating entities to share information on gang-related investigations and intelligence across agency boundaries. Nevertheless, some of these entities have not sufficiently differentiated their roles and responsibilities, thus limiting their ability to coordinate anti-gang efforts. In addition, ICE has not yet fully participated in some coordinating group functions. At the field division level, federal law enforcement agencies have established entities and strategies to help coordinate anti-gang efforts by, for example, establishing anti-gang task forces and case "deconfliction" mechanisms, and developing district-wide anti-gang strategies through the USAOs. In localities we visited, officials from federal, state, and local law enforcement offices cited benefits to coordinating through task forces. Gang enforcement is primarily the responsibility of state and local law enforcement agencies that address community-based violence and crime on a daily basis. At the federal level, no one department or agency has sole responsibility for gang enforcement. Various DOJ and DHS components focus on different aspects of gang enforcement as part of their broader missions. Within DOJ, the FBI focuses primarily on investigating violent, multi-jurisdictional gangs whose activities constitute criminal enterprises by identifying, investigating, and prosecuting the leadership and key members of violent gangs; disrupting or dismantling gangs' criminal enterprise; and recovering illegal assets through seizures and forfeitures. ATF primarily focuses on efforts to reduce the occurrence of firearms, arson, and explosives-related violent crime, including such crimes committed by gang members. The primary focus of DEA's enforcement efforts is on the links between gangs and drug trafficking. USMS's role is to apprehend gang members who have been criminally charged but not arrested. Within DHS, ICE's primary focus for gang enforcement is to disrupt and dismantle violent transnational criminal street gangs by investigating cross-border smuggling and financial and fraud-related crimes. ICE uses its dual criminal and administrative authorities to address gang crime with the twofold approach of criminal prosecution and deportation. For these federal law enforcement agencies, enforcement of gang-related crimes competes for resources with agencies' other program areas, such as counterterrorism, illegal-drug and firearms trafficking, white-collar crime, and public corruption.
For example, the FBI's investigation of gang crime falls under two FBI priorities: major thefts and violent crime, and combating transnational and national criminal organizations and enterprises, the FBI's tenth- and sixth-ranked priorities, respectively. FBI, ICE, and DEA are the three federal law enforcement agencies that specifically track agent time dedicated to gang enforcement efforts. These federal agencies have dedicated a relatively small portion of overall agent resources to anti-gang efforts, but these resource levels have increased since fiscal year 2003. For example, according to FBI data, agent full-time equivalents (FTEs) spent on anti-gang efforts ranged from a low of 4.1 percent of total agent FTEs in fiscal year 2003 to a high of 7.7 percent of total agent FTEs in fiscal year 2008. Figure 2 summarizes FBI FTEs on anti-gang efforts and all investigative activities from fiscal year 2003 to fiscal year 2008. As shown in figure 3, DEA's agent FTEs on anti-gang efforts are a relatively small portion of overall agent FTEs, but have increased from 161 in fiscal year 2003 to 225 in fiscal year 2008. The chief of the Gang Unit for DOJ's Criminal Division said that federal law enforcement agencies are spending more time on gang-related investigations now than they did several years ago because (1) agencies have been able to hire additional agents for counterterrorism investigations, so agents who were diverted from criminal investigations to counterterrorism immediately after the terrorist attacks of September 11, 2001, are returning to criminal investigations, and (2) the prior administration and Congress had an interest in expanding the federal role in addressing violent crime and supporting anti-gang efforts, so federal law enforcement agencies responded by placing an increased emphasis in these areas. At this point, it is too early to tell what impact, if any, these additional resources will have on gang enforcement efforts. Since 2004, at the headquarters level, DOJ has established several entities to coordinate and share information on gangs and gang enforcement efforts across department and agency boundaries. As shown in table 1, these entities include the Gang Unit; the National Gang Targeting, Enforcement, and Coordination Center (GangTECC); the National Gang Intelligence Center (NGIC); the Anti-Gang Coordination Committee; and the Mara Salvatrucha (MS-13) National Gang Task Force. These entities have different roles and responsibilities, but in general, they serve as mechanisms for deconflicting cases, providing law enforcement agencies with information on gangs and gang activities, and coordinating participating agencies' strategies and task forces. These entities provide DOJ and DHS with a means to operate across agency boundaries. For example, according to the Office of the Deputy Attorney General, GangTECC has provided an avenue through which participating agencies share information to help facilitate communication among participating agencies at the headquarters level. NGIC has worked to provide law enforcement agencies with information and analysis of federal, state, and local law enforcement intelligence focusing on gangs that pose a significant threat to U.S. communities, including information on the growth, migration, criminal activity, and structure of gangs. NGIC has helped to facilitate information sharing on gang-related issues by, for example, issuing intelligence bulletins to law enforcement agencies.
Our work on effective interagency collaboration has shown that when multiple agencies are working to address aspects of the same problem, there is a risk that overlap or fragmentation among programs can waste scarce funds, confuse and frustrate program customers or stakeholders, and limit overall program effectiveness. Collaborating agencies should work together to define and agree on their respective roles and responsibilities and can use a number of possible mechanisms, such as memoranda of understanding, to clarify who will do what, organize joint and individual efforts, and facilitate information sharing. The headquarters-level anti-gang entities have defined their individual roles and responsibilities. Although some overlaps in mission may be appropriate to help reduce gaps, these entities have not yet clearly identified their differentiated roles and responsibilities, resulting in possible gaps or unnecessary overlap in agencies' coordination and sharing of information on gang enforcement efforts. Examples include: The purpose of GangTECC is to allow participating agencies, including ICE, to access and use each respective agency's gang intelligence; to allow immediate access to operational information in a collocated environment; and to provide a national deconfliction center for gang operations. Although the roles and participation by DOJ and its component agencies in GangTECC were specified by the Deputy Attorney General in a July 2006 memorandum establishing GangTECC, as well as in the GangTECC Concept of Operations, GangTECC and ICE have not yet documented ICE's participation in the center. ICE's participation has varied since GangTECC's inception. According to the head of ICE's National Gang Unit, in the past ICE's representative to GangTECC was engaged with other responsibilities at ICE headquarters, which periodically impacted the amount of time that the representative spent at GangTECC. However, the head of the National Gang Unit said that ICE's representative is now assigned to GangTECC on a full-time basis. ICE officials said that they are willing to work with other GangTECC officials to develop a memorandum of understanding that documents ICE's role and participation in the center but had not yet done so at the conclusion of our audit work. Establishment of task forces at the field office level has not always been fully coordinated with ICE. Specifically, according to Anti-Gang Coordination Committee guidance, concurrence for each new gang or violent crime task force at the field office level is to be obtained from representatives of the FBI, ATF, DEA, USMS, ICE, and the local USAO, as well as local or state police departments. The Anti-Gang Coordination Committee gives final approval to DOJ law enforcement agencies for establishing new anti-gang task forces in field locations. According to the Chief of the Gang Unit, this process for establishing new task forces in field locations helps to reduce task force overlap and duplication of effort. However, ICE did not have the opportunity to provide its concurrence for the creation of all recently approved task forces, making it difficult for the Anti-Gang Coordination Committee to ensure that there are no unnecessary overlaps in the creation or mission of task forces. Our review of the approval process for eight task forces authorized by the Anti-Gang Coordination Committee from January 2008 through June 2008 found that ICE's concurrence was not obtained in three instances.
Moreover, ICE is not represented on the Task Force Subcommittee of the Anti-Gang Coordination Committee, which, on behalf of the committee, reviews and provides recommendations concerning new task force applications. DOJ officials said that no Memorandum of Understanding or other document outlines ICE's participation in the process for approving task forces. GangTECC and the MS-13 National Gang Task Force have overlapping missions and responsibilities for coordination and deconfliction of multi-jurisdictional investigations involving the MS-13 and 18th Street gangs. The two entities have these overlaps in part because the MS-13 Task Force already existed when GangTECC was established in 2006 and was not dismantled or folded into GangTECC at that time. The two entities differ in that GangTECC has participants from ATF, DEA, ICE, FBI, USMS, and other DOJ and DHS components and has responsibility for coordinating multi-jurisdictional investigations of all gangs except FBI-led investigations involving the MS-13 and 18th Street gangs. The MS-13 National Gang Task Force, on the other hand, has only FBI participants and is responsible for coordinating FBI's multi-jurisdictional investigations involving MS-13 and 18th Street gangs. As a result, both entities coordinate some multi-jurisdictional MS-13 and 18th Street gang investigations, and the federal government risks unnecessary expenditures to fund two entities when a single group could be more efficient. The GangTECC and MS-13 Task Force Directors acknowledged that there is overlap between the missions and responsibilities of the two entities, and the chief of the DOJ Criminal Division's Gang Unit also noted that the two entities have overlapping jurisdictions and no formal coordination mechanisms. The Directors stated that the two entities do share information about the gangs and were co-located to encourage that interaction. Moreover, the director of GangTECC said he had invited representatives of the MS-13 National Gang Task Force to participate in meetings. As the invitation had been extended just prior to the conclusion of our audit work, we were not able to assess the level of participation. The Directors also said that while there is mission overlap, it has not jeopardized investigations or law enforcement operations. Nevertheless, the rationale for why two separate entities are needed is unclear. For example, the Criminal Division chief said that if DOJ were starting from scratch in creating a structure for coordinating federal anti-gang investigations, the department would not have the structure that currently exists because of the potential for this overlap. While the overlap may not have interfered with investigations or operations to date, it is not clear that this is the most efficient and effective use of federal resources. Articulating and differentiating among roles, responsibilities, and missions of headquarters-level anti-gang entities and ensuring ICE's full participation in authorizing anti-gang task forces would help to identify gaps or overlaps among the entities and participating agencies and help to increase the understanding of federal, state, and local law enforcement agencies of each entity's mission and goals. In addition, such action would strengthen these headquarters-level coordination efforts to help to ensure that they are not unnecessarily expending resources on overlapping missions.
At the field level, federal law enforcement agencies primarily conduct and coordinate their gang enforcement efforts through task forces. Examples include: The FBI's Violent Gang Safe Streets Task Forces were established in 1992 to serve as long-term and coordinated teams of federal, state, and local law enforcement officers and prosecutors. These task forces focus on disrupting and dismantling the most violent and criminally active gang threats in the United States. According to the FBI, as of April 2009, 144 Safe Streets Task Forces were operating in locations across the country. ATF's Violent Crime Impact Teams were established in 2004 through partnerships with state and local agencies to reduce firearms-related violent crime, including violent gang crime, in small geographic areas experiencing an increase in violent crime. As of April 2009, 31 Violent Crime Impact Teams were operating in locations across the country, according to ATF. DEA's Mobile Enforcement Team (MET) program was established in 1995 to address the spread of drug trafficking and associated violent crime in urban and rural areas. Due to budgetary constraints, the MET program was temporarily suspended in June 2007; however, in January 2008, Congress directed DEA to use appropriated funds to continue the MET program. At that time, DEA also made it a priority for MET investigations to target the drug trafficking activities of criminal street gangs and the criminal organizations that supply them. Teams of eight agents each operate in 10 DEA field divisions nationwide, according to DEA. USMS' fugitive task force program and other initiatives such as Operation FALCON (Federal and Local Cops Organized Nationally) target fugitive gang members. Additionally, USMS coordinates a "Most Wanted Gang Members" list through GangTECC. ICE works with state and local law enforcement agencies in conducting its gang enforcement activities under its anti-gang initiative called Operation Community Shield. Under this initiative, investigations focus on transnational street gangs, gangs whose members are subject to ICE's immigration and customs authorities because the members are foreign-born and/or in the country illegally or have been involved in crimes with a nexus to the U.S. borders (e.g., narcotics trafficking and human trafficking). ICE investigations also focus on gangs operating in the United States and abroad as complex organized criminal organizations. At the field division level, many officials from federal, state, and local law enforcement offices cited benefits to coordinating federal anti-gang efforts primarily through task forces and were generally satisfied that the task force approach resulted in collaboration and information sharing among the various law enforcement entities. These officials provided examples of how these task forces provide avenues through which federal, state, and local agencies can directly share resources and partner in conducting gang investigations, as noted below: Ten of the 20 local police chiefs or supervisors of gang units that we interviewed said that federally led task forces have resources, such as funds to pay informants, conduct wiretaps, and purchase vehicles for surveillance and undercover operations, that state and local law enforcement agencies often do not have, or they noted that local law enforcement officers assigned to federally led task forces have better access to technology and new investigative techniques than officers not assigned to task forces.
Use of these investigative tools and equipment allows state and local agencies to work with federal agencies in conducting investigations that target gangs as criminal enterprises, the types of cases that state and local agencies generally would not be able to conduct in the absence of federal resources and assistance. Officials also provided examples of how task forces provide opportunities for direct information and intelligence sharing among federal, state, and local law enforcement agencies, as noted below: Four of the 20 local law enforcement officials said that one of the primary benefits of their agencies' participation in federally led task forces is access to information and intelligence on gangs. State and local officers assigned to federally led task forces benefit by learning new investigative techniques that they, in turn, can share with other local law enforcement officers. In 19 of the 34 federal law enforcement field division offices we visited, supervisory agents noted that their task forces also benefited from state and local police officers' intimate knowledge of the gang problems in their local communities. In addition to these benefits, several officials identified a challenge to the task force structure that they work to overcome. In some cases, federal, state, and local agencies that participate in task forces may have differing priorities and interests for gang enforcement activities. For example, supervisors at two FBI field divisions said that the FBI focuses on long-term gang investigations designed to eliminate entire gangs. In contrast, they said that state and local law enforcement agencies tend to focus on efforts to help reduce gang crime and violence in the short term and look for short-term results for their communities. Consequently, the officials said that state and local law enforcement agencies are sometimes reluctant to dedicate resources to support the long-term investigations, but that these issues are worked through jointly by federal and local agencies involved in the task force. In addition to task forces, interviewees in the localities we visited described other mechanisms or tools for deconflicting law enforcement actions, sharing information, and coordinating gang enforcement activities with one another. In areas of the country identified by the White House Office of National Drug Control Policy (ONDCP) as high intensity drug trafficking areas (HIDTA), including New York City, Chicago, and Los Angeles, the HIDTAs monitored law enforcement activities, including anti-gang operations, to deconflict and coordinate across law enforcement agencies. In Richmond, Virginia, federal, state, and local agencies met regularly to deconflict cases and discuss anti-gang initiatives through a Cooperative Violence Reduction Partnership, which was created and chaired by the Richmond Chief of Police. Similarly, in Tampa, Florida, the Hillsborough County Sheriff's Department and the Tampa Police Department co-chair a Multi-Area Gang Task Force composed of representatives of 60 law enforcement agencies, including local, state, and county police departments, the FBI, ATF, and ICE. The task force participants share intelligence at monthly meetings and support one another in major anti-gang operations. The USAOs also have responsibilities for coordinating anti-gang efforts in their districts.
For example, each of the 15 USAOs we visited had complied with DOJ requirements to appoint an Anti-Gang Coordinator to help formulate the anti-gang strategy for its district. The Anti-Gang Coordinators implement training opportunities for prosecutors and law enforcement agents and officers, act as liaisons for the USAO on gang-related cases with prosecutors from other offices as well as law enforcement officers and agents, and are proactively involved in developing strategies for investigating and prosecuting gang members with violent criminal behavior. Each USAO we visited had also completed a districtwide anti-gang strategy, as required by the Attorney General in 2005. According to guidance from the Attorney General, USAOs were to consult with federal, state, and local law enforcement; social service organizations; and community and faith-based groups in their district to develop the strategies. The strategies included a description of the gang problem in each district; a description of how agencies within the district were responding or planned to respond to the problem; a description of whether the district has a specific gang unit or the resources being used to investigate and prosecute gangs; a description of the roles played by state and local law enforcement agencies in combating gangs; and any suggestions on how DOJ could more effectively address the gang problem on a local or national level. Anti-Gang Coordinators are required to prepare annual reports on the district's anti-gang strategy, which are submitted to EOUSA and then provided to the Deputy Attorney General, who informs the Attorney General of anti-gang activities throughout the nation. The reports are used to identify best practices, which are discussed at national conferences and informally among prosecutors. Federal agencies have developed and used measures to assess their gang enforcement efforts, but they lack a common or shared definition for "gang," hindering their efforts to measure and report on gangs, gang crime, and enforcement activities. While DOJ has emphasized strategies for combating gangs as a part of its strategic objective to reduce violent crime, the department lacks a departmentwide performance measure for its anti-gang efforts. DOJ and DHS law enforcement agencies measure their gang crime enforcement efforts by counting outputs such as gang activities disrupted, arrests made, and enforcement activities conducted. However, U.S. Attorneys have underreported their efforts in prosecuting gang-related cases, as well as the amount of time spent working on gang-related cases. EOUSA has taken steps to improve reporting on gang enforcement efforts. For example, as a result of our review and in following up on its 2006 guidance on reporting case and time management information on "gang-related" activities, the EOUSA Director issued guidance to USAOs in February 2009 noting the underreporting of gang cases and gang-related work time, and reinforcing the importance of tracking anti-gang activities. Gangs vary in size, ethnic composition, membership, and organizational structure, which makes it challenging to develop a uniform definition of "gang." Federal law enforcement agencies have developed and used different working definitions of a "gang" and other associated terms, such as "gang-related." However, these agencies lack a common or shared definition for "gang" and related terms, hindering federal agencies' efforts to accurately measure and report on gangs, gang crime, and enforcement activities.
Our prior work on performance management and measurement practices for entities involved in implementing crosscutting programs has shown that establishing common definitions can help to ensure that data used for common purposes or for assessing performance are, among other things, consistently defined and interpreted. For example, we noted that a broadly accepted definition of "homeland security" did not exist and that some officials believed it was essential that the concept and related terms be defined, particularly because homeland security initiatives are crosscutting, and a clear definition promotes a common understanding of operational plans and requirements and can help avoid duplication of effort and gaps in coverage. Common definitions promote more effective agency and intergovernmental operations and permit more accurate monitoring of homeland security expenditures at all levels of government. With respect to the definition of "gang," DOJ and its components have discussed needs and possibilities for developing a common or shared definition for gangs in terms of numbers of members and organizational characteristics, but have not yet reached consensus on such a shared definition. DOJ developed a working definition of a "gang" as a group or association of three or more persons who may have a common identifying sign, symbol, or name and who are involved in criminal activity which creates an atmosphere of fear and intimidation. The DOJ definition is used by its component agencies such as ATF and FBI. ICE's working definition of a gang also specifies that three or more persons must be involved in criminal activity; however, ICE's definition requires an ongoing pattern of criminal activity committed on two or more separate occasions. These working definitions are also distinct from a provision of federal law, which, for specified purposes, defines a criminal street gang as "an ongoing group, club, organization, or association of five or more persons that has as one of its primary purposes the commission of one or more of the described criminal offenses; the members of which engage, or have engaged within the past 5 years, in a continuing series of described offenses; and the activities of which affect interstate or foreign commerce." According to the FBI's National Gang Strategy, a universal definition for a "gang" would facilitate intelligence collection and sharing, target selection, prosecution, and overall program management. The DOJ Criminal Division Gang Unit Chief recognized that having a standard definition of "gang" across agencies and departments would result in better statistics on how agencies are performing on gang-related criminal investigations. Other DOJ components also identified negative impacts resulting from the absence of a shared definition. For example, in its 2009 National Gang Threat Assessment, DOJ reported that one of the greatest impediments to the collection of accurate gang-related data was the lack of a national uniform definition of a gang used by all federal, state, and local law enforcement agencies. EOUSA officials also said that lack of consistent definitions of "gang member" and "gang-related crime" contributed to underreporting of gang-related cases by USAOs; therefore, EOUSA may not have complete data on its gang-related cases. Given the lack of a common definition, federal agencies do not have consistent and comprehensive data on the scope of the gang problem and the resources allocated to anti-gang efforts.
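To make concrete why differing definitions produce inconsistent statistics, the sketch below applies simplified paraphrases of the three definitions described above (the DOJ and ICE working definitions and the statutory definition) to the same hypothetical set of groups and counts how many qualify under each. The group records, numeric thresholds, and function names are ours for illustration only; they are not drawn from any agency system, and the predicates deliberately omit elements of the actual definitions, such as the common identifying sign or symbol, the specified offenses, and the interstate commerce nexus.

```python
# Illustrative sketch only: hypothetical data and loose paraphrases of the
# definitions discussed above, not agency systems or official criteria.
from dataclasses import dataclass

@dataclass
class Group:
    name: str
    members: int              # number of persons in the group
    criminal_occasions: int   # separate occasions of criminal activity

def doj_working_definition(g: Group) -> bool:
    # Paraphrase: three or more persons involved in criminal activity.
    return g.members >= 3 and g.criminal_occasions >= 1

def ice_working_definition(g: Group) -> bool:
    # Paraphrase: three or more persons and an ongoing pattern of criminal
    # activity committed on two or more separate occasions.
    return g.members >= 3 and g.criminal_occasions >= 2

def statutory_definition(g: Group) -> bool:
    # Paraphrase: five or more persons engaged in a continuing series of
    # offenses (other statutory elements are omitted here).
    return g.members >= 5 and g.criminal_occasions >= 2

groups = [
    Group("Group A", members=4, criminal_occasions=1),
    Group("Group B", members=3, criminal_occasions=3),
    Group("Group C", members=6, criminal_occasions=5),
]

for label, test in [("DOJ working definition", doj_working_definition),
                    ("ICE working definition", ice_working_definition),
                    ("Statutory definition", statutory_definition)]:
    count = sum(test(g) for g in groups)
    print(f"{label}: {count} of {len(groups)} groups counted as gangs")
# Output: 3, 2, and 1, respectively -- the same groups yield three different totals.
```

A tally like this is only meant to illustrate the report's point: counts of gangs and gang-related cases depend heavily on which definition is applied, so agencies using different definitions cannot produce comparable data.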
According to DOJ and DHS agencies, lack of a shared definition of "gang" and related terms stems, in part, from headquarters-level coordination entities not attempting to reach consensus on how to use the term, and DOJ's desire to provide agencies with flexibility in defining gangs. Agency officials said lack of a common definition did not adversely affect law enforcement activity. According to the Chief of the Gang Unit, agencies have not attempted to reach a consensus on a shared definition of "gang" because, while consistent use of the term would improve the quality of information available on federal efforts to combat gang violence, it would not make a difference in the cases investigated and prosecuted by federal agencies. For example, this official noted that a consistent definition of "gang" has no impact on U.S. Attorneys' decisions to prosecute cases nor on the charges brought against gang-related defendants. Although lack of a common definition for "gang" may not negatively affect gang investigations and prosecutions, the absence of a common or shared definition for "gang" and related terms makes it difficult for federal agencies to completely and accurately report on gang-related data. At the department level, DOJ lacks a performance measure for anti-gang efforts. Congress enacted the Government Performance and Results Act of 1993 (GPRA) to have agencies focus on the performance and results of programs, rather than on program resources and activities. The principles of the act include establishing measurable goals and related measures, developing strategies for achieving results, and identifying the resources that will be required to achieve the goals. GPRA requires federal agencies to develop strategic plans and performance goals and to identify resources needed to achieve them, as well as to develop performance measures to use in assessing the relevant outputs, service levels, and outcomes of each program activity. The act does not require agencies to use these principles for individual programs, but our related work and the experience of leading organizations have shown that the principles are the basic underpinning for performance-based management—a means to strengthen program performance. Performance measures help federal agencies to assess progress made on anti-gang efforts over time and provide decision makers with key data to facilitate the resource allocation process. One of the three strategic goals in DOJ's fiscal year 2007 to 2012 strategic plan is the goal to "prevent crime, enforce federal laws and represent the rights and interests of the American people." A strategic objective under this goal is to "reduce the threat, incidence, and prevalence of violent crime." As shown in figure 4, associated with this strategic objective are various strategies, four of which are directly related to anti-gang efforts. While DOJ has outlined a number of departmentwide performance measures under the strategic goal, none relate specifically to gangs. DOJ undertook a process to identify long-term, measurable goals (key indicators) that would show, at a high level, progress toward meeting the department's strategic goals and objectives. Such measures are departmentwide in nature as they represent priority areas for DOJ, and are reflected in DOJ's strategic plan, as well as in other GPRA-related documents such as the performance and accountability report.
According to DOJ, anti-gang efforts are folded into several of the performance measures, including disrupting and dismantling drug trafficking organizations and reducing the supply of drugs available for consumption in the United States. Given that efforts to address gangs have been a major part of DOJ's overall approach to combating violent crime, the lack of a departmentwide performance measure or measures focused specifically on DOJ's anti-gang efforts makes it difficult for the department and Congress to assess the effectiveness of the department's overall anti-gang effort. It can be difficult for law enforcement agencies to measure outcomes or results of their law enforcement efforts, including anti-gang efforts. Trying to isolate the effects of federal law enforcement efforts from other factors that affect outcomes but over which DOJ has little or no control presents a formidable challenge because many factors contribute to the rise and fall of crime rates, including federal, state, local, and tribal law enforcement activities and sociological, economic, and other factors. DOJ and DHS law enforcement agencies have established output measures, such as numbers of arrests and convictions, for assessing their gang crime enforcement efforts. Outputs provide status information about an initiative or program in terms of completing an action in a specified time frame. For example, the FBI uses disruptions and dismantlements as its primary gang enforcement measures and also reports on gang-related convictions. Among other things, ATF collects and reports information on gang-related convictions, and DEA and ICE collect and report information on gang-related arrests. USAOs collect information on gang-related cases filed and gang-related defendants. Since fiscal year 2003, federal law enforcement agencies' gang-related measures have generally increased. DOJ and DHS officials attributed these increases to the allocation of additional resources for anti-gang efforts over the past few years. As shown in figure 5, the number of FBI gang-related disruptions has increased from 166 in fiscal year 2003 to 716 in fiscal year 2008. As shown in figure 6, the number of FBI gang-related dismantlements has remained fairly constant over the 6 years from fiscal year 2003 to fiscal year 2008, ranging from a high of 67 dismantlements reported in fiscal year 2006 to a low of 40 dismantlements reported in fiscal year 2004. In addition, the FBI reports on gang-related convictions. From fiscal year 2003 through 2008, the FBI reported gang-related convictions that ranged from a high of 2,762 in fiscal year 2008 to a low of 1,690 in fiscal year 2005. Among other gang-related measures, ATF also reports the number of gang-related convictions by fiscal year. As shown in figure 7, since fiscal year 2003, the number of convictions has generally increased each year, and the number of gang-related convictions in fiscal year 2008 was about 5 times the number of such convictions in fiscal year 2003. As shown in figure 8, DEA's reported number of gang-related arrests also increased slightly, from 1,823 in fiscal year 2003 to 2,038 in fiscal year 2008, but was at its highest level over the 6-year period in fiscal year 2006. Among other gang-related measures, ICE reports on the number of gang-related criminal and administrative arrests by fiscal year.
As shown in figure 9, the number of criminal and administrative gang-related arrests made by ICE has increased since fiscal year 2006, the first full fiscal year that it compiled this information. According to EOUSA officials, USAOs have underreported the number of gang-related cases their offices handle and the amount of time they spend working on these gang-related cases. For fiscal year 2008, EOUSA data showed 536 cases filed in which a gang member had a participating role, a number that officials said they believe underreports the extent to which USAOs were prosecuting gang-related crimes. In 2006, EOUSA issued guidance directing USAO personnel to record information in the National Legal Information Office Network System (LIONS) and U.S. Attorney-5 (USA-5) systems on the numbers of gang-related cases and matters and the time spent working on gang-related cases and matters, to assist in tracking resources and outputs for gang enforcement efforts. LIONS is the centralized computer database used by USAOs and managed by EOUSA to prepare annual statistical reports on the activities of the U.S. Attorneys by types of cases opened, pending, and closed. USAO personnel use the USA-5 system for time management, and EOUSA uses the system to analyze and help manage assignment of resources to priority crime areas such as counterterrorism, narcotics, and organized crime. EOUSA officials said that improvements to the data collection are needed. A 2008 DOJ Office of Inspector General report found that caseload and time management concerns contributed to reliance by EOUSA on incomplete and inaccurate data to determine resource needs, allocate positions, and respond to inquiries from Congress and other interested parties. EOUSA officials identified three key challenges that impact the reliability of all case and time management information. First, because every attorney and support person in USAOs enters data into the information systems, the potential for errors and omissions is great. Second, the systems have become more complex to use as additional codes are created to record information on cases and time spent on various activities to respond to congressional interest, DOJ priorities, and audits. Third, attorneys historically have not viewed data entry as a priority, so they have not always been diligent about ensuring that it is done correctly. With regard to collecting data specifically on gang-related cases, EOUSA officials identified other specific challenges. For example, extra data entry steps are required to enter information on gangs into LIONS and USA-5 beyond those steps required for inputting general case information into the systems. Moreover, attorneys may not determine that a case is gang-related until after initial information on the case is entered into LIONS, and they may then not go back into LIONS and revise the initial information to show that defendants are gang-related. In addition, EOUSA officials noted that attorneys do not have a common definition for what constitutes a "gang" and "gang-related crime" that would allow for consistent reporting. EOUSA has taken steps to improve reporting on gang enforcement efforts.
As a result of our review and in following up on its 2006 guidance on reporting case and time management information on "gang-related" activities, the EOUSA Director issued guidance to USAOs in February 2009 noting the underreporting of gang cases and gang-related work time in LIONS and USA-5, respectively, and reinforcing the importance of providing accurate information on anti-gang activities in these systems. In addition, EOUSA officials said that they continue to provide training to USAO personnel on use of the information systems and the importance of reporting complete and accurate information. Peer reviews conducted at USAOs every 3 years also assess, among other operations, the systems in place for entering information into LIONS and USA-5. This February 2009 guidance and training for USAOs are positive steps to help improve USAOs' collection and reporting of data on gang-related cases. GAO's Standards for Internal Control in the Federal Government state that internal controls should generally be designed to assure that ongoing monitoring occurs in the course of normal operations. Given that USAOs have not consistently and accurately entered data on gang-related cases into their case and time management systems as required by EOUSA's 2006 guidance, in the absence of periodic monitoring of USAOs' gang-related data, EOUSA cannot be certain that USAOs have followed the guidance and accurately recorded gang-related data. DOJ anti-gang grants involve not only law enforcement efforts, but also efforts focused on gang prevention, intervention, and re-entry support for former gang members who are released from prison. DOJ, through OJJDP, has provided grant funding to localities across the nation under the Gang-Free Schools and Communities Program, the Gang Reduction Program, and the Gang Prevention Coordination Assistance Program, primarily to test models for communities to follow in implementing approaches for addressing gang problems. DOJ, through BJA, also provides grant funding under the Comprehensive Anti-Gang Initiative, which is the largest current anti-gang program. Communities that received grants from OJJDP and BJA had flexibility in determining how to allocate and use grant funding and used the funding in support of different anti-gang approaches and programs. Grant programs for which DOJ-sponsored evaluations were completed reported mixed results in achieving reductions in gang crime: grant recipients benefited, but there was little evidence that the programs effectively reduced youth gang crime. Sustainability of grant programs after federal funding ended has also been a concern. DOJ, the department with responsibility for administering anti-gang grants, has pursued a strategy to assist communities in combating gang crime that involves not only law enforcement efforts, but also efforts to prevent young people from joining gangs, intervene and provide alternatives to gang membership for youth who are gang-affiliated, and offer support through re-entry activities for former gang members who are released from prison and returning to their communities. In fiscal year 2008, DOJ, through OJJDP and BJA, reported providing $14.9 million for three grant programs specifically directed to combating gang activity, and demonstration projects were still spending OJJDP funds for two other anti-gang programs. Three of the current grant programs—Gang-Free Schools and Communities, Gang Reduction, and Gang Prevention Coordination Assistance—are administered by OJJDP.
The other program, the Comprehensive Anti-Gang Initiative, is administered by BJA. In addition to allocating funding under these discretionary grant programs, DOJ reported that it allocated about $8.1 million in fiscal year 2008 for specific anti-gang programs, as directed by Congress. Grantees included cities, counties, law enforcement agencies, and private organizations in locations across the United States. Officials of OJP noted many similarities in the grant initiatives funded by OJJDP and BJA. Most importantly, officials said, both models emphasize coordination and collaboration among law enforcement and social service agencies at the federal, state, and local levels, as well as community and faith-based groups that are involved in anti-gang efforts. However, the officials also noted differences between the OJJDP and BJA grant programs. For example, BJA does not specifically direct services to juveniles, and the BJA grant program has a component to provide assistance to former gang members released from prison and reentering their communities that was not present in the OJJDP grant programs. DOJ has provided grant funding to localities across the nation under the Gang-Free Schools and Communities Program, the Gang Reduction Program, and the Gang Prevention Coordination Assistance Program primarily to test models for communities to follow in implementing approaches for addressing gang problems. Since the 1980s, OJJDP's anti-gang programs for juveniles were demonstrations or tests of the Comprehensive Community-Wide Gang Program Model developed by Irving Spergel, a researcher and professor at the University of Chicago. This model was based on the results of an assessment directed by Dr. Spergel and funded by OJJDP beginning in 1987 and was first implemented in the Little Village neighborhood of Chicago in 1993. To develop the model, Dr. Spergel and his research team conducted a national survey to attempt to identify every promising community gang program in the United States and then identify the common elements that were essential to each program's success, based on community representatives' responses to the survey questions. The research team identified the following five elements common to promising community gang programs that became the comprehensive model:
Community mobilization: Involvement of local citizens, including former gang-involved youth; community groups; and agencies, as well as coordination of programs and staff functions within and across agencies.
Opportunities provision: Development of a variety of specific education, training, and employment programs targeting gang-involved youth.
Social intervention: Involvement of youth-serving agencies, schools, faith-based organizations, police, and other juvenile and criminal justice organizations in reaching out to gang-involved youth and their families and linking them to needed services.
Suppression: Use of procedures including close supervision and monitoring of gang-involved youth by agencies of the juvenile and criminal justice systems and also by community-based agencies, schools, and other groups.
Organizational change and development: Development and implementation of policies and procedures that result in the most effective use of available and potential resources, within and across agencies, to better address the gang problem.
In 1995, OJJDP awarded funds to five competitively selected sites that demonstrated the capacity to implement the model and then, in 1999, OJJDP funded a rural gang initiative to test the model in four rural communities with growing gang problems. Evaluations of the first five sites to demonstrate the model determined that, while results varied from location to location, when properly implemented, a combination of prevention, intervention, and suppression strategies was successful in reducing the gang problem. A national evaluation of the rural gang initiative was not completed because of staffing issues on the evaluation team. OJJDP continued to test the model with two of the three programs that were funded as of January 2009. The Gang-Free Schools and Communities Program began in 2000 and the Gang Reduction Program began in 2003. The third program, the Gang Prevention Coordination Assistance Program, is not an additional test of Dr. Spergel's Comprehensive Community-Wide Gang Program Model. Rather, the program provides funds for demonstration locations to hire a coordinator who will enhance the coordination of existing community-based gang prevention and intervention strategies that are closely aligned with local law enforcement efforts. Table 2 provides information on the four anti-gang programs that were funded as of January 2009. The Comprehensive Anti-Gang Initiative administered by BJA is the largest currently funded anti-gang program. The initiative was announced in 2006 as an extension of Project Safe Neighborhoods (PSN), which has the broader goal of reducing violent crime in communities. The Comprehensive Anti-Gang Initiative was designed to build on successes in the Project Safe Neighborhoods program by stressing the importance of collaboration among federal, state, and local law enforcement and community organizations. The initiative provided grant funding to sites in the following three areas: (1) law enforcement; (2) programs to prevent youth from joining gangs or remaining affiliated with gangs; and (3) services for former gang members re-entering the community after prison terms. Specifically, each community awarded funds under the initiative received a total grant of $2.5 million to be used over 3 years. Of the $2.5 million in grant funding, $1 million was to be spent on law enforcement; $1 million on prevention and intervention activities; and $0.5 million to create reentry assistance programs for transitional housing, job readiness and placement assistance, and substance abuse and mental health treatment for prisoners re-entering society. By incorporating these three components into the grant program, DOJ intends to address gang membership and gang violence at every stage. The following 12 locations received grant awards from fiscal year 2006 through fiscal year 2008:
Fiscal year 2006 grantees: Los Angeles, Calif.; Tampa, Fla.; Milwaukee, Wisc.; Cleveland, Ohio; an area of Pennsylvania encompassing communities from Easton to Lancaster, Pa.; and Dallas/Fort Worth, Tex.
Fiscal year 2007 grantees: Oklahoma City, Okla.; Raleigh/Durham, N.C.; Rochester, N.Y.; and Indianapolis, Ind.
Fiscal year 2008 grantees: Chicago, Ill., and Detroit, Mich.
Communities that received grants from OJJDP and BJA had flexibility in determining how to allocate and use grant funding and used the funding in support of different anti-gang approaches and programs, including intervention activities directed specifically to at-risk youth and prevention activities designed to benefit a range of residents in the targeted communities. Examples of activities initiated by OJJDP-funded communities in locations we visited include:
A summer program with activities for youth and families at city parks that stayed open until midnight.—Los Angeles, Calif.
A full range of services for gang-affiliated offenders who had been incarcerated and were returning to the community. Services available included training in adult literacy and anger management, substance abuse prevention, housing assistance, and job placement.—Dallas, Tex.
Parenting classes and prenatal and infant health care services.—
Youth mentoring programs.—Durham, N.C.
The way communities chose to distribute grant funds also differed. Some communities chose to distribute a larger amount of funds to a small number of subgrantees and participants, whereas other communities chose to distribute smaller funding amounts to a larger number of subgrantees and/or serve large numbers of people. Officials in Pittsburgh, Pa., and Richmond, Va., who participated in OJJDP's Gang-Free Schools and Communities Program and the Gang Reduction Program, respectively, provided examples of how funds were allocated in their communities. In Pittsburgh, Pa., the Gang-Free Schools and Communities project served about 100 boys in the area where the highest incidence of violent gang activity was reported. This target area was also heavily affected by poverty, unemployment, and social disorganization. The participants were identified by the schools or other community organizations as being "gang-involved" and were provided with prevention and intervention services. Services included an after-school program and a mentoring program, as well as substance abuse treatment and employment services. In Richmond, Va., the Gang Reduction and Intervention Program was broader in scope, serving youth with a range of services from medical care to job training based on their needs. Officials stressed the importance of the assessment phase of the program. The officials developed a resource inventory tool to assess resource availability and gaps that has been shared with other communities nationwide. Richmond was using both proven and new programs in two target areas of the city to work on gang prevention and intervention issues. For example, the officials said that one of their subgrantees, the Boys and Girls Club, had a proven track record in successfully implementing gang prevention and intervention programs, but that they had also supported more than 60 other promising programs through the grant, ranging from small faith-based groups to larger community organizations. The grant had been used to provide services to a large number of young people and other residents, and it resulted in collaborative partnerships among social service agencies and community service providers and, to an extent, between law enforcement and social service agencies. Figure 10 shows several services and programs that received funding under the grant. Communities we visited that received funding under the Comprehensive Anti-Gang Initiative also were using, or planned to use, the grants for a wide variety of activities.
For example, in Dallas, Tex., program officials said that gang prevention activities would take place primarily in schools. The officials planned to use the law enforcement component of the initiative for police overtime for operations designed to get gang members off the streets in targeted areas. The re-entry component funds were to be used to implement a comprehensive strategy to assist offenders in preparing for release while still in prison and then to offer services on their release from prison. In Tampa, Fla., the city’s “Gang Out” program provided many prevention activities. For example, in June 2008, more than 1,200 at-risk youth aged 7 to 14 in nine neighborhoods identified as “hot spots” for gang activity were participating in “Gang Out” programs. Children were referred to the program by law enforcement, social services, school officials, parents, or others, and they were provided access to a variety of structured services and activities according to their needs (e.g., mental health counseling, tutoring, mentoring, field trips, and participation on sports teams). Tampa officials said they were using the law enforcement portion of the grant to help pay overtime for law enforcement task forces investigating gang crime, and to develop a database to facilitate information sharing among federal, state, and local law enforcement agencies. Re-entry funding was being used to provide counseling, job placement assistance, housing, and other services to ex-gang members about to exit prison and return to the Tampa area. Figure 11 shows one young “Gang Out” participant at work on an anti-gang mural and the mural the young people completed. DOJ sponsored evaluations of OJJDP grant programs that reported both benefits and challenges faced by communities in implementing comprehensive anti-gang models and awarded a contract to Michigan State University for a national evaluation of BJA’s Comprehensive Anti-Gang Initiative. Cosmos Corporation completed an evaluation of the Gang-Free Schools and Communities Program in November 2007. The Urban Institute completed an interim evaluation of the Gang Reduction Program in May 2008. The final evaluation of the Gang Reduction Program was not completed in April 2009 as scheduled because, according to OJJDP officials, the evaluators were waiting to receive additional data from one demonstration site; the officials anticipated that the evaluation would be completed in 2009 but did not provide a revised estimated completion date for the evaluation. Officials said that no evaluation component of the Gang Prevention Coordination Assistance Program was funded because funding levels awarded to communities for gang prevention coordinator positions were relatively small amounts of $200,000 or less. With respect to BJA’s Comprehensive Anti-Gang Initiative, Michigan State University is expected to complete an interim report in late 2009 and provide it to NIJ, the OJP component handling the evaluation of the program. According to DOJ-sponsored evaluations, grant recipients benefited in various ways from the grant programs. First, grant recipients received federal funds to implement gang enforcement, prevention, intervention, and reentry programs that they might otherwise not have been able to implement. These programs benefited those individuals who participated in them. 
For example, the evaluation of the Gang-Free Schools and Communities Program found that each of the four communities awarded grants implemented the programs according to requirements by providing outreach and social services to at least 100 youth in targeted communities and neighborhoods. The youth served ranged in age from 12 to 24 years with a median age of 16.2 years, and 75 percent of the participating youth were gang members. Second, grant recipients received technical assistance and support from OJJDP and BJA to implement their approaches to addressing gangs. For example, the interim evaluation of the Gang Reduction Program found that significant implementation successes were achieved at all four of the sites. Each grantee developed strategic plans consistent with target area needs and problems and achieved broad participation in planning the program. Communication about gang issues within the target area and among participating organizations generally improved over the course of the program. Third, as a result of having federal funds and participating in a grant program, sites obtained needed leverage to bring together a wide range of stakeholders, such as law enforcement agencies, social service providers, and faith-based and community groups, to address communities’ gang problems. Despite these benefits, the projects reported mixed results in achieving reductions in gang crime. For example, evaluations of OJJDP-funded programs—the Gang-Free Schools and Communities Program and the Gang Reduction Program—showed that, while grantees were successful in doing strategic planning, forming community partnerships, and implementing programs, and their experiences offered insights on best practices, the evaluations found little evidence that the programs effectively reduced youth gang crime. In particular, the Gang-Free Schools and Communities Program evaluation concluded that while some measures of gang crime decreased and each location had anecdotal evidence of success with some individuals, overall, the program had little positive effect on the targeted youth who participated in the programs. Similarly, the preliminary evaluation of the Gang Reduction Program found that the program had not achieved its goal of reducing gang crime. One location, Los Angeles, Calif., had a decrease in gang crime rates. The other locations had no changes or slight increases in gang crime after program implementation. In addition, communities that received funding under OJJDP’s grants to demonstrate the Comprehensive Community-Wide Anti-Gang Model faced difficulties in sustaining their programs after federal funding ended. No sites that received grant funds under the Rural Gang Initiative sustained the project with local funding or new grants after federal funding ended. Likewise, by 2004, all of the communities that had received funding under the Gang-Free Communities program had used all of the federal funding allocated and could not continue the programs with other funding sources. Of the communities that received funding under the Gang-Free Schools program, Cleveland, Ohio, did not sustain the program; however, as of March 2009, three other communities that received grants were sustaining at least some parts of their programs with other sources of funding. For example, the Houston, Tex., and North Miami Beach, Fla., programs received city funds, and the Pittsburgh, Pa., program received funding from the School Board and Project Safe Neighborhoods. 
The project coordinator in Pittsburgh, Pa., noted that sustainability is a great challenge. According to the coordinator, it was possible to sustain a program that targeted and enrolled about 100 gang-involved youth who lived or attended school in the target area once federal funding expired. However, expanding the program was a concern. For example, the coordinator wanted the project to be able to serve girls in addition to boys and increase efforts to provide services to youth reentering the community after being detained in juvenile facilities, but she had not identified funding sources to do so. Finally, OJJDP reported that very little planning for sustainability of projects and services funded by the Gang Reduction Program had taken place by 2006; however, in 2007, OJJDP reported that three of the four communities that were awarded funding under the Gang Reduction Program had taken steps toward sustaining at least portions of the initiative beyond the federal funding period. For example, in North Miami Beach, Fla., the initiative was incorporated as a nonprofit organization in 2007, and in Richmond, Va., partnerships with the Office of the Attorney General and the Richmond Police Department ensured that some efforts would be sustained. DOJ has not yet made decisions about whether to expand existing anti-gang grant programs or fund new programs in the future. In particular, OJJDP and BJA officials told us that the department has not yet made decisions about priorities and availability of funding for future anti-gang programs. As of March 2009, OJJDP officials said that, in the short term, no plans are in process for funding future demonstration projects of the Comprehensive Community-Wide Gang Program Model because OJJDP plans to use the funding it has available to provide information and technical assistance to communities nationwide to assist them in implementing community-based anti-gang efforts using lessons learned from the demonstration projects that have been completed or are underway. To this end, in 2008, DOJ published a report on best practices to address community gang problems based on OJJDP’s comprehensive gang model and lessons learned from the demonstration projects. The report is available to interested communities through OJJDP’s Web site, and OJJDP officials said that they had discussed aspects of the report with officials of communities that were interested in implementing the model. According to OJJDP officials, communities in Nevada, Oklahoma, Utah, and North Carolina are developing anti-gang programs based on OJJDP’s recommendations without federal funding, and have consulted OJJDP for information on best practices and lessons learned. Since these programs are all in the planning stages, it is too early to tell whether they will be successfully implemented, reduce youth gang crime, and be sustainable by the communities without federal funding. With regard to the Comprehensive Anti-Gang Initiative, the BJA-sponsored evaluation of sites funded under this program has not yet been completed, but the evaluation will include assessments of programs’ outcomes and sustainability, among other things. 
As of April 2009, BJA officials said that the agency does not currently plan to fund additional locations under the initiative in the short term and that results achieved in the communities currently funded under the program would be considered in making any future determination of whether to expand federal funding for the initiative to additional communities, and whether it is feasible for communities to implement the model without federal funding. Gangs have spread across community, state, and regional boundaries to become a national problem, requiring federal agencies to strengthen their coordination and collaboration on anti-gang programs and initiatives to combat gang crime and violence. Carrying out gang enforcement, prevention, and intervention efforts that involve multiple agencies with varying jurisdictions and missions is not an easy task, especially since agencies have limited resources and, in most cases, competing priorities. Federal agencies have taken positive actions to coordinate their anti-gang programs and initiatives and share information about gang threats and multijurisdictional investigations. However, these actions have not addressed all possible gaps or unnecessary overlaps in anti-gang programs, nor have they addressed all of the challenges identified in this report. Further actions by DOJ and DHS would enhance and sustain their collaboration in combating gangs. In particular, differentiation of the roles, responsibilities, and missions of headquarters-level gang coordination entities—including the MS-13 National Gang Task Force, NGIC, and GangTECC—could enhance DOJ and DHS’s collaboration in combating gang crime and reduce the potential for expending resources on overlapping missions. Moreover, DOJ and DHS could strengthen their efforts to more fully involve ICE in the task force review and approval process. In addition, to assist Congress, federal agencies, and other stakeholders in understanding and assessing gang enforcement efforts, additional actions are needed on the part of DOJ and DHS to improve performance measurement and evaluation. More specifically, consensus is needed on a shared definition of “gang” and other related terms to help federal law enforcement agencies improve their collection, evaluation, and reporting on gang enforcement efforts. For DOJ, a departmentwide performance measure for gangs is needed to help the department and Congress track the progress of the department’s overall gang enforcement efforts. Additionally, at a component level, additional monitoring of the extent to which USAOs track and record gang-related case information is needed to ensure accurate reporting of such information within DOJ and to external stakeholders, such as Congress. 
To strengthen federal agencies’ coordination of anti-gang efforts; help reduce gaps or unnecessary overlaps in federal entities’ roles and responsibilities; and assist the department, Congress, and other stakeholders in assessing federal gang enforcement efforts, we recommend that the Attorney General take the following three actions: (1) direct DOJ law enforcement agencies that lead or participate in the headquarters-level anti-gang coordination entities—including GangTECC, NGIC, the Anti-Gang Coordination Committee, and the MS-13 National Gang Task Force—to, in consultation with DHS, reexamine and reach consensus on the entities’ roles and responsibilities, including identifying and addressing gaps and unnecessary overlaps; (2) develop a departmentwide, strategic-level performance measure for the department’s anti-gang efforts; and (3) direct EOUSA to periodically review gang-related case information entered by USAOs into the case and time management systems to ensure more accurate and complete reporting of USAOs’ gang-related cases. We also recommend that the Attorney General and the Secretary of Homeland Security jointly take the following two actions: (1) ensure that ICE is part of the process for reviewing and approving the creation of new anti-gang task forces and (2) jointly develop a common or shared definition of “gang” for use by DOJ, DHS, and component agencies for reporting purposes. In providing written comments on a draft of this report, DOJ concurred with three of our five recommendations and stated that it will consider the other two recommendations. DHS concurred with the two recommendations directed to DHS. DOJ and DHS provided information on steps they were taking or planning to take to address the recommendations. First, DOJ concurred with our recommendation to, in consultation with DHS, reexamine the roles and responsibilities of four DOJ headquarters anti-gang coordinating entities, including identifying and addressing any potential gaps and unnecessary overlaps. DOJ commented that this role is performed by the Anti-Gang Coordination Committee and that the department will continue to work with ICE, the headquarters-level anti-gang entities, and other DOJ agencies to identify and address gaps and unnecessary overlaps. Second, DOJ stated that the department will consider the recommendation to develop a departmentwide strategic-level performance measure for its anti-gang efforts as part of its strategic planning process. DOJ stated that it is in the initial stages of developing its next strategic plan and that it is too early in the planning process to state for certain that such a measure would be included. DOJ commented that senior leadership will consider including such a measure, recognizing that gangs are a major factor in many crimes and that raising the visibility of DOJ’s efforts to combat gangs could weigh in favor of including one. Given that efforts to address gangs have been a major part of DOJ’s overall approach to combating violent crime, we continue to believe that a departmentwide performance measure for gangs would help DOJ and Congress track the progress of the department’s overall anti-gang efforts. 
Third, DOJ concurred with our recommendation for EOUSA to review the case and time management systems of the USAOs to ensure more accurate and complete reporting of their gang-related cases and noted that, beginning in fiscal year 2010, USAOs will be specifically required to enter gang-related information into the data management systems accurately and in a timely manner. According to DOJ, compliance with this requirement will be measured in performance evaluations of each USAO approximately every 3 years by evaluation and review staff. We believe this is a positive step that could help DOJ strengthen the completeness and accuracy of USAOs’ data on gang-related cases. Fourth, DOJ and DHS concurred with our recommendation to ensure that ICE is part of the process for reviewing the creation of new anti-gang task forces. DOJ and DHS said they would work on implementing a procedure for reviewing the creation of new anti-gang task forces, and DOJ outlined steps it has begun to take to address this recommendation. Specifically, in a letter dated July 17, 2009, the Deputy Attorney General formally extended an invitation to ICE to be a member of the Anti-Gang Coordination Committee and its Taskforce Review Subcommittee and to be afforded the same level of review as participating DOJ law enforcement agencies. Finally, DOJ agreed that a shared definition of “gang” for use by DOJ, DHS, and component agencies for reporting purposes would facilitate data collection and evaluation efforts and stated that it would broaden the discussion on whether to develop such a common definition to include DHS and, specifically, ICE; however, DOJ also noted some technical and operational challenges to implementing a uniform definition, including consideration of possible implementation costs. DOJ noted that it will use the Anti-Gang Coordination Committee as a forum to jointly consider a common definition or otherwise develop an effective performance measurement system for anti-gang activities. DHS concurred with the recommendation. We agree that it is important for DOJ to consider the costs of implementing a common definition for “gang” as it explores this issue with DHS through the Anti-Gang Coordination Committee. We continue to believe that a shared definition of “gang” would help agencies improve their collection, evaluation, and reporting of gang enforcement efforts. DOJ’s and DHS’s written comments are contained in appendices V and VI, respectively. We also incorporated technical comments provided by DOJ, DHS, and component agencies as appropriate. We are sending copies of this report to the Attorney General and the Secretary of the Department of Homeland Security, selected congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact Eileen Larence at (202) 512-8777 if you or your staff have any questions concerning this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VII. Historically, gang crime has been associated with urban areas of the country. However, as shown in figure 12, Department of Justice (DOJ) National Youth Gang Surveys from 2002 through 2006 showed that youth gangs are a problem not only for inner cities, but for surrounding suburbs and rural areas, as well. 
The surveys found that respondents who reported gang problems in their jurisdictions reported increases in various categories of gang-related crime, as well. More than half of respondents with gang problems reported increases from 2004 to 2006 in gang-related aggravated assaults and drug sales. Some respondents also reported increases in gang-related robberies, larceny/theft, burglary, and auto theft. In 2007, jurisdictions reported the highest annual estimate of youth gang problems since before 2000, with 86 percent of law enforcement agencies serving larger cities, 50 percent of suburban counties, 35 percent of smaller cities, and 15 percent of rural counties reporting that they experienced problems. In 2005 and 2009, DOJ reported on the gang threat nationally and by region to help policymakers and law enforcement agency administrators understand the dimensions of the problem and to assist them in facilitating policy and allocating resources to address it. The following were some trends the reports identified. In 2008, 58 percent of state and local law enforcement agencies reported criminal gangs were active in their jurisdictions, compared with 45 percent of state and local agencies in 2004. Local street gangs, or neighborhood-based street gangs, remained a significant threat because they continue to account for the largest number of gangs nationwide. Most engage in violence in conjunction with a variety of crimes, including retail-level drug distribution. Gangs remain the primary retail-level distributors of most illicit drugs throughout the United States. They are also increasingly distributing wholesale-level quantities of marijuana and cocaine in most urban and suburban communities. Criminal gangs commit as much as 80 percent of the crime in many communities, according to law enforcement officials throughout the nation. Gang members are becoming more sophisticated in their use of computers and technology. These new tools are used to communicate, facilitate criminal activity, and avoid detection by law enforcement. Many gang members use the Internet to recruit new members and to communicate with members in other areas of the United States and in foreign countries. Forming multi-agency task forces and joint community groups is an effective way to combat the problem. However, decreases in funding and staffing to many task forces have created new challenges for communities. Major national-level street gangs include the Bloods, the Crips, Mara Salvatrucha (MS-13), and 18th Street. Brief descriptions of the composition, size, and criminal activities of these four national gangs follow. Bloods. Bloods members are predominantly African American males. Membership estimates range from 5,000 to 20,000 members in 37 states. Bloods members are involved in distribution of drugs, including cocaine, methamphetamine, heroin, and marijuana, and they are involved in many other criminal activities, including assault, auto theft, burglary, carjacking, drive-by shooting, extortion, homicide, and identification theft. Crips. Crips members are also predominantly African American males. Membership is estimated to be 30,000 to 35,000 members in 41 states. Crips members engage in street-level distribution of powder and crack cocaine, marijuana, and PCP and are involved in other criminal activity such as assault, auto theft, burglary, and homicide. MS-13. MS-13 is a Hispanic street gang estimated to have 8,000 to 10,000 members in at least 38 states. 
MS-13 is an international gang with an estimated 30,000 to 50,000 members worldwide. Traditionally in the United States the gang consisted of loosely affiliated groups; however, law enforcement officials have reported the coordination of criminal activity among MS-13 members operating in the Atlanta, Dallas, Los Angeles, New York, and Washington, D.C., metropolitan areas. MS-13 members smuggle drugs into the United States and transport and distribute them throughout the country. Some members are also involved in alien smuggling, assault, drive-by shooting, homicide, identification theft, prostitution operations, robbery, and weapons trafficking. 18th Street. 18th Street formed in Los Angeles, Calif., but now has an estimated membership of 30,000 to 50,000 in 28 states. In California, about 80 percent of its members are illegal aliens from Mexico and Central America. Gang members are involved in retail-level distribution of cocaine and marijuana, and, to a lesser extent, heroin and methamphetamine. They also commit assault, auto theft, carjacking, robbery, identification fraud, and homicide. Interviewees in 8 of the 15 localities we visited reported the presence of organized national-level street gangs in their communities in addition to local neighborhood gangs, while officials in the other 7 localities said that local neighborhood gangs were the primary source of gang-related crime. In all of the localities, officials attributed a wide range of violent crimes and other criminal activities to gangs. Interviewees in Baltimore, Md.; Los Angeles, Calif.; Chicago, Ill.; Dallas, Tex.; Durham, N.C.; Brooklyn and Manhattan, N.Y.; and Newark, N.J., said that organized national-level gangs are active in their communities. The level of organization and hierarchy of the national gangs varied from location to location. The following are three examples of descriptions by interviewees of how national gangs impact their communities: Los Angeles, Calif.: The city and the eight surrounding counties have about 150,000 gang members linked to 1,000 different gangs. The gangs are very aggressive and violent. Twenty years ago, the Bloods and Crips were the national gangs in control. Now, MS-13 and other Hispanic street gangs, as well as Asian and East European gangs, are emerging with sophisticated structures like organized crime groups. Gangs are developing increasingly sophisticated tools to engage in criminal enterprise. For example, Eastern European and other ethnic gangs appear to be active in white-collar crimes such as identity theft and credit card and health care fraud. Hispanic gangs have affiliated with international drug trafficking organizations, and an African American gang is engaged in extortion of money from small business owners. Chicago, Ill.: The city has a long history of gang activity that spans generations. The police department has identified families in which grandparents, parents, and children had all affiliated with gangs. The city police estimate 70,000 to 100,000 gang members in the area. Some gangs have as many as 10,000 members. Gangs here are large, territorial, and highly organized with written rules and a reporting structure to national gang leaders. Gangs control the retail drug trade. Gangs have corrupted local officials by having members serve as election workers at polling places. The city has experienced cases of police corruption by gang members and gang members employed by the police department and prisons. Newark, N.J.: Ten different sets of Bloods have been identified here. 
They fight each other but occasionally align against the Crips. The relationship between drugs and gangs is strengthening. Gang membership is rising as various gangs establish territory and relationships with narcotics trafficking markets in urban areas and, more recently, establish transportation routes to suburban and rural markets in other states. Interviewees in the other 7 of the 15 localities we visited described the gangs in their area as primarily neighborhood or street-based. Officials in Atlanta, Ga.; Cleveland, Ohio; Milwaukee, Wisc.; Pittsburgh, Pa.; Raleigh, N.C.; Richmond, Va.; and Tampa, Fla., said that gangs present in their areas were primarily local street gangs. In some areas the local gangs loosely identified with national organizations by, for example, wearing the colors of the Bloods; however, officials said that they did not communicate closely with the national gang hierarchies. Officials described gang organizations that were fluid and rapidly changing. Officials said that local gangs can literally take over neighborhoods. They are no less violent than national gang members. For example, officials in Newark, N.J., and Brooklyn, N.Y., said that in areas where both national and street gangs were present, the local gangs were violent and intimidating enough to hold their territory against the national gangs. The following are three examples from officials we interviewed of the composition, size, and criminal activities of the neighborhood gangs in their localities, and how they impact their communities. Pittsburgh, Pa.: The big problem here is with neighborhood street gangs that are involved in drug trafficking and retaliation against witnesses and rival gangs. A recently completed survey determined that affiliations with national gangs do not exist. Gangs are relatively small, usually 10 to 25 members, and are typically affiliated by neighborhood or street block. Some gang sets have adopted the names of streets where their members live. Gang members are predominantly young, with ages from early teens to twenties. Richmond, Va.: Gangs are widespread in the city. The major problem is “home-grown” gang members, many of whom have gone to prison and come back to the community. Because they see no other options, siblings and children of older gang members follow in family footsteps and join local gangs. Crimes committed by these groups include vandalism, murder, assault, and drive-by shootings. Tampa, Fla.: The gang problem is mostly local. Some gangs operate within the city limits and some in the outlying county. The entire area has about 3,000 gang members. Gang members who live outside of the area also come to the beach areas, so the police and sheriff’s departments spend time tracking gangs that are not local. Gangs cross geographic, racial, and cultural boundaries. They are not closely affiliated with national organizations, although national gang groups do come through the area. The gangs are extremely violent as they fight for control of turf and control of the drug trade. Newly arriving immigrants, who tend to carry large amounts of cash and face language barriers, are susceptible to being victimized and may turn to Hispanic gangs for protection and support. Most criminal cases involving gang members are prosecuted by state and local prosecutors; however, the Department of Justice (DOJ), through its Criminal Division and U.S. Attorneys, brings federal charges against gang members under a number of different federal statutes. U.S. 
Attorneys prosecute most federal gang-related crimes, and the Criminal Division prosecutes or assists U.S. Attorneys in prosecuting gang cases of national significance or those that involve national gangs operating across United States Attorneys Office (USAO) jurisdictions (e.g., Bloods and MS-13). Officials in the 15 USAOs we visited said that, under U.S. Attorneys’ guidelines, the types of gang cases that warrant federal prosecution are generally those involving the most violent and dangerous gang members and gangs located in the USAO districts. Federal statutes under which gang members are prosecuted fall into three categories: (1) criminal drug, firearm, and other violent crime statutes where the unlawful acts are not specifically related to membership in a gang; (2) broad statutes under which criminal enterprises—including gangs—may be prosecuted; and (3) enhanced penalties added to the criminal convictions of defendants who are gang members. Officials in the 15 USAOs we visited most frequently reported prosecuting gang members under statutes for offenses involving drugs, firearms, and violent crimes. These offenses are related to the criminal acts the gang members have committed and not their gang membership specifically. Officials noted the following reasons that they frequently prosecuted gang cases under these criminal statutes: evidence to support such charges is relatively easy for juries to understand; investigative and prosecutive resources required to build the cases are not as great as for organized criminal enterprise prosecutions; and penalties and prison sentences for convictions can be substantial. Table 3 lists the offense and code section, the statutory elements of the offense, and penalties for conviction for the federal statutes USAO officials said they used frequently to prosecute gang members. In some instances, USAOs use two broad federal statutes—the Racketeer Influenced and Corrupt Organizations (RICO) Act and the Violent Crimes in Aid of Racketeering (VICAR) Act—to prosecute criminal enterprise organizations. While RICO was originally enacted to dismantle organized crime groups such as the Mafia, prosecutors have successfully used it to bring cases against members of organized gangs. RICO prohibits the commission of a pattern of racketeering activity to invest in, maintain an interest in, or participate in, directly or indirectly, an enterprise, the activities of which affect interstate or foreign commerce; it also prohibits conspiracy to commit any of these activities. The second statute, VICAR, was intended to supplement RICO and makes it unlawful to commit any of a list of violent crimes in return for anything of pecuniary value from an enterprise engaged in racketeering activity, or for the purpose of joining, remaining with, or increasing a position in such an enterprise. The listed violent crimes are murder, kidnapping, maiming, assault with a dangerous weapon, assault resulting in serious bodily injury, and threatening to commit a crime of violence, and they may be violations of state or federal law. The statute also makes it unlawful to attempt or conspire to commit the listed crimes. To use these statutes to prosecute gangs, prosecutors must prove several elements. In particular, prosecutors must prove that the gang functions as an “enterprise” through evidence of an ongoing organization, formal or informal, and by evidence that the various associates function as a continuing unit. 
This may be accomplished by showing that the gang holds meetings, has a specific, stated mission, collects dues, and has a decision-making structure, among other characteristics. DOJ requires that prosecutions under these statutes be coordinated with the DOJ Criminal Division and approved for prosecution by the Criminal Division’s Organized Crime and Racketeering Section. In addition, the staff of that section has developed extensive manuals on RICO and VICAR to assist federal prosecutors in the preparation for and litigation of cases involving those statutes. According to USAO officials we spoke to, RICO and VICAR are tools for prosecution that can disrupt and destroy entire gang structures. One USAO official explained that the organized crime statutes allow prosecutors to tell the entire story of a gang’s existence and criminal activity in an indictment and later to a jury. The official noted that every aspect of the gang and its history—including how it acquired its territory; how it makes and disposes of its money; how it uses coded language, hand signals, and graffiti; and what crimes it has committed and why—can be offered in one coherent story during the trial. Gang cases prosecuted under an organized crime statute generally have more co-defendants than cases prosecuted under narcotics trafficking, firearms, and other violent crime statutes. The officials explained that such cases allow prosecutors to reach deep into the gang hierarchy to the top leaders of the gang organizations and are generally well publicized by the news media. Violations of the criminal provisions of RICO and VICAR carry significant penalties. A gang member convicted under RICO is subject to up to 20 years’ imprisonment, or up to life imprisonment if the violation is based on racketeering activity for which the maximum penalty includes life imprisonment. Defendants also remain subject to conviction and sentencing for the underlying or predicate crimes that make up the racketeering activity. Additionally, the statute provides for the forfeiture of property maintained or acquired in violation of the Act. Under VICAR, conviction may result in up to life imprisonment depending on the predicate offense committed; VICAR murder is a death-eligible offense. Gang members convicted of crimes under RICO or VICAR may be fined up to $250,000. RICO also contains a civil provision that allows people who have been injured by a RICO defendant to recover damages in federal court. USAO interviewees also cited limitations to the use of the RICO and VICAR statutes. RICO and VICAR cases are resource-intensive, time-consuming, and complex, according to some interviewees, and are suited only to prosecutions of highly structured gang organizations, not the local street gangs that are the predominant gang crime concern in some localities. Although U.S. Attorneys do not maintain data on the number of gang-related organized crime cases they accept for prosecution, the total number of organized crime cases they accept for prosecution is a small percentage of the overall caseload. In fiscal year 2007, a total of 217 organized crime cases of all types, including gangs and traditional groups, against 483 defendants were filed in the United States, about 0.4 percent of the total cases filed—and that figure was an increase of 39 percent from the previous year. 
A case charged under organized crime statutes in Los Angeles, Calif., provides an example of the large commitment of time and investigative and prosecution resources required for these complex cases, which can result in indictments of leaders and members of large gang operations. A 3-year investigation of the Florencia 13 Gang in South Los Angeles, Calif., resulted in indictments against 102 defendants on RICO and narcotics trafficking charges. As of January 2009, 76 defendants had been convicted, with other cases pending trial. The defendants were alleged to be part of a controlled drug distribution operation. Gang leaders were charged with collecting fees and/or rent from gang members and others engaged in criminal conduct in areas controlled by the Florencia 13 Gang. Numerous federal and local law enforcement agencies were involved in the investigation. Federal agencies included FBI, DEA, ATF, USMS, ICE, and IRS. Local agencies included the Los Angeles City Police Department, the Los Angeles County Sheriff’s and Probation Departments, and local police departments in five other area towns. As part of the Violent Crime Control and Law Enforcement Act of 1994, Congress established an enhanced penalty for gang-related crimes by authorizing the imposition of an additional term of imprisonment of up to 10 years for participation in certain federal felonies involving drugs or violence by members of criminal street gangs. Some USAO officials we visited said that the sentencing enhancement for gang members was generally not used because the elements of proof required are difficult for prosecutors to establish. For the defendant to be subject to the enhancement, prosecutors must prove several elements. At the outset, there are four components required to establish the “criminal street gang” under the statute: (1) an ongoing group, club, organization, or association of five or more persons; (2) that has as one of its primary purposes the commission of one or more specified felonies involving violence or drugs in violation of federal law; (3) the members of which engage, or have engaged within the past 5 years, in a continuing series of the same specified felonies; and (4) the activities of the criminal street gang affect interstate or foreign commerce. In addition, prosecutors must establish that the defendant committed the crime for which he was charged to promote the felonious activities of the gang or maintain or increase his position in the gang; that the defendant had been convicted of another crime of violence or drug offense arising to a felony under state or federal law within the past 5 years; and that the defendant participates in a criminal street gang with knowledge that its members engage, or have engaged, in a “continuing series” of federal felonies involving violence or drugs. Federal and local agencies, research experts, and others we interviewed, as well as research we reviewed, identified important elements for consideration in developing and implementing an approach for combating gangs. Among others, these elements include: thorough assessment and understanding of local gang problem(s); ongoing communication and coordination among stakeholders; comprehensive and varied efforts (i.e., prevention, intervention, suppression, and re-entry); public and community outreach and visibility for programs; plans for sustainability of programs and efforts; and commitment to long-term performance monitoring, evaluation, and feedback incorporation. 
Presence of these elements does not guarantee that an approach to addressing gangs will be successful. These elements are important for federal, state, and local agencies and communities to consider in developing and implementing anti-gang approaches. In addition, these elements should not be considered an exhaustive list of items for agencies and communities to consider in developing their anti-gang approaches. Rather, these elements were identified as key considerations by the agencies and individuals we interviewed and the research we examined. Thorough assessment and understanding of local gang problem(s): Agencies and individuals we interviewed and research we examined indicated that it is important for agencies and communities to thoroughly assess their gang problems in order to gain a complete understanding of the nature of the gang threat and the resources needed to address that threat. For example, in identifying best practices that resulted from its projects testing the Comprehensive Community-Wide Anti-Gang Model, the Office of Juvenile Justice and Delinquency Prevention (OJJDP) reported that assessments of gang problems helped projects determine types and levels of gang activity, gang crime patterns, community perceptions, and gaps in available services. OJJDP found that assessment also assisted communities in identifying target populations to be served, understanding why those populations merited attention, and making the best use of available resources. Similarly, the Bureau of Justice Assistance (BJA) reported that a needs assessment is often the first step in planning a comprehensive solution to a gang problem, as it can help uncover hidden problems, set priorities, and develop a communitywide consensus about what to do. Agency officials and research experts we interviewed likewise noted the importance of agencies and communities thoroughly assessing gang problems and threats. For example, one prosecutor told us that a “one-size-fits-all” approach does not work for addressing gangs. This prosecutor stated that each jurisdiction or community is different, and a program that works in one community may not work in another, because each community’s gang problem has different characteristics and occurs in a different environment. Furthermore, officials from one U.S. Attorneys Office (USAO) suggested that communities recognize and admit their gang problems, complete an assessment of those problems, and plan strategically for how to best address them. Several research experts also emphasized that gangs are inherently a local problem best addressed through targeted solutions carefully vetted through community input. Consensus among key stakeholders: Based on our interviews and examination of research, it is important for stakeholders, including law enforcement agencies, prosecutors, community-based social service agencies, schools, citizens’ groups, and other interested community residents, to reach consensus on an overall approach and goals for a coordinated anti-gang effort. For example, OJJDP reported that partners should try to find shared goals for anti-gang efforts. According to BJA, planning can help stakeholders to establish a common mission and common priorities and minimize parochial perspectives in favor of broader goals. 
Agency officials and research experts we interviewed also noted the importance of obtaining consensus for anti-gang strategies from all key stakeholders. Officials involved in managing one Department of Justice (DOJ) funded grant program said that it is important for all stakeholders to feel like they have ownership and have bought into the program. They said that allowing all stakeholders to take credit for program successes and providing them all with an opportunity to discuss failures and obstacles helps the program succeed and last over the long term. Furthermore, one local prosecutor stated that a comprehensive approach requires all stakeholders within a community—police, prosecutors, social service providers, and elected officials—to buy into a comprehensive approach for addressing gang problems. If just one of these groups is not committed to such an approach, the approach ultimately may not be successful. Likewise, officials from two USAOs stated that organizing the effort to combat gang violence and understanding the problem are key to combating gangs. They suggested that communities, law enforcement agencies, and researchers agree and buy into the assessment of the gang problem and the strategies developed to address that problem. Ongoing communication and coordination among stakeholders: After consensus is established among stakeholders, agencies we interviewed and research we reviewed indicated that it is important that there be ongoing communication and coordination among stakeholders involved in overall anti-gang efforts. For example, according to BJA, one of the most important components of a successful approach to gangs is multiagency cooperation. BJA reported that communities should actively involve all community components that have a potential interest in responding to gang problems. Federal agencies similarly affirmed the importance of coordination and multiagency efforts to address gangs as part of a comprehensive approach but also within specific efforts. For example, officials from three Federal Bureau of Investigation (FBI) field offices told us that the most critical element to an approach for successfully addressing gangs is for federal, state, and local law enforcement agencies to work together through joint investigations and task forces to leverage information, experience, and resources. These agencies also need to coordinate with and gain cooperation from the USAO to get USAO support for prosecuting cases. Officials from another FBI field office told us that a task force or collaborative approach is crucial in successfully addressing gangs. Agencies that participate in a task force or collaborative approach should clearly define their roles and responsibilities and clearly understand other agencies’ roles, responsibilities, resources, and missions. In addition, Drug Enforcement Administration (DEA) officials discussed the task force model as being a critical element in any successful effort to combat gangs, drugs, and violent crime. They stated that task forces facilitate information sharing among participating agencies, nurture cooperation, and build trust among participants. Comprehensive and varied efforts (i.e., law enforcement, prevention, intervention, and reentry): Agencies and research experts we visited and research we reviewed indicated that overall anti-gang approaches should include a variety of efforts that address law enforcement, prevention, intervention, and reentry. 
For example, OJJDP reported that comprehensive programs that incorporate prevention, intervention, and enforcement components are most likely to be effective. Gang research experts have argued that enforcement responses are less likely to be successful if isolated from other strategies. It is important that prevention and intervention activities occur in conjunction with suppression, despite challenges in implementing and maintaining such efforts. Moreover, BJA recommended that communities with emerging or existing gang problems plan, develop, and implement comprehensive responses that include a broad range of community-based components. Officials we interviewed similarly commented on the importance of an overall anti-gang approach including programs and initiatives that address law enforcement, prevention, intervention, and reentry. For example, officials managing one DOJ-grant funded program stated that strong state laws and gang enforcement efforts are needed to complement prevention, intervention, and reentry programs. They said that a successful gang approach is one that appropriately includes and balances prevention, intervention, enforcement, and reentry. One local prosecutor suggested that a community or local government invest in prevention and intervention programs as well as law enforcement programs, suggesting that communities may be more successful at combating gangs and curbing gang violence when they take a communitywide approach in addressing the problem. For example, one of the research experts we interviewed suggested that pulling together strategies “across intervention domains” such as law enforcement, social services, and schools is an effective way to sustain anti-gang efforts. Public and community outreach and visibility for programs: Our interviews indicated that it is important for entities involved in implementing anti-gang programs to conduct community outreach and provide publicity and visibility for the programs and program accomplishments. For example, officials from one DOJ-funded grant program told us that one element important to successful program implementation is for program officials to be visible to communities. When program officials are on the streets in communities, it shows communities that the officials care and are invested in the program. DEA officials also suggested that community leaders need to be willing to speak out against violence. They stated that before prevention and intervention efforts can be effective in a specific community, law enforcement agencies first have to get violence under control so that community members are not afraid to participate in community events or report crimes to law enforcement agencies. Moreover, according to one USAO, anti-gang programs and services can be unified under a common brand and marketed aggressively, so that the public is aware of the existence and affiliation of anti-gang efforts. Plans for sustainability of programs and efforts: Entities we interviewed and research we reviewed suggested that agencies and communities should plan on how to sustain their anti-gang programs and initiatives over time, particularly as gang problems and availability of resources change. For example, OJJDP recommended that programs begin planning for long- term sustainability during the initial stages of implementation. Agency officials and experts interviewed also noted that resolving gang problems can require a long-term commitment. 
For example, one local prosecutor told us it can be challenging to sustain a communitywide approach to dealing with gang problems because such an approach requires leaders committed to making a long-term investment in the approach and resources to sustain the long-term effort. Another local prosecutor noted that it is important to sustain gang programs over the long term because as soon as a community or region believes it has solved its gang problems and scales programs back, gangs reemerge. Research experts noted that communities have to be willing to invest resources in anti-gang programs, particularly comprehensive programs, for a long period of time in order to achieve results and establish programs that include adequate study time up-front in order to measure results. Performance monitoring, evaluation, and feedback incorporation: A final common element identified by individuals we interviewed and research we examined was the regular monitoring and evaluation of program progress and performance and the incorporation of feedback and performance results into programs. For example, OJJDP reported that evaluation is a valuable tool that can tell the community whether it has accomplished what it set out to do and whether there are ways to do it better. Department of Justice, Office of Justice Programs, Office of Juvenile Justice and Delinquency Prevention, OJJDP Comprehensive Gang Model: Planning for Implementation (Washington, D.C.: June 2002). To examine the roles of the Department of Justice (DOJ) and Department of Homeland Security (DHS) in gang enforcement efforts and the extent to which the efforts are coordinated with each other and state and local partners, we reviewed federal agencies’ strategies and plans to combat gang crime and interviewed headquarters DOJ and DHS officials involved in gang crime enforcement activities. We reviewed DOJ’s and DHS’s strategic plans including goals and objectives for efforts to combat gang crime and how the departments assessed their performance in meeting these goals and objectives. We compared DOJ and DHS coordination and information sharing efforts to criteria in our prior work on effective interagency collaboration and results-oriented government. We also examined staffing levels and budgets for DOJ and U.S. Immigration and Customs Enforcement (ICE) within DHS. To assess the reliability of statistical information and budget data we obtained, we discussed the sources of the data with agency officials and reviewed documentation regarding the compilation of data. We determined that the data were sufficiently reliable for the purposes of this report. Using semi-structured interview instruments, we interviewed the U.S. Attorney or designated staff of U.S. Attorneys Offices (USAO) and supervisory agents of DOJ and DHS law enforcement agencies involved in investigating and prosecuting gang members in 15 localities across the country. We also reviewed anti-gang strategies and other documentation of enforcement efforts to reduce criminal gang activity in these localities. We focused our discussions on particular localities within the U.S. Attorneys’ districts rather than the district as a whole. For example, in the U.S. Attorney District of Maryland, we discussed anti-gang efforts in Baltimore. 
The localities we visited were Atlanta, Ga.; Baltimore, Md.; Brooklyn, N.Y.; Chicago, Ill.; Cleveland, Ohio; Dallas, Tex.; Durham, N.C.; Los Angeles, Calif.; Manhattan, N.Y.; Milwaukee, Wisc.; Newark, N.J.; Pittsburgh, Pa.; Raleigh, N.C.; Richmond, Va.; and Tampa, Fla. We selected these localities based on a mix of criteria that included a desire to talk with officials in communities of varying sizes and geographic locations that had received federal grants to address gang-related crime problems. Other criteria considered in selecting these localities included the location of the USAO and federal law enforcement agency field offices, the presence of federally led task forces to combat gang crime, and recommendations of officials of DOJ and DHS components during our preliminary interviews. We also considered the results of our review of data from the 2006 Federal Bureau of Investigation (FBI) Uniform Crime Report on localities’ population and number of violent crimes. We considered these data in order to select localities that represent a range of population sizes and violent crime concerns. We also considered sites that were located in close geographic proximity where possible in order to maximize travel resources. At each locality, in addition to meeting with federal officials, we met with selected state and local prosecutors and law enforcement officials to discuss the gang problem in their area and the federal agencies’ role in helping to address it. The results of our site visits cannot be generalized across all U.S. Attorney districts, DOJ or ICE field offices, or states and localities in the United States. However, because we selected sites and localities based on a variety of factors, they provided us with a broad overview of federal anti-gang activities of DOJ and ICE, including law enforcement as well as prevention and intervention programs. See table 4 for additional information on the interviews we conducted in each locality we visited. To determine how DOJ and DHS have measured the results of their gang enforcement efforts, we first assessed how DOJ and DHS components define “gang” and gang-related crimes. We then reviewed data on gang-related investigations and prosecutions maintained by DOJ and DHS law enforcement agencies and U.S. Attorneys. To assess the reliability of statistical information and budget data we obtained, we discussed the sources of the data with agency officials and reviewed documentation regarding the compilation of data. We determined that the data were sufficiently reliable for the purposes of this report. We also interviewed headquarters officials about performance measurement initiatives and reviewed DOJ and DHS strategic plans, budgets, and performance reports, and we reviewed a DOJ Office of Inspector General report that evaluated the Executive Office for U.S. Attorneys’ (EOUSA) case management system. We compared DOJ and DHS efforts to measure the results of their gang enforcement efforts to our prior work on effective interagency collaboration and results-oriented government. In addition, we asked state and local law enforcement officials in nine of the 15 localities we visited how they measured the results of local gang enforcement efforts. 
To determine how DOJ administers and/or supports gang prevention, intervention and law enforcement programs through grant funding, we examined documentation of DOJ’s overall approach and objectives for anti-gang grant programs, as well as DOJ-sponsored evaluations and a guide to best practices to address community gang problems. We interviewed Office of Juvenile Justice and Delinquency Prevention (OJJDP) and Bureau of Justice Assistance (BJA) officials about the status of funding, sustainability of anti-gang programs without federal funding, and results of evaluations of the effectiveness of the anti-gang grant programs, among other topics. We also reviewed funding levels for fiscal years 2007 and 2008 for the four active grant programs that we identified as being directly focused on anti-gang efforts and included in the scope of our review. In eight localities we visited that received federal grants for anti-gang efforts, we interviewed grant recipients to determine activities that they were pursuing with the grant funds and how they planned to sustain programs when federal funding expired. In two of these locations, we observed youth gang prevention and intervention programs in process and spoke with participants and representatives of community-based groups implementing them to gain an understanding of the scope of the demonstration projects and how they used federal funds for anti-gang prevention and intervention activities. In addition, we interviewed USAO officials to obtain information on their roles in anti-gang programs. We reviewed guidance on developing and implementing comprehensive prevention, intervention, and suppression programs and key documents related to the four federal grant programs, including grant applications, community reports on the use of grant funding and performance data. We did not, however, review every program supported by federal funding that communities could use for anti-gang efforts or for other law enforcement and crime prevention efforts. For example, DOJ grant programs including the Community Oriented Policing Services, Weed and Seed, Project Safe Neighborhoods, and Edward Byrne Memorial Justice Assistance Grant Program were not in the scope of our review because they are not specifically designated for anti-gang efforts, although communities could choose to use funds from the grants for anti-gang activities or for other law enforcement and crime prevention purposes. We also reviewed available nationwide evaluations of grant programs sponsored by DOJ. Nine criminal justice researchers with expertise on anti-gang issues provided their views on how effective the federal government has been in measuring its gang suppression, prevention, and intervention activities and whether the programs are sustainable without federal funding and likely to be implemented by communities that did not receive federal grants based on lessons learned from the federally funded projects. Semistructured interviews with officials in 15 localities provided information to help address each of our three reporting objectives. 
During our visits, we interviewed officials of the following offices: FBI; Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF); Drug Enforcement Administration (DEA); and ICE offices, when a field division or resident office or an FBI-, ATF-, DEA-, or ICE-led task force was present and, through communication with officials of the USAO, was found to be engaged in local anti-gang efforts; local law enforcement agencies, when those agencies accepted our request for an interview; and state, local, or nongovernmental entities that received or were responsible for administering DOJ anti-gang grants. Table 4 lists the officials we interviewed at each locality visited. We also interviewed officials of the Los Angeles County District Attorney’s Office and the Los Angeles City Attorney’s Office who had an established history of prosecuting gang-related crime, and we pre-tested our structured interview instruments with law enforcement officials in Washington, D.C.; Montgomery County, Md.; and Northern Virginia. Research experts provided input to our reporting objective on how DOJ administers and/or supports gang prevention, intervention and law enforcement programs through grant funding and to appendix IV, which provides perspective on important elements for consideration in developing and implementing an approach for combating gangs. We identified research experts through a review of literature related to gangs and gang crime issues, their participation in gang-related conferences, and by asking federal officials for recommendations. We contacted these research experts by e-mail with several questions, and we either discussed their answers in telephone interviews or received e-mail responses from them. The following research experts contributed their views: G. David Curry, University of Missouri-St. Louis; Scott Decker, Arizona State University; Finn-Aage Esbensen, University of Missouri-St. Louis; Karl Hill, University of Washington; Ronald Huff, University of California-Irvine; Charles Katz, Arizona State University; David M. Kennedy, John Jay College of Criminal Justice, City University of New York; Malcolm Klein, University of Southern California; and Irving Spergel, University of Chicago. We conducted this performance audit from December 2007 through July 2009, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Rebecca Gambler, Assistant Director; Katherine Davis; Anthony Fernandez; Deborah Knorr; Amanda Miller; Jeffrey Niblack; Octavia Parks; Janet Temko; and Jeremy Williams made significant contributions to this report. | The Department of Justice (DOJ) estimates that the United States has about a million gang members. While state and local agencies have primary responsibility for combating gang crime, the federal government has key roles to enforce laws and help fund programs to provide alternatives to gang membership for at-risk youth. GAO was asked to examine federal efforts to combat gang crime.
This report addresses (1) the roles of DOJ and the Department of Homeland Security (DHS) in combating gang crime and the extent to which DOJ and DHS agencies coordinate their efforts with each other and state and local agencies; (2) the extent to which DOJ and DHS measure their gang enforcement efforts; and (3) how federal grant funding is used to administer or support activities to reduce gang-related crime. GAO reviewed federal agencies' plans, resources, and measures and interviewed federal, state, and local officials in 15 localities with federally led anti-gang task forces representing varying population sizes and locations. Various DOJ and DHS components have taken distinct roles in combating gang crime, and at the headquarters level, DOJ has established several entities to share information on gang-related investigations across agencies. However, some of these entities have not differentiated roles and responsibilities. For example, two entities have overlapping responsibilities for coordinating the federal response to the same gang threat. Prior GAO work found that overlap among programs can waste funds and limit effectiveness, and that agencies should work together to define and agree on their respective roles and facilitate information sharing. At the field division level, federal agencies have established strategies to help coordinate anti-gang efforts including federally led task forces. Officials GAO interviewed were generally satisfied with the task force structure for leveraging resources and taking advantage of contributions from all participating agencies. Federal agencies have taken actions to measure the results of their gang enforcement efforts, but these efforts have been hindered by three factors. Among other measures, one agency tracks the number of investigations that disrupted or shut down criminal gangs, while another agency tracks its gang-related convictions. However, agencies' efforts to measure results of federal actions to combat gang crime have been hampered by lack of a shared definition of "gang" among agencies, underreporting of information by United States Attorneys Offices (USAOs), and the lack of departmentwide DOJ performance measures for anti-gang efforts. Definitions of "gang" vary in terms of number of members, time or type of offenses, and other characteristics. According to DOJ officials, lack of a shared definition of "gang" complicates data collection and evaluation efforts across federal agencies, but does not adversely affect law enforcement activity. DOJ officials stated that USAOs have underreported gang-related cases and work, in part because attorneys historically have not viewed data collection as a priority. In the absence of periodic monitoring of USAO's gang-related case information, DOJ cannot be certain that USAOs have accurately recorded gang-related data. Further, DOJ lacks performance measures that would help agencies to assess progress made over time on anti-gang efforts and provide decision makers with key data to facilitate resource allocation. DOJ administers several grant programs to assist communities to address gang problems; however, initiatives funded through some of these programs have had mixed results. A series of grant programs funded from the 1980s to 2009 to test a comprehensive communitywide model are nearing completion. Evaluations found little evidence that these programs reduced youth gang crime. 
DOJ does not plan to fund future grants testing this model; rather, DOJ plans to provide technical assistance to communities implementing anti-gang programs without federal funding. DOJ also awarded grants to 12 communities during fiscal years 2006 to 2008 under another anti-gang initiative. The first evaluations of this initiative are due in late 2009, and no additional grants will be funded pending the evaluation results. |
VA’s Veterans Benefits Administration (VBA) is responsible for administering benefit programs, such as disability compensation and pension. Veterans and their families can apply for benefits at any of VA’s 58 VAROs. Significant differences exist among VAROs; for example, as of September 30, 1994, their claims processing staffs ranged in size from 11 to 219. Likewise, performance varies considerably; for example, the time needed to process initial disability claims ranged from 86 to 367 days in 1994. VA’s ability to process claims for benefits in a timely way has been a major topic of concern for many years. In 1990, VA took steps to fundamentally change the way services are provided to veterans. A key element of those changes is modernization of VBA’s automated information systems, projected to be completed in 1998. Progress on this effort has been slow, and we have raised significant concerns about the adequacy of planning and implementation. In response to our initial work, VA agreed with the Director of the Office of Management and Budget (OMB) to, among other things, increase project oversight, establish outcome-oriented performance measures and document the system’s effect on service, and update the project’s economic analysis. The OMB agreement included timeliness goals to be met by the end of fiscal year 1998—as well as interim goals—for selected types of claims, including initial disability compensation and initial pension claims. In 1990, the Secretary of VA also asked all VAROs to identify and implement innovative changes aimed at speeding up claims processing and reducing the growing backlog. In response, some VAROs undertook major restructuring initiatives, but most continued using the traditional “assembly-line” approach to processing. Under this approach, each claim passes through several individuals, each of whom performs a specific task. One person enters the claim into the computerized system and opens the claims file. Another then determines what information is needed and develops requests for that information. Another communicates with VA hospital staff if a physical examination is needed. These steps continue until an “authorizer” approves the decision. Often, files are centrally located and are sent back and forth from the central files to various claims processors many times before a claim is decided. The claims backlog and processing times did not decrease but grew from 1990 to 1993. The backlog of compensation and pension claims grew from about 378,000 to about 528,000 during that period. Table 1 shows that, during the same period, average processing time increased for the four types of claims specifically included in VA’s agreement with OMB. VA attributed its claims processing difficulties to several factors, including significantly increased workloads resulting from downsizing of the military, increased complexity of claims, and expanded responsibility resulting from decisions by the U.S. Court of Veterans Appeals created in 1988. VA’s Blue Ribbon Panel, which examined the claims process, concluded: “There is no ownership or accountability associated with the process. The claim physically moves from one location to the next, with each person responsible for a small part of the process and each movement contributing to further delay in the claim.” The panel’s recommendations addressed what it saw as three key problem areas in claims processing: (1) inadequate claims development, (2) excessive response time for obtaining evidence, and (3) an unacceptably long time to rate cases. (See app. I for a list of the panel’s recommendations.)
The recommendations were based, in part, on initiatives already implemented in one or more VAROs and on panel members’ judgment. The panel was composed of people from both inside and outside VA with extensive experience and knowledge of VA operations and relied on expertise and judgment to identify root causes and develop recommended changes. Our work and the work of others have also identified these three areas as significant problems for VA. During 1994 some VAROs continued or began making changes intended to improve claims processing, and VA worked to develop guidance and policies for implementing the Blue Ribbon Panel recommendations. During that year the number of claims awaiting a decision decreased somewhat, from 528,000 in 1993 to about 485,000 in 1994. However, average processing times increased and VA moved further away from, rather than closer to, the 1998 timeliness goals. Officials told us that processing times increased because, during the later part of the year, VA focused on reducing the backlog of old claims, thus increasing the average age of claims closed. Table 2 shows the average 1994 processing times for the four types of claims included in the OMB agreement compared with the average time in 1993 and the 1998 goals. To determine VA’s plans for implementing change, we examined ongoing and planned efforts to change claims processing structures and procedures in seven VAROs (see app. II). We judgmentally selected VAROs that differed in size and the number and type of changes already made. At these locations we discussed the impact of changes with officials, analyzed pertinent processing data and reports, and observed claims processing activities. We also visited VA’s eastern and western area offices, where we discussed the initiatives that VAROs in each area had implemented and the area offices’ role in implementing and monitoring those and future initiatives. In addition, we analyzed the findings and recommendations of VA’s Blue Ribbon Panel and headquarters’ plan for implementing them. To evaluate VA’s plans for determining the effectiveness of VARO changes, we discussed plans for assessing the impact of changes with officials from VBA’s Compensation and Pension Service and Program Analysis and Evaluation staff. We also discussed how VA plans to ensure that VAROs implement those initiatives that offer the greatest promise for solving their claims processing problems. Our work focused on changes in VAROs’ claims processing structures and procedures and not on VBA’s computer system modernization effort or VBA’s reengineering task force—which is charged with looking beyond compensation and pension issues to improving operations throughout VBA. An ongoing GAO study is addressing VA’s systems modernization efforts and their relationship to the changes in VARO structures and procedures discussed in this report and to VBA’s reengineering task force. Additionally, at VA’s request, the Center for Naval Analyses is conducting an independent assessment of the coordination, control, and integration of key modernization activities, including their relationship to other initiatives aimed at improving claims processing. Our review was conducted between October 1993 and August 1994 in accordance with generally accepted government auditing standards. VA has developed several model claims processing structures that incorporate some key Blue Ribbon Panel recommendations.
In addition, VA is working to modify regulations and other claims processing policies to encourage and allow a variety of procedural changes. VAROs will have flexibility in deciding which initiatives to implement, given their individual circumstances. However, our review showed that some VAROs are not likely to implement some initiatives. They may be reluctant to make changes or face logistical obstacles to doing so. Also, they may not have knowledge of the experiences of other VAROs, knowledge that could overcome reluctance or show ways to get past obstacles. The Blue Ribbon Panel concluded that VA’s traditional assembly-line claims processing system should be completely restructured. The goals of the revised structures represented in the models, as described by VA officials, are to put fewer resources into clerical functions and more into decisionmaking—especially rating claims—and to ensure good service as required by the Government Performance and Results Act. To replace the current assembly-line system, VA developed models that reorganize staff along two basic types of team structures, one based on a case management approach and the other on a functional alignment approach. The case management approach organizes staff into small work teams responsible for all claims processing steps for all or most types of claims. This approach reduces the number of staff involved in processing each claim. The functional alignment approach organizes staff into two types of work teams. One team handles the processing of all claims that require a rating decision, thereby allowing some specialization of staff responsible for the most complex claims VA processes. The other team performs all claims-related activities for claims that do not require a rating decision. Implementation of each approach could follow one of two paths. Option one would integrate some of the VAROs’ staff responsible for all direct contact with customers—Veterans Services Division staff—with staff responsible for processing claims—Adjudication Division staff. This would allow a veteran to talk directly, in person or by telephone, to the individuals most knowledgeable about his or her specific claim. Option two would keep the functions of the two divisions separate during a transitional phase, after which the two divisions would be fully integrated. In addition to changing the claims processing structure, VA is planning a variety of other initiatives—not specifically related to any one model—to improve processing procedures. These initiatives include, for example, allowing claims examiners to contact claimants by telephone, developing a system to better track and locate claim files, and having claims examiners specialize by type of claim such as initial disability compensation or pension. Although empirical data were often not available to show a positive impact from these initiatives, VARO officials we spoke to who were implementing them believed the initiatives were improving timeliness or other aspects of service. Four VAROs that we visited allowed claims processors to contact sources of evidence by telephone rather than by the standard practice of sending a letter. Officials at all four VAROs found that using telephones was helpful. Officials at one VARO said that contacting sources by telephone shortens the time required to obtain the evidence and helps ensure that claimants and other sources of evidence understand exactly what VA needs. 
Officials at another VARO noted that applications frequently come in lacking critical information such as social security numbers. In such instances, processors simply telephone applicants to obtain the missing information. Locating files is a continuing problem in VA, one that regional officials acknowledged takes considerable staff time. One VARO modified VA’s existing computer system to better track claim files. The files at this VARO were well organized, and officials said they have almost eliminated the problem of lost and misplaced files. Two other VAROs planned to modify their systems in a similar manner. In addition, VA is revising existing computer software that uses bar codes to track files. The revisions will allow more VAROs to use the bar code system. Another initiative—specialization—allows processors to become more knowledgeable about complex issues related to a specific type of claim. According to officials, this practice increases processors’ proficiency. Data from one VARO that implemented specialized work teams late in 1993 show that processing times decreased for the four types of claims included in VA’s agreement with OMB. For example, the time to process initial disability compensation claims decreased from 161 days in 1993 to 141 days in 1994. In trying to improve claims processing, VA is allowing VAROs to make the changes they themselves deem necessary. VA has mandated that VAROs choose one of the models for reorganizing staff as a basis for their new claims processing structures. However, VAROs can modify the chosen model. VA disseminated the models to the VAROs in late November 1994. By January 1995, each VARO must submit a proposed claims processing structure for VA approval. In general, VA headquarters’ response to the Blue Ribbon Panel recommendations has been to amend policies to allow, but not require, VAROs to implement changes, such as using telephones in claims development or removing the requirement for review and approval of the decision on each claim. VA officials said that regional directors are in the best position to determine whether specific actions will work in their given situation. In their opinion, mandating specific actions nationwide without considering the diversity that exists among VAROs—such as size and local resources—would be counterproductive. Some VAROs may be reluctant to make some changes or may face difficulties in doing so. This reluctance could explain the slow progress many VAROs have made in implementing changes. Early in 1994, more than 3 years after the Secretary called for VAROs to make fundamental and innovative changes, only 35 of 58 VAROs responded positively to VA’s request for information on changes made. On average, the 35 VAROs made fewer than three changes, and some of those changes were minor. For example, one VARO simply displayed graphs showing claims processing goals and target dates in the claims processing work area. Some VAROs we visited were reluctant to implement changes that appear to have considerable advantages. Officials at several VAROs, for example, expressed concern about allowing claims processors to use telephones to contact veterans, although VA officials believe that such contact is helpful. One VARO official said he believed this would lead staff to use the telephones for personal business. An official at another VARO was concerned that staff would spend too much time “on hold” waiting for responses from institutions such as VA hospitals. 
Furthermore, VAROs that want to change may have difficulty doing so. For example, two VAROs we visited were limited physically in how much they could change. One had recently renovated its space and installed modular furniture, which limited its ability to lay out its space to accommodate work teams. The other had implemented teams but could not store their files in close proximity to the teams because the floor was not strong enough to support the weight of the files. (Colocating files is generally thought to increase the efficiency of teams and improve customer service.) Likewise, regional and headquarters officials noted that some VAROs may encounter physical limitations that would make it difficult to provide all claims processors access to telephones. When we discussed these VARO concerns with officials in VBA’s Compensation and Pension Service, they reiterated that these are the kinds of problems that necessitate flexibility: not all VAROs can implement all changes. They said, however, that in some cases they would negotiate with VARO officials to encourage implementation of specific initiatives, such as using telephones to request information. Some VAROs may not be fully aware of initiatives that have been implemented at other VAROs. Although VA headquarters disseminates information about regional initiatives at periodic headquarters-sponsored meetings of claims processing officials, much of the information sharing among VAROs is informal. There is no reliable mechanism by which VA either collects or disseminates complete information about regional experiences so that VAROs can learn from each other. Much of the information sharing results from informal networking. For example, at one VARO we visited, officials had learned of other VAROs’ examples through informal contacts. Officials at the one VARO took it upon themselves to travel to another to learn about the second VARO’s efforts and results. These informal methods do not guarantee complete information sharing. One official noted that VAROs may not voluntarily share information about initiatives. Likewise, VARO officials who do not make the effort to network may not learn of many initiatives. One area office director noted that VA headquarters needs to do a much better job of compiling and disseminating information about claims processing initiatives. The experience of one VARO demonstrates the usefulness of more formal mechanisms. Officials at that VARO said they learned of an initiative, which they subsequently implemented, during a teleconference the area office set up to discuss ways to reduce claims processing time. Recently, VA has tried to improve information dissemination. VA focused much of its September 1994 meeting of adjudication officers on new initiatives. Much of the discussion concerned new claims processing initiatives that some VAROs have implemented or that VA has proposed—including the new claims processing structures. However, VA’s ability to inform VAROs about initiatives is limited because VA headquarters does not have complete information about regional experiences, either the initiatives that have been tried or their effectiveness. The compensation and pension staff responsible for monitoring VAROs did not have a list showing all initiatives. That staff’s March 1994 data showed that 23 VAROs had implemented 50 initiatives, yet data obtained by the VBA reengineering task force showed that, as of January 1994, 35 VAROs had implemented 86 initiatives. 
Four VAROs, for example, had implemented some form of claims processing work teams on which the compensation and pension staff had no information. Also, at one VARO we visited, mail clerks processed all death notices received by mail instead of forwarding them to claims processing. This initiative reduced the workload of the claims processors and ensured timely termination of payments but was not included in the data of either the compensation and pension staff or the task force. VA’s current evaluation plans will not provide sufficient information for it to effectively assess VARO initiatives and guide future improvements in VARO operations. This is especially critical because information currently available about the effectiveness of initiatives has been inconclusive. Better evaluation could position VA to react quickly to unsatisfactory results and more effectively disseminate needed information among VAROs. In developing initiatives, VA relied on experience and judgment. The only empirical evidence about initiatives comes from the experience of the VAROs that have already implemented some of the initiatives. However, VA has not required VAROs to evaluate their initiatives and has not provided guidance to those wishing to do so. Not all VAROs have done evaluations, and those done have been inconclusive. An official of VBA’s Program Analysis and Evaluation staff told us that, according to his recent discussions with VARO officials, those officials want headquarters to provide this type of guidance. Some of the VAROs we visited performed weak evaluations. For example, analyses usually considered only the initiatives’ impact on overall processing time or backlog; they did not consider other possible impacts, such as improved communications with veterans. Similarly, some evaluations had technical flaws. One VARO compared the quality of processing for a prototype, team-based unit with that of its unit using the assembly-line approach. Although the comparison showed that the prototype unit was more accurate, the study’s statistical sampling methodology did not allow a valid comparison, raising questions about its conclusion. In other cases, VAROs experienced outcomes that were contradictory or could not clearly be explained by changed procedures. For example, two VAROs of similar size established similar types of specialized claims processing teams but had different results. For unexplained reasons, one’s processing times continued to increase while the other’s decreased. Likewise, where VAROs seemed to be improving, the reasons were unclear. VA identified four VAROs that had recently begun to meet some of the department’s claims processing goals: One used specialization and met processing goals; the other three are among VA’s smallest VAROs, and officials acknowledged that the three were among those that traditionally had the best processing times anyway. In fact, two of those VAROs had reported no changes in their processing structures and procedures. Data are also inconclusive because some initiatives may not have been in place long enough to determine their full impact. It is not clear how long evaluations should continue to accurately assess results. The importance of this issue is demonstrated by dramatically different actions involving three VAROs that have implemented claims processing work teams. Two VAROs disbanded their claims work teams after 7 months or less because processing times or backlog had not been reduced. 
In contrast, another VARO is continuing to use work teams even though, after nearly 2 years, its processing times and backlog have continued to increase. Additionally, some initiatives can only be implemented fully over the long term so their full impact cannot be evaluated in the short term. For example, the panel’s recommendations included assigning and training additional staff to the rating activity and certifying rating specialists. Revised training materials, performance standards, and a method for certifying rating specialists are not scheduled to be ready until June 1995; then, officials said, it could take 2 years to fully train staff. Therefore, although interim assessments can be made, a full assessment of these initiatives will take several years. VA headquarters plans to continue to routinely assess each VARO’s overall performance in the areas of timeliness, quality, and productivity using national data. Monitoring each VARO’s overall performance in this way is clearly a necessary step. VA needs to know how well regional initiatives, in total, are working. But overall outcome data alone are insufficient. Following its traditional monitoring and evaluation practices, headquarters will evaluate overall outcome data—such as total average time to process each type of claim—for each VARO, semiannually. Each VARO’s progress can be compared with its own past performance and measured against VA’s national goals. Headquarters staff also have a goal of making an on-site visit to each VARO every 2-1/2 years. Additionally, as part of ongoing oversight, area offices will continue their traditional monitoring of VARO operations, including review of outcome data. Using this approach, VA will know which VAROs are improving but will have little sense of what led to the changes or how to help VAROs that are not improving. To guide VAROs, VA will need insight into which initiatives work best under which circumstances and what factors lie behind or obstruct improvement. For example, VA could use information on the following:

How individual VAROs implemented their initiatives, to help VA interpret why VAROs implementing the same or similar initiatives get different results: For example, several VAROs have created a rating analyst technician position but are using that person differently and may obtain different results.

Interim and short-term outcomes, to help monitor progress and assess individual initiatives: Because some initiatives address only a part of the process, data related more directly to the initiative itself rather than overall outcomes may be more relevant. For example, for the rating analyst technician who screens claims, the more important measure might be backlogs at the rating board rather than overall backlogs.

A variety of factors that could be expected to affect outcomes: These factors might include staff turnover (implementing initiatives may actually increase staff turnover in the near term as job descriptions are changed), workload, and number of cases returned by the Board of Veterans’ Appeals for insufficient evidence.

When VA disseminated the new organizational models in November 1994, it mandated that VAROs conduct periodic assessments as part of implementing the models. VA did not, however, specify the nature or scope of those assessments or provide guidance on how they should be conducted.
In discussing with us the need for better evaluation of initiatives, officials in the Compensation and Pension Service expressed uncertainty about how to evaluate VARO initiatives to provide headquarters with sufficient information. Although some steps have been taken to determine what information should be collected, VA still needs to (1) determine what information is most critical to interpreting results and (2) develop a plan for obtaining and analyzing the data. VBA’s Program Analysis and Evaluation staff have recognized the need to develop performance measures that are specific to the local environment and the particular initiative. In June and July of 1994, the evaluation staff visited five VAROs to study their work teams and develop ideas for measuring the progress and success of various initiatives. The staff plan to use this information to make suggestions to senior VA management. (These suggestions will incorporate customer satisfaction considerations as required by the Government Performance and Results Act.) This work could be an important first step in developing the information needed to effectively oversee ongoing efforts to improve claims processing. At this point, however, management has not indicated what action it will take. Once VA determines the basic information needed, it can employ a variety of evaluation methods. Ideally, VA would use control groups, possibly setting up separate sections within VAROs, one or more using the revised structure and procedures and others not. Control groups would allow VA to more confidently determine whether changes resulted from the initiatives or from unrelated factors, such as workload or staff turnover. But this method is problematic. Some portion of VARO workload would have to continue to use the existing approach at a time when management sees change as critically needed. Also, VAROs would have to operate for some time using two processing structures, which could significantly strain operations. Though evaluation based on control groups is ideal, it is not absolutely necessary. When making management decisions in an organization as diverse as VA, it is not always possible to obtain the definitive information gained from control group methodology. Other evaluation approaches are acceptable. Various statistical methods, for example, would allow VA to compare change over time, using past data to project what the situation would have been—for example, average processing times—if no change in approach had been made and comparing it with the situation under the new approach. Alternatively, qualitative methods could, for example, provide detailed case study information for selected VAROs, focusing on the most important initiatives and choosing VAROs to obtain a mix of approaches and circumstances. Whatever the approach, either VARO staff or headquarters staff could develop the information. Given the urgent need for improving claims processing, the uncertainty about which initiatives will be most effective, and the extent to which some VAROs have already begun making changes, allowing regional flexibility has merit. VAROs can be expected to have different experiences with similar initiatives and therefore need some flexibility. However, if first efforts do not result in sufficient improvement, the VAROs and headquarters need to understand why and to have some basis for determining what other changes have a better chance of success.
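The statistical comparison described above (projecting from past data what average processing times would have been had no change been made, and comparing that projection with observed results) can be illustrated with a short sketch. The Python listing below fits a simple linear trend to hypothetical pre-change monthly averages and projects it over the post-change months. The figures, the linear-trend assumption, and the variable names are illustrative only and are not drawn from VA data; an actual analysis would also need to account for workload, seasonality, staff turnover, and the other factors discussed in this report.

# Minimal sketch: project the pre-change trend in average processing time
# forward and compare it with observed post-change values. All numbers are
# hypothetical.

from statistics import mean

# Hypothetical monthly average processing times (days) before an initiative.
pre_change = [198, 203, 207, 211, 215, 220, 224, 227]
# Hypothetical monthly averages observed after the initiative took effect.
post_change = [221, 218, 214, 209, 205, 202]

def linear_trend(values):
    # Ordinary least-squares slope and intercept over equally spaced months 0..n-1.
    n = len(values)
    xs = range(n)
    x_bar, y_bar = mean(xs), mean(values)
    slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, values))
             / sum((x - x_bar) ** 2 for x in xs))
    return slope, y_bar - slope * x_bar

slope, intercept = linear_trend(pre_change)

# Project what average processing times "would have been" had the
# pre-change trend simply continued through the post-change months.
projected = [intercept + slope * (len(pre_change) + i) for i in range(len(post_change))]

for month, (proj, actual) in enumerate(zip(projected, post_change), start=1):
    print(f"Post-change month {month}: projected {proj:6.1f} days, "
          f"observed {actual} days, difference {actual - proj:+6.1f}")

Even a rough comparison of this kind would indicate whether an observed improvement outpaces what the prior trend alone would have produced, a distinction that overall outcome data by themselves cannot make.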
VA needs information to gain meaningful insight into whether initiatives are working—including whether they are addressing the most significant causes of problems—and how they are affected by regional circumstances. Different results may reflect many factors, not only differences in the types of initiatives undertaken and in VARO size and resources but differences in motivation and commitment to improvement. Without meaningful information to interpret VARO outcome data, headquarters will be hard pressed to ensure improvements as time goes on. Valid VARO assessments of initiatives are critical. Equally important, VA headquarters must understand how results of at least the most significant initiatives were affected by individual VARO circumstances. This broader understanding will better enable VA to disseminate information to VAROs about the pros and cons of various initiatives, provide guidance about what changes to make, and, if necessary, direct VAROs to make specific changes. To better ensure improvement in VARO claims processing, we recommend that the VA Secretary direct the Under Secretary for Benefits to improve plans to evaluate the effectiveness of claims processing initiatives. The improved plans should provide both headquarters and VAROs sufficient information about the effect of initiatives to allow quick response if results are unsatisfactory and to implement even greater improvements if possible. Therefore, the plan should require VAROs to evaluate their major improvement initiatives and provide guidance on how to do so; identify which analytical methods and which data VA headquarters will use to evaluate the various initiatives and make judgments about what changes are most likely to improve claims processing under what circumstances; and describe how VA will disseminate to VAROs information on the experiences, good and bad, that VAROs have in implementing claims processing initiatives. In a letter dated December 13, 1994, commenting on a draft of this report, the Secretary of Veterans Affairs disagreed with our recommendation to develop and implement an evaluation plan. He indicated that VA has in place an evaluation process that includes assessment of performance indicators and that through that process VA reviews, monitors, guides, assesses, and exports initiatives of significance. The Secretary said that the process involves all levels of VBA—from headquarters, including the Compensation and Pension Service; the area offices; and the VAROs themselves. The Secretary also noted that VBA’s project to reorganize claims processing—the major focus of this study—had been in development, testing, and evaluation for at least 2 years. On November 29, 1994, VBA issued organization models to guide VAROs in the future. VA believes its current process—including analysis of outcome data and ongoing monitoring—along with the knowledge and judgment of VA staff, is sufficient to determine the most effective initiatives and provide guidance to VAROs that are not making sufficient progress. In response to the Secretary’s comments, we clarified our recommendation about an evaluation plan to recognize that VA has an evaluation process in place. We continue to believe, however, that the existing process is inadequate. We believe a more thorough evaluation is needed to enable VA to understand not only the outcomes but their causes and to effectively persuade—and, as appropriate, direct—VAROs to adopt the most promising changes. 
In support of the effectiveness of its improvement efforts to date, VA emphasized that data on average processing times began to show improvement in fiscal year 1995. VA said that processing times for several types of claims for the month of October 1994 were shorter than the times we report for fiscal year 1994 (ended September 1994). It is not clear, however, that these recent data are indicative of an improvement trend. More important, even if they do indicate a trend, VA’s current evaluation process does not allow VA to determine whether changes to the claims processing structure caused the improvement. Interpreting the October 1994 data as the beginning of an improvement trend is questionable because monthly average processing times fluctuate significantly. The national average monthly processing time for original disability compensation claims in fiscal year 1994 ranged from 198 to 227 days while, as we reported, the annual average for that year rose to 212 days from the 1993 average of 198 days. The problem is clearer when viewed at the VARO level. At one VARO, cited by officials as a leader in improving claims processing, the average monthly processing time for original compensation claims fluctuated during fiscal year 1994 from a low of 74 days to a high of 143 days; for 10 months of fiscal year 1994, this VARO’s average was lower than its October 1994 average. More important, whether these data indicate the beginning of a positive trend or not, the VA’s current evaluation process cannot explain with any certainty why these changes are occurring and cannot confidently point to characteristics of VAROs or specific models that have the highest probability of success. For example, VA officials told us that during the later part of fiscal year 1994 VAROs had focused on closing the oldest claims (those over 180 days old). Because, by definition, closing older claims increases average processing times, the reduction in October 1994 may not have resulted from any claims processing initiatives, but, instead, from the 1994 focus on older claims. VARO experiences also demonstrate the difficulty in interpreting outcome data. Although VA points out that its claims processing project has been in development and testing for 2 years, the outcome data—VBA’s key evaluation tool—are inconclusive about the effects of the models. For example, VAROs implementing similar initiatives achieved different results. In fact, the VAROs we visited that had the most experience with changed claims processing structures have not shown a trend toward improved processing times. In the New York VARO—which played a key role in VA’s testing and evaluation of one of the new organization models—data comparing the processing times of staff using the new model with the rest of the staff did not show the new model to be faster. The Secretary also raised a concern about the possible negative impact of our recommendation. He stated that VARO staff were continually seeking ways to improve processing and that it would be “unnecessary and would stifle creativity for all levels of management to know of and to control” each of the many changes until an evaluation showed them to have positive or negative impact. We agree that local creativity should be encouraged. We have not suggested waiting to implement changes in processing structures and procedures until evaluations prove them effective, nor have we suggested that every initiative be evaluated in every VARO. 
Our report specifically recognizes the urgent need for change in claims processing structures and that some initiatives may be more important than others. However, absent evaluation before widespread implementation, we believe VA should position itself to evaluate at least those initiatives it believes to be the most important, and to do so in a way that allows it to understand the impact different VARO circumstances have on initiatives’ effects. We are sending copies of this report to the Chairman, Senate Committee on Veterans’ Affairs, the Secretary of Veterans Affairs, and other interested parties. This work was performed under the direction of Ruth Ann Heck, Assistant Director. Other major contributors were Richard Wade, Steve Morris, Pamela Scott, and Charles Taylor. Please contact me on (202) 512-7101 if you have questions about this report. 1. Prepare and implement position descriptions to consolidate responsibility for control (i.e., inputting claims into the computer system), development, and award of claims. The consolidated position would be called a rating technician. 2. Create a rating activity responsible for control, development, rating, and authorization of claims requiring a rating. Compile and distribute models for the structure of consolidated rating activities containing both rating specialists and rating technicians. Require all VA regional offices (VARO) to submit for headquarters approval a locally designed plan to restructure their claims processing systems. 3. Elevate to the level of a war effort, the creation, testing, and implementation of the Claims Processing System. This system will be used (1) to help claims processors determine the exact evidence needed to support each claim and (2) to monitor the receipt of that evidence. (VA is developing this computer software package as part of its computer modernization program.) 4. Provide automated on-line access to reference materials (that is, regulations, policy and claims processing manuals, and so forth) through implementation of the Automated Reference Material System. (VA is developing this computer software package as part of its computer modernization program.) 5. Deploy manual development checklists for all aspects of claims processing. 6. Prepare a centralized training program for developing claims. 7. Finish the redesign of the application for disability compensation and pension benefits, and field test the redesigned application. 8. Design a new form to help veterans identify issues and evidence needed to support reopened claims and claims for reevaluation of service-connected disabilities. Convene focus groups to obtain feedback on the design of the new form, and field test the form. 9. Develop, field test, and implement a standard, national package of computer generated letters using input from all VA customers to clarify/improve communications between VA and its customers. 10. Change VA guidelines/procedures to allow claims processors to use other communication modes (telephone, facsimile machine, personal contact, pager, and E-mail). Use these other modes to supplement written communications between claims processors and claimants and other evidence sources. 11. Revise forms/systems to include claimant telephone numbers—both daytime and nighttime. 12. Expand the memoranda of understanding between the Veterans Benefits Administration (VBA) and the Veterans Health Administration (VHA) to include examination quality measures. (VHA completes medical examinations for VBA.) 13. 
Establish a reporting scheme to monitor the quality, local and national, of VHA examinations. 14. Establish physicians’ coordinators at VA headquarters, medical centers, and VAROs to improve the timeliness and quality of examinations. 15. Establish a joint VBA/VHA education and training effort concerning disability compensation and pension examinations. 16. Improve the automated medical information exchange (AMIE) examination process. (AMIE is a computer system through which VBA requests examinations and VHA reports the results.) 17. Transfer responsibility and associated resources for disability compensation and pension examinations from VHA to VBA. 18. Establish a high-level dialogue with the Social Security Administration (SSA) to communicate VA’s evidence and other needs. 19. Update/verify VBA procedural guidance on obtaining SSA records. 20. If possible, establish a VA/SSA computer link to obtain SSA medical records. 21. Expand the current agreement with the Department of the Army branches for obtaining service medical records to all military service. 22. Assign VA personnel to Department of Defense records centers to assist in obtaining service medical records and to perform liaison activities. 23. Change VBA procedures and forward the claims of separating military personnel to the VARO serving their home state immediately, rather than waiting to send claims from the VARO serving the state where the separating personnel were located. 24. Seek guidance from the environmental support group regarding their sources and capabilities. (The environmental support group is a Department of Defense organization that assists VA in adjudicating claims involving service-connected stress.) 25. Provide guidance on use of evidence sources other than the environmental support group for development of claims involving post traumatic stress syndrome. 26. Continue to educate VBA and VHA staff and veterans service organizations regarding developing claims involving post traumatic stress syndrome. 27. Revise VA regulations to allow acceptance of photocopied documents, rather than requiring certified documents. 28. Ensure that the veterans network design incorporates tracking of case status through the appeal process. (VA is developing the veterans network as part of its computer modernization program.) 29. Initiate national VA/Department of Defense dialogue concerning examinations given to separating military personnel to ensure that the examinations meet VA requirements. 30. Educate Department of Defense medical staff concerning requirements for VA examinations. 31. Provide personal computer processing capability for the rating staff to include standardized formats and glossaries. 32. Use specialization selectively to concentrate on certain categories of complex rating cases. 33. Expand and expedite centrally coordinated training for rating staff. 34. Develop formal training programs for rating staff, and require that the staff obtain certification for rating claims. 35. Develop centralized training for rating staff that utilizes videos, video- and teleconferencing, satellite, and interactive personal computer-based programs. 36. Conduct a special review of VA regulations, manuals, and policies to refine them. 37. Reallocate staff resources to the rating activity; and train staff in the areas of rating, development, and authorization. 38. Complete the evaluation of single-signature authority being tested. (This test eliminated the requirement that a second rating specialist review claims.) 39. 
Establish help teams wherein several rating specialists from one or more VAROs are temporarily assigned to a VARO with a large backlog of cases awaiting a rating. 40. Implement the veterans records control system as soon as possible. (VA is developing this computer software package as part of its computer modernization program.) 41. Develop, test, and implement the rating board automation system. (VA is developing this computer software package as part of its computer modernization program.) Placed all education claims files in one location for easier access and better control. Allowed veterans benefits counselors to execute simple adjudication claims processing tasks for education claims so that adjudicators could perform more complex adjudication tasks. Participated with another VARO in developing a computer software word processing package for preparing rating decision statements. Converted a traditional claims processing unit to a case management team in October 1993, but disbanded the team after about 4 months of operations because processing times and backlog had not decreased. In April 1994, reorganized all staff into specialized claims processing work teams—one to process claims requiring a rating decision and one to process claims that do not require a rating decision. Created a rating analyst technician position to screen each claim before it is sent to the rating board to ensure that the claim has been properly developed and is ready for action by the rating board. Allowed claims examiners to begin using telephones in lieu of letters to contact veterans and others to request evidence needed to expedite adjudication of a claim. Established no new initiatives. Established case management self-directed work teams in May 1993 to process 25 percent of the office’s workload. These work teams consolidated claims processing and veterans assistance functions and created 2 positions to perform tasks that had been performed by up to 10 individuals. Placed all claims processing work under self-directed work teams in August 1994. In June 1993, established claims processing work teams that included both adjudicators and veterans benefits counselors, but the functions of the individual team members were not changed. Created a rating analyst technician position to assist in the initial development of claims. Developed a check list that shows the evidence needed to support the different types of claims, with a goal of more fully developing claims. In November 1992, established claims processing work teams along the case management approach to process selected types of claims. In April 1994, reorganized the teams to process 50 percent of all types of claims. Allowed claims examiners to begin using telephones in lieu of letters to contact veterans and others to request evidence needed to expedite adjudication of a claim. Created a rating analyst technician position to screen each claim before it is sent to the rating board to ensure that the claim has been properly developed and is ready for action by the rating board. Implemented a practice of conditionally approving claims on the basis of photocopies of certified documents until certified copies are obtained. Tested a practice of finalizing claims without independent review by a second person. Developed a check list that shows the evidence needed to support the different types of claims, with a goal of more fully developing claims.
In early 1994, established two specialized claims processing teams—one to process claims requiring a rating decision and one to process claims that do not require a rating decision. The two teams were converted to case management claims processing work teams in June 1994. Modified VA’s computer system to permit tracking of claims files. | Pursuant to a congressional request, GAO reviewed the Department of Veterans Affairs' (VA) efforts to improve its claims processing operations, focusing on the effectiveness of planned changes to veterans affairs regional offices' (VARO) claims processing structures and procedures. GAO found that: (1) VA has taken steps to ensure that VARO implement the changes necessary to improve overall service to veterans; (2) VA needs to implement the Blue Ribbon Panel's recommendations to improve disability claims processing; (3) VA has developed several model claims processing structures designed to reorganize staff so that fewer resources are devoted to clerical functions; (4) the models will serve as a framework for implementing other initiatives such as improving claims folder management and the use of evidence received by telephone or fax; (5) VA is also developing regulations and training materials to encourage VARO to adopt these improvement initiatives; (6) VARO have been given significant flexibility to implement initiatives in ways they believe are appropriate; (7) VA may not have a sound basis for determining what additional changes need to be made for guiding future improvements because it has not developed adequate plans for evaluating the merits of the various initiatives; and (8) VA does not have a formal mechanism to disseminate information about the effectiveness of regional initiatives and other VARO experiences with these initiatives. |
The National Defense Strategy is the foundation for DOD’s direction to the military services on planning their respective force structures. This strategy calls for the U.S. armed forces to be able to simultaneously defend the homeland; conduct sustained, distributed counterterrorist operations; and deter aggression and assure allies in multiple regions through forward presence and engagement. If deterrence fails, U.S. forces should be able to defeat a regional adversary in a large-scale multi-phased campaign, and deny the objectives of—or impose unacceptable costs on—a second aggressor in another region. According to the Army’s force development regulation, the Army seeks to develop a balanced and affordable force structure that can meet the requirements of the National Military Strategy and defense planning guidance tasks. The Defense Planning Guidance operationalizes the National Defense Strategy and provides guidance to the services on their use of approved scenarios, among other things, which serve as their starting point for making force structure decisions and assessing risk. These classified scenarios are used to illustrate the missions articulated in the National Defense Strategy, including the need to defeat one regional adversary while deterring a second adversary in another region, homeland defense, and forward presence. Drawing from the scenarios approved in the Defense Planning Guidance for 2017 through 2021, the Army derived a set of planning scenarios, arrayed across a timeline, that reflect these missions. Congress authorizes the number of personnel the Army is able to have in its active, Army National Guard, and Army Reserve components respectively. The Secretary of the Army—in consultation with the Director of the Army National Guard and the Chief of the Army Reserve— approves how the Army will allocate that end strength within each of the Army’s components. Between fiscal year 2011 and fiscal year 2018, the Army’s planned end strength is projected to decline by 132,000 positions (12 percent), from about 1.11 million soldiers in fiscal year 2011 to 980,000 soldiers in fiscal year 2018, as shown in figure 1. By fiscal year 2018, the individual components expect to be at the following projected end strengths: active (450,000), Army National Guard (335,000), and Army Reserve (195,000). As a result, the reserve component—which includes both the Army National Guard and the U.S. Army Reserve—will make up 54 percent of the Army’s planned end strength starting in fiscal year 2018; a proportion that is comparable to the size and allocation of Army forces across its components prior to the September 11, 2001, terrorist attacks. The Army implements its force development processes to make decisions about how to allocate end strength that has been authorized for each of its components, among other things. Taking into account resource constraints, the five-phase process entails determining organizational and materiel requirements and translating those requirements into a planned force structure of units and associated personnel, as illustrated in figure 2. During the fourth phase—the determination of organizational authorizations—the Army undertakes its annual Total Army Analysis (TAA) process, during which it determines how it will allocate its end strength among its units and manage risk. 
The TAA process is envisioned to help the Army allocate its end strength among its enabler units—those units that deploy to support combat forces—after initial decisions about the size of combat forces, other types of Army formations, and key enablers are made. The Army's TAA regulation states that the Army will use force guidance, such as the defense planning guidance, to identify the combat unit structure that will be used as an input to TAA's analysis of the Army's enabler unit requirements. The Army also uses the results from its most recently concluded TAA as the starting point for the next TAA. For example, Army officials stated that the planned force structure documented in its October 2015 Army Structure Memorandum was an input for the Army's ongoing TAA, examining force structure for fiscal years 2019 through 2023. The Army Structure Memorandum documents the force structure approved by the Secretary of the Army for resourcing and is an output of the Army's TAA process. Army officials said that the Army concluded the quantitative analysis phase for this TAA in December 2015 and they expect that the Army will complete the qualitative analysis phase by June 2016. Army officials said that they have modified the TAA process substantially since the Army last issued its regulation and that an updated regulation that will cover TAA is pending final approval. Last updated in 1995, the Army's TAA regulation describes the objectives and procedures of the TAA process, which includes documenting the Army's total planned force structure and any unresourced unit requirements. Army officials said that the Army no longer documents unresourced unit requirements because senior leadership at the time the Army stopped tracking these requirements determined that it was not useful for force planning purposes. Additionally, the Army has expanded the inputs to its TAA process beyond those specified in its regulation to include other segments of its force structure and some enabler units that were not eligible for reduction or reallocation. For example, the Army has identified a minimum number of positions for its generating force—which includes units that enable the Army to train and safeguard the health of its soldiers—and during recent TAAs did not evaluate some types of enabler units for reduction or reallocation that were considered to be in high demand (such as its Patriot Battalions) and units that are considered to be critical to early phases of a major contingency (such as those that provide port opening capabilities). The Army prioritized retaining combat units, as well as other segments of its force structure, when planning to reduce its end strength to 980,000 soldiers and as a result will take proportionately more position reductions from its enabler units. Combat units are responsible for fighting enemy forces in a contested environment and include the Army's Brigade Combat Teams (Armored, Infantry, and Stryker) and combat aviation brigades. Enabler units provide support to the Army's combat units when they are deployed. They often provide critical support in early deployment (such as port opening), as well as for long-term sustainment (such as those that transport supplies or establish bases from which combat units can operate). Combat units are dependent on enabler units for long-term sustainment in theater and the Army generally deploys both types of units to meet operational demands. The Army prioritized retaining combat units and incorporated other considerations when planning to reduce its end strength to 980,000 soldiers.
Army officials said that the Army used its force planning process to evaluate how it can best implement planned end strength reductions. This process—which is intended to link strategy to force structure requirements given available resources—included robust modeling and incorporated senior leaders’ professional military judgement. The Army incorporated its priorities at the beginning of this process, which influenced the planned force structure that the Secretary of the Army ultimately approved. Foremost, the Army sought to retain as many combat units as possible so that it could better meet the missions specified in DOD’s defense planning guidance and the Army’s classified scenarios as well as to account for near-term uncertainty. Additionally, the Army determined it needed to maintain a minimum number of positions in its generating force and its transients, trainees, holdees, and students accounts, based on separate analyses. Lastly, the Army sought to minimize the disruption to Army National Guard capabilities and reserve component unit readiness that resulted from reductions. Generating Force: Army organizations whose primary mission is to generate and sustain the operating force, including the Army’s Training and Doctrine Command—which oversees the Army’s recruiting, training, and capability development efforts—and Army Medical Command—which provides health and medical care for Army personnel. Trainees, Transients, Holdees, and Students: Active component soldiers not assigned to units are counted as part of the Army’s end strength, separately from its operating force and generating force. Soldiers in these accounts include soldiers in training, cadets attending military academies, injured soldiers, or soldiers en route to a new permanent duty station. Retaining combat units. According to Army officials responsible for TAA, Army leaders determined that it was important that the Army retain as many combat units as possible when assessing how to implement end strength reductions. In 2013, the Secretary of Defense announced the conclusion of the department-wide Strategic Choices and Management Review. As part of this review, DOD examined ways to obtain cost savings by altering the Army’s future force structure. According to Army officials, the Secretary of Defense’s review had, at one point, considered whether the Army could reduce its end strength to 855,000, which would correspond with a force structure of 36 BCTs, including 18 in the regular Army and 18 in the reserve component. Army leaders, reacting to what they considered to be unacceptable reductions, commissioned analyses to determine the end strength and number of BCTs the Army needed to execute the missions specified in defense planning guidance. The analysis determined that the Army should retain a minimum of 52 BCTs, including 30 in the active component, in order to best meet the missions specified in defense planning guidance. Ultimately, Army senior leaders decided to retain 56 BCTs based in part on these analyses as well as their assessment of global events and the potential for increased demand for BCTs. In retaining 56 BCTs in its force structure, the Army took additional steps to redesign its force, reflecting its priority to retain combat capacity. Specifically, the Army plans to eliminate 17 BCTs from its force structure relative to its fiscal year 2011 force (a 23 percent reduction in the number of BCTs). 
However, because the Army decided to redesign its BCTs by increasing each brigade's composition from two maneuver battalions to three, the Army estimates that it will be able to retain 170 maneuver battalions in its force structure—a net reduction of 3 battalions compared to fiscal year 2011 (less than 2 percent), as shown in table 1. Maintain minimum number of positions in generating force units and the trainees, transients, holdees, and students accounts. According to Army officials responsible for TAA, the Army needs to maintain a minimum number of positions in the Army's generating force (in order to provide medical support and training to Army personnel) and its trainees, transients, holdees, and students accounts (in order to account for personnel that are not assigned to units). Specifically, the Army tasked the two largest organizations in its generating force (U.S. Army Medical Command and TRADOC) with evaluating their position requirements and concluded that the Army needs a minimum of 87,400 active component soldiers in the generating force for an end strength of 980,000 soldiers. Additionally, Army officials said that based on a review of historical levels, the Army assumed that 58,500 regular Army positions (13 percent of a 450,000 active component force) would be filled by trainees, transients, holdees, and students. Minimize the disruption to Army National Guard capabilities and reserve component unit readiness resulting from reductions. According to Army officials, the Army sought to minimize disruption to Army National Guard capabilities needed for state missions and reserve component unit readiness when implementing end strength reductions by relying on the components to develop recommendations for making those reductions. Army officials also told us that the reserve components have better visibility into their ability to recruit personnel into specific positions and into the potential impact that reductions would have on the Army National Guard's domestic missions. The Army plans to eliminate approximately 34,000 positions from its reserve component—of which nearly 27,000 will be from its non-combat formations. Army National Guard and Army Reserve officials agreed with the Army's assessment and said that they have developed their own processes for assessing where they can best reduce or reallocate positions within their respective components and still meet Army mission requirements. Given the focus on retaining combat units and the constraints senior leaders placed on changing the Army's generating force; its trainees, transients, holdees, and students accounts; and its reserve components, the Army will take proportionately more positions from its enabler units than from its combat units as it reduces end strength to 980,000 soldiers. Specifically, in fiscal year 2011 enabler unit positions constituted 42 percent of the Army's planned end strength (470,000 positions), but the Army intends for 44 percent of its reductions (58,000 positions) to come from its enablers. In contrast, the Army's combat units constitute 29 percent of the Army's end strength (319,000 positions), but will account for 22 percent of the planned reductions (29,000 positions). When evaluating enabler unit requirements, the Army focused its attention on those capabilities that were less utilized across a 13-year timeline covered by the Army's planning scenarios.
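The proportions cited above follow directly from the rounded position counts reported in this section. As an illustrative arithmetic check only (the fiscal year 2011 total is inferred from the 980,000-soldier end state plus the stated 132,000-position reduction), a short script can re-derive the percentages:

```python
# Re-derives the percentages cited above from the rounded position counts
# reported in this section (illustrative arithmetic only).

total_fy2011 = 1_112_000      # ~1.11 million soldiers; inferred from 980,000 + 132,000
total_reduction = 132_000     # planned reduction by fiscal year 2018
print(f"Overall reduction: {total_reduction / total_fy2011:.0%}")   # ~12%

segments = {
    # segment: (fiscal year 2011 positions, planned reduction in positions)
    "enabler units": (470_000, 58_000),
    "combat units": (319_000, 29_000),
}
for name, (positions, cut) in segments.items():
    print(f"{name}: {positions / total_fy2011:.0%} of end strength, "
          f"{cut / total_reduction:.0%} of planned reductions, "
          f"{cut / positions:.0%} of the segment's own positions cut")
# enabler units: 42% of end strength, 44% of planned reductions, 12% of its own positions cut
# combat units:  29% of end strength, 22% of planned reductions,  9% of its own positions cut
```

Comparing the last column makes the disproportionality concrete: enabler units give up a larger share of their own positions than combat units do.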
The Army did not consider reductions for capabilities it determined were critical, such as its Patriot and field artillery units, and reduced the size of or eliminated enabler units that were judged less critical, such as military police, transportation, chemical, and explosive ordnance disposal units. Determining the appropriate amount of enabler capacity has been a persistent problem for the Army. We issued several reports during the 2000s reviewing Army plans and efforts to redesign its combat force, an effort known as “modularity.” In those reports, we found that the Army persistently experienced shortfalls for both key enabler equipment and personnel as it restructured its combat units into brigade combat teams. Between 2005 and 2008 we made 20 recommendations addressing the Army’s challenges in creating a results-oriented plan as it transformed its force, developing realistic cost estimates, and completing a comprehensive assessment of the force as it was being implemented. For example, in 2006, we made 2 recommendations that the Army develop a plan to identify authorized and projected personnel and equipment levels and that it assess the risks associated with any shortfalls. The Army generally agreed with both recommendations but ultimately did not implement them. In our 2014 report, we found that the Army’s report to Congress assessing its implementation of modularity did not fully identify the risks of enabler shortfalls or report its mitigation strategies for those risks. Army officials told us that, based on senior leaders’ professional military judgment, concentrating reductions in enabler units is more acceptable than further reducing the Army’s combat units because combat unit shortfalls are more challenging to resolve than enabler unit shortfalls. Prior Army analysis showed that it would take a minimum of 32 months to build an Armored BCT and Army officials said that the Army cannot contract for combat capabilities in the event of a shortfall in BCTs. In contrast, officials said that some types of enabler units could be built in as few as 9 months. Additionally, a senior Army leader stated that the Army has successfully contracted for enabler capabilities during recent conflicts. The Army did not comprehensively assess mission risk (risk to the missions in DOD’s defense planning guidance) associated with its planned force structure because it did not assess mission risk for its enabler units. As a result, the Army was not well positioned to develop and evaluate mitigation strategies for unit shortfalls. In assessing its requirements for aviation brigades and BCTs, the Army determined where combat units in its planned force structure would be unable to meet mission requirements given current Army practices in deploying forces to meet mission demands. Notably, the analysis assumed that sufficient enabler capability would be available. Using the Army’s scenarios derived from defense planning guidance, the Army estimated how well different numbers of each type of unit would meet projected demands over time, which allowed it to compare how different aviation and BCT force structures would perform. As we reported in 2015, the Army analyzed the risk of its aviation brigades to meeting requirements based on the timing, scope and scale of missed demands, and made key decisions to reshape its aviation force structure based in part on this mission risk analysis. 
Risk Within the Context of Force Development
Mission risk: risk to the ability of the Army to meet the demands of the National Defense Strategy as operationalized in DOD's defense planning guidance. Generally, mission risk can be measured by sufficiency (the ability of supply to meet demand) and effectiveness (the availability of the best unit to accomplish a mission).
Risk to the force: risk to the health of the force caused by issues such as increased frequency of deployment with less time at home, or early and extended deployments. It is related but not equivalent to mission risk because it can impact morale and unit effectiveness.
The Army used the same type of analysis to compare different quantities of BCTs. The Army analyzed how many, and what types, of BCTs would be needed to meet the mission demands of certain scenarios within the defense planning guidance. The Army's analysis focused on four different BCT levels, including a high of 60 BCTs at 1.045 million soldiers and a low of 36 BCTs at 855,000 soldiers, which Army officials said was the level considered by the Strategic Choices and Management Review. As it did when analyzing aviation requirements, the Army assessed the timing, scope and scale of missed demands, given current DOD policies and practices governing the length and frequency of military deployments. The Army also assessed how it could mitigate risk to a major combat operation through strategies such as changing the deployment schedule or temporarily reassigning units away from other non-contingency missions in near-east Asia, the Middle East, or elsewhere. According to Army officials, the Army's analysis enabled senior leaders to assess risks and tradeoffs for this portion of the force in meeting these demands. The Army did not complete a risk to the force assessment for its combat units because officials prioritized retention of these combat units and as a result the Army's analysis was intended to determine the number and types of these units needed to meet mission requirements. In contrast to the mission risk assessment the Army conducted for its combat units (risk to the Army's ability to meet the missions in DOD's defense planning guidance), the Army assessed risk to the force for its enabler units in its most recent TAA (risk to the health of the Army's enabler units). Assessing risk to the force entails determining how frequently and for how long individual types of enabler units would need to deploy to meet as many demands as possible, given the previously identified combat force structure, and does not entail identifying missed mission demands or documenting unresourced unit requirements. The Army then determined the length of time at home for each type of enabler assessed, and compared the result with that for the Army as a whole, in order to determine the level of stress ("risk") on that type of unit. The Army's analysis necessitated making key assumptions about how enablers would be used, some of which differed from current DOD deployment practices. For example, the Army assumed active component enabler units could be deployed indefinitely, which may overstate their availability unless the Secretary of Defense authorizes indefinite operational deployment. Similarly, the Army assumed that it could deploy its reserve component enabler units more frequently than DOD's current policy allows.
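The risk-to-the-force comparison described above is, in essence, a deployed-time-versus-dwell-time calculation for each enabler unit type, measured against the force as a whole. The sketch below is a simplified illustration of that idea, not the Army's actual model; the unit types, month counts, and force-wide average are hypothetical placeholders chosen only to show the mechanics.

```python
# Simplified illustration of a "risk to the force" comparison: for each enabler
# unit type, compare time at home to time deployed over a planning horizon and
# flag types that are more stressed than the force as a whole.
# Unit types, month counts, and the force-wide average are hypothetical.

planning_horizon_months = 156   # e.g., a 13-year scenario timeline (13 x 12)

months_deployed = {             # months each unit type would spend deployed
    "engineer company (type A)": 96,
    "support maintenance company": 30,
    "medium truck company": 72,
}
force_wide_deployed_months = 60  # hypothetical Army-wide average

def dwell_per_deployed_month(deployed: int, horizon: int) -> float:
    """Months at home for every month deployed (lower means more stress)."""
    return (horizon - deployed) / deployed

benchmark = dwell_per_deployed_month(force_wide_deployed_months, planning_horizon_months)
for unit_type, deployed in months_deployed.items():
    ratio = dwell_per_deployed_month(deployed, planning_horizon_months)
    status = "more stressed than the force overall" if ratio < benchmark else "within the benchmark"
    print(f"{unit_type}: {ratio:.1f} months at home per month deployed ({status})")
```

As the section notes, this kind of comparison surfaces heavily used and lightly used unit types, but it does not by itself identify missed mission demands, which is the gap the report's recommendations address.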
Army officials told us that assessing risk to the force for its enablers is useful because the Army can identify the units it would use the most and those that it would use least. Based on its analyses of the frequency and length of deployments for each type of enabler unit assessed, the Army developed and prioritized options to mitigate risk to the enabler force. These options included adding structure to more utilized units and taking reductions from or divesting less-utilized enabler units. For example, the Army’s analyses showed that one type of engineer unit spent far less time at home than the Army’s other units during a contingency, and so the Army added an additional engineer unit to its structure to mitigate this stress. In contrast, the Army determined that it had excess support maintenance companies in its force structure and decided to eliminate 6 of these units. Additionally, the Army analyzed its enabler units to identify which units would be needed during the first 75 days of a conflict. Army officials used war plans to identify the minimum number of each type of enabler unit that would be needed to execute the war plan and then compared that requirement to the number of those units that would be available to meet those requirements. Army officials told us that assessing early deployment requirements is useful because the Army can assess whether it needs to move units from its reserve component to its active component in order to ensure that early deployment requirements can be met. Assessing risk to the force and early deployment requirements does not identify potential mission shortfalls in the enabler inventory, however, and these shortfalls could lead to missed mission demands. When the Army has conducted mission risk assessments for its enabler units outside of TAA it has been able to identify and mitigate risk. In May 2014, the TRADOC Analysis Center completed mission risk assessments for certain types of artillery units, air and missile defense, and truck units, among other units. These analyses showed that some types of units were unable to meet projected mission demands and provided information needed for the Army to develop mitigation strategies. For example, the Army’s assessment of artillery units identified unmitigated mission risk and determined that these units could meet only about 88 percent of demands during a major contingency. To address this risk, Army officials said that they recommended a change to the Army’s deployment practices for these units to allow one type of unit to be substituted for another. This change would enable these units to meet approximately 94 percent of mission demands during a major contingency. Similarly, in another example, the Army’s assessment of its truck units found that planned reductions could limit the Army’s ability to transport troops around the battlefield, among other risks. The Army intends to add 4 medium truck companies to its force structure by the end of fiscal year 2019 in part to address this risk. In its January 2016 report, the National Commission on the Future of the Army identified enabler capabilities that in its view needed further risk assessment and risk mitigation. As previously discussed, Army leaders decided to reduce enabler units they judged less critical, such as military police, transportation, chemical, and explosive ordnance disposal units, in part to preserve the Army’s combat force structure. 
However, the National Commission on the Future of the Army identified some of these same units as having shortfalls—including units that provide transportation, military police, and chemical capabilities. The Commission recommended that the Army complete a risk assessment and assess plans and associated costs of reducing or eliminating these shortfalls. Army guidance indicates that the Army’s TAA process should assess mission risk for its combat and enabler force structure, but the Army did not complete a mission risk assessment during its most recent TAA. In addition, its TAA process is not being implemented in a manner that would routinely prepare such an assessment. According to the Army’s force development regulation, the Army’s TAA process is intended to determine the requirements for both the Army’s combat and enabler force structure to meet the missions specified in defense planning guidance, document unresourced requirements, and analyze risk given resource constraints. When assessing risk, the Army’s risk management guidance states that the Army should identify conditions that create the potential for harmful events and analyze how such conditions could cause mission failure. Within this context, Army officials told us that the TAA process should assess mission risk by assessing how the Army’s combat and enabler force structure could lead to a failure to meet the missions specified in defense planning guidance. According to the Army’s risk management guidance, once the Army identifies mission risk, it then should analyze and prioritize strategies to mitigate identified risk. In the near term, although the Army’s guidance and risk management framework indicate the Army should complete a mission risk assessment for its combat and enabler force structure, the Army did not do so during its most recent TAA for its enabler units, instead assessing the risk to the force and early deployment requirements for these units. Army officials stated that they did not complete this assessment because the Army assessed how ongoing demands affected the health of the Army’s force and not the mission risk associated with shortfalls. However, our review found that the Army’s guidance does not require that the Army complete an assessment of the risk to force. Army officials are currently revising the Army regulation that documents its force development processes, but the draft does not currently include a requirement that the TAA process assess mission risk for the Army’s combat and enabler force structure. Without an assessment of the mission risk associated with the planned enabler force structure documented in the Army’s October 2015 Army Structure Memorandum, the Army has an incomplete understanding of the risks that may arise from the potential shortfalls in its enabler inventory. Accordingly, the Army is not well positioned to develop strategies to mitigate these risks. Army officials told us the next opportunity to complete this mission risk assessment and develop mitigation strategies would be as part of its ongoing TAA for fiscal years 2019 through 2023. Furthermore, the Army is required to complete TAA every year and as currently implemented its TAA process does not include the modeling and analyses needed to routinely prepare a mission risk assessment for its combat and enabler force structure. 
Army officials told us that they recognize a need to expand TAA to include mission risk assessments for a set of the Army’s enabler units, consider potential strategies to mitigate this risk, and implement such strategies; but have not revised TAA to include these elements. Without expanding the TAA process to routinely require a mission risk assessment for the Army’s combat and enabler force structure as part of future iterations of TAA, the Army will continue to not be well positioned to identify mission risk and develop mitigation strategies when making future force structure decisions. Facing end strength reductions, the Army made a decision to retain combat capabilities to provide maximum warfighting capability and flexibility. However, the Army’s planned force structure is based on an incomplete assessment of mission risk across its combat and enabler force structure because it did not assess this type of risk for its enabler units. As a result the Army did not comprehensively assess whether its force structure will be able to meet the missions specified in defense planning guidance and, in the absence of that risk assessment, was not well positioned to assess mitigation options when making recent force structure decisions. The Army has an opportunity to more fully assess its recommended force structure’s ability to meet mission demands, identify capability shortfalls, and develop mitigation strategies to address identified shortfalls before it implements its planned force structure. Unless the Army completes this type of assessment, it will lack reasonable assurance that it has identified and mitigated risk that will prevent it from executing the missions specified in defense planning guidance. Additionally, by completing a mission risk assessment for its planned force before completing its ongoing TAA for fiscal years 2019 through 2023, the Army will be better positioned to identify improvements to its TAA process so that it can complete such assessments on a recurring basis moving forward. Unless the Army changes its approach to routinely complete this type of risk assessment as part of its TAA process, it may not be able to identify and mitigate risk associated with changes to its force structure in the future. To identify and mitigate risk associated with the Army’s planned force structure and improve future decision making, we recommend that the Secretary of Defense direct the Secretary of the Army to take the following two actions: 1. Conduct a mission risk assessment of the Army’s planned enabler force structure and assess mitigation strategies for identified mission risk before Total Army Analysis for Fiscal Years 2019 through 2023 is concluded and implement those mitigation strategies as needed. 2. Expand the Army’s Total Army Analysis process to routinely require a mission risk assessment for the Army’s combat and enabler force structure and an assessment of mitigation strategies for identified risk prior to finalizing future force structure decisions. In written comments on a draft of this report, DOD concurred with both of our recommendations and identified the steps it plans to take to address them. DOD’s comments are printed in their entirety in appendix I. DOD also provided technical comments, which we incorporated into the report as appropriate. 
In response to our first recommendation that the Army conduct a mission risk assessment and assess mitigation strategies for its planned enabler force structure before Total Army Analysis for Fiscal Years 2019 through 2023 is concluded, the Army stated that it recognizes the need to conduct these types of assessments and that it has modified its Total Army Analysis process to include them. As we stated in our report, at the time of our review the Army had not yet incorporated these assessments into its TAA process. Should the Army complete these assessments prior to finalizing its ongoing TAA, it would be better positioned to identify and mitigate the risk associated with its planned enabler force structure and it will have taken the steps needed to satisfy our recommendation. With respect to our second recommendation that the Army expand its TAA process to routinely require a mission risk assessment and an assessment of mitigation strategies for its combat and enabler force structure, the Army stated that it recognizes the need to routinely conduct these types of assessments. The Army stated that it intends to formalize inclusion of these types of assessments in its process by publishing a Department of the Army pamphlet that is currently under development. Should the Army modify its guidance to require these assessments, and implement its TAA process in accordance with its revised guidance, the Army would be better positioned to identify mission risk and develop mitigation strategies when making force structure decisions. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, and the Secretary of the Army. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3489 or pendletonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. John H. Pendleton, (202) 512-3489 or Pendletonj@gao.gov. In addition to the contact named above, Kevin O’Neill, Assistant Director; Tracy Barnes; Katherine Blair; Erin Butkowski; Martin De Alteriis; Amie Lesser; Ricardo A. Marquez; Erik Wilkins-McKee; and Alex Winograd made key contributions to this report. | The Army plans to reduce its end strength to 980,000 active and reserve soldiers by fiscal year 2018, a reduction of nearly 12 percent since fiscal year 2011. According to the Army, this reduction will require reductions of both combat and supporting units. Army leaders reported that reducing the Army to such levels creates significant but manageable risk to executing the U.S. military strategy and that further reductions would result in unacceptable risk. The Senate report accompanying a bill for the National Defense Authorization Act for Fiscal Year 2015 included a provision that GAO examine the factors that the Army considers and uses when it determines the size and structure of its forces. This report (1) describes the Army's priorities and planned force structure reductions and (2) evaluates the extent to which the Army comprehensively assessed mission risk associated with its planned combat and enabler force structure. GAO examined the Army's force development regulations and process, DOD and Army guidance, and Army analysis and conclusions; and interviewed DOD and Army officials. 
The Army prioritized retaining combat units, such as brigade combat teams (BCT) and combat aviation brigades, when planning to reduce its end strength to 980,000 soldiers, and as a result plans to eliminate proportionately more positions from its support (or “enabler”) units, such as military police and transportation units. The Army's force planning process seeks to link strategy to force structure given available resources through quantitative and qualitative analyses. The Army completed analyses showing that it could reduce its BCTs from 73 in fiscal year 2011 to a minimum of 52 in fiscal year 2017; however, the Army plans to retain 56 BCTs. Moreover, by redesigning its combat units, the Army plans to retain 170 combat battalions (units that fight the enemy)—3 fewer battalions than in fiscal year 2011. Given the focus on retaining combat units, and senior Army leaders' assessment that shortfalls in combat units are more challenging to resolve than shortfalls in enabler units, the Army plans to reduce proportionately more positions from its enabler units than from its combat units. GAO found that the Army performed considerable analysis of its force structure requirements, but did not assess mission risk for its enabler units. Combat Forces: The Army's analysis of BCT requirements entailed an assessment of mission risk—risk resulting from units being unable to meet the missions specified in Department of Defense (DOD) planning guidance. The mission risk assessment used current Army deployment practices and assumed that sufficient enabler forces would be available to sustain combat units over a multi-year scenario. The result of this analysis, and a similar analysis of the Army's aviation brigades, showed that the Army's proposed combat force structure would be sufficient to meet most mission demands. Enabler Forces: The Army's analysis of its enabler units entailed an assessment of risk to the force—how frequently and for how long units need to deploy to meet as many demands as possible. Army officials said this analysis is useful because it enables the Army to identify the units it would use the most. However, the analysis overstated the availability of the Army's enabler units because it assumed they could deploy more frequently and for longer duration than DOD's policies allow. The Army did not identify enabler unit shortfalls, or the risk those shortfalls pose to meeting mission requirements. According to Army guidance, the Army's planning process should assess mission risk for both combat and enabler units. The Army did not complete this type of assessment for its enabler units during its most recent force planning process because the Army assessed the risk operational demands pose to the health of the Army's force, not mission risk. Without a mission risk assessment for both the Army's planned combat and enabler force structure, the Army has an incomplete understanding of mission risk and is not well-positioned to develop mitigation strategies. Furthermore, as currently implemented, its process does not include analyses needed for the Army to routinely prepare a mission risk assessment for both its combat and enabler force structure. Without expanding its force planning process to routinely require a mission risk assessment for the Army's combat and enabler force structure as part of future planning processes, the Army will not be well-positioned to comprehensively assess risk and develop mitigation strategies. 
GAO recommends that the Army complete a mission risk assessment of its planned enabler force structure, and revise its process to routinely require a mission risk assessment for its combat and enabler force structure. The Army agreed with GAO's recommendations. |
The IG Act created independent IG offices at 30 major departments and agencies with IGs appointed by the President, confirmed by the Senate, and who may be removed only by the President with advance notice to the Congress stating the reasons. (A listing of these 30 departments and agencies with presidential IG offices is provided in app. I.) In 1988, the IG Act was amended to establish additional IG offices located in 33 DFEs defined by the act. (A listing of the DFEs with IGs is provided in app. II.) Generally, the DFE IGs have the same authorities and responsibilities as those IGs originally established by the IG Act, but with the distinction that they are appointed and may be removed by their agency heads rather than by the President and are not subject to Senate confirmation. Although not in the scope of our review, there are 10 IGs established by other statutes with provisions similar to those in the IG Act. (A listing of the IGs established under other statutes is provided in app. III.) The IGs appointed by the President are generally located in the largest departments and agencies of the government; the DFEs have smaller budgets and their IGs have correspondingly smaller budgets and fewer staff. The presidentially appointed IGs and the DFE IGs reported to us total budget authority for fiscal year 2010 of about $2.2 billion with approximately 13,652 authorized full-time equivalent staff and 13,390 staff on board at the end of fiscal year 2010. The presidentially appointed IGs’ budget authority constituted about 84 percent (about $1.8 billion) of the total, and they had about 86 percent (11,564) of the total staff on board. The budgets of the DFE IGs made up about 16 percent (about $352 million) of the total budget authority for IGs, and they had about 14 percent (1,826) of the total staff on board at the end of fiscal year 2010. The IG Reform Act of 2008 (Reform Act) amended the IG Act by adding requirements related to IG independence and effectiveness. Among other provisions, the Reform Act requires the rate of basic pay of the IGs appointed by the President to be at a specified level, and for the DFE IGs, at or above a majority of other senior-level executives at their entities. The Reform Act also requires an IG to obtain legal advice from his or her own counsel or to obtain counsel from another IG’s office or from the Council of the Inspectors General on Integrity and Efficiency (IG Council). The IG Act also provides protections to the independence of the IGs while keeping both their agency heads and the Congress fully and currently informed about particularly flagrant problems and deficiencies within their agencies through a 7-day process specified by the act. In addition, the Dodd-Frank Act amended the IG Act with provisions to enhance the independence of IGs in DFEs with boards or commissions. Specifically, the Dodd-Frank Act changed who would be considered the head of the DFE for purposes of IG appointment, general supervision, and reporting under the IG Act. If the DFE has a board or commission the amendments would now require each of these IGs to report organizationally to the entire board or commission as the head of the DFE rather than an individual chairman. In addition, the Dodd-Frank Act requires the written concurrence of a two-thirds majority of the board or commission to remove an IG. Prior to this protection, most DFE IGs reported to, and were subject to removal by, the individual serving as head of the DFE. 
The Reform Act also included a provision intended to provide additional IG independence through the transparent reporting of their budgets. Specifically, the Reform Act requires the President’s budget submission to the Congress to have the IGs’ requested budget amounts identified separately within their respective agency budgets, along with any comments provided by the IGs on the sufficiency of their budgets. The American Recovery and Reinvestment Act of 2009 (Recovery Act) is one of the federal government’s key efforts to stimulate the economy in response to the most serious economic crisis since the Great Depression. The Recovery Act provided for IG oversight of the funds by creating the Recovery Accountability and Transparency Board (Recovery Board), which has an IG Chairman and 12 additional IG board members to prevent and detect fraud, waste, and abuse in the stimulus-funded programs. Altogether, there are 30 IGs involved with the oversight of Recovery Act funds. (A listing of the IGs providing oversight of Recovery Act funds is provided in app. IV). Also, the IG Act includes a provision addressing the qualifications and expertise of the IGs by specifying that each IG appointment is to be without regard to political affiliation and solely on the basis of integrity and demonstrated ability in accounting, auditing, financial analysis, law, management analysis, public administration, or investigation. The fields in which an IG can have experience are intended to be sufficiently diverse so that many qualified people could be considered, but also limited to areas relevant to the tasks considered necessary. The IG Act Amendments of 1988 created DFE IGs but did not specify that these IG appointments made by agency heads were to be without regard to political affiliation and on the basis of demonstrated ability in specified fields. The Reform Act addressed the differences in criteria for IG appointment by providing the same provisions for both the DFE IGs and the IGs appointed by the President. We addressed our reporting objectives through our summary of responses to a survey sent to the federal statutory IGs established by the IG Act regarding their activities for fiscal year 2010, and additional analysis. We obtained and analyzed survey responses from 62 IGs established by the IG Act: including 30 IGs who were appointed by the President and confirmed by the Senate, and 32 DFE IGs. We augmented the survey data with information obtained from prior GAO reports, the President’s budget submission to the Congress for fiscal year 2011, and the IGs’ semiannual reports to the Congress. For our discussion of the independence of the IGs, we summarized information from the responses to our survey questions about the implementation of selected provisions in the Reform Act, the IG Act, and the Dodd-Frank Act that are intended to enhance IG independence. Specifically, we asked all of the 62 IGs about the implementation of Reform Act provisions intended to keep IG pay and salaries at a specified level for IGs appointed by the President and consistent with other senior- level executives for the DFE IGs, and about the IGs’ sources of legal counsel. Our survey also obtained information about the extent to which the IGs found it necessary to communicate particularly flagrant problems to their agency heads and the Congress within 7 days as prescribed by the IG Act. These IG reports are commonly referred to as 7-day letters. 
Regarding the effect of Dodd-Frank Act provisions to enhance independence, we obtained the views of the 26 DFE IGs with boards or commissions on whether their independence was enhanced by these provisions designating their boards and commissions as DFE heads rather than individual chairmen, and the requirement for the concurrence of a two-thirds majority of the board or commission for removal of an IG. We also obtained information from the President’s budget submission to the Congress for fiscal year 2011, to determine whether the IG budget amounts were separately identified along with any comments by the IGs regarding the sufficiency of their budgets. To address the effectiveness of the IGs, we obtained information on the accomplishments of the IGs as reported to the IG Council for fiscal year 2009, in preparation for their annual report to the President. We also obtained information reported by the Recovery Board on its mission and accomplishments in providing oversight of Recovery Act funds. In addition, our survey questionnaire obtained information for fiscal year 2010 on management challenges identified by the IGs reporting under requirements of the Reports Consolidation Act of 2000. To identify the extent of oversight provided by the IGs, we summarized the reported management challenges to identify the major focus of these issues and obtained IG reports relevant to these issues provided by the IGs to our survey and from our review of the IGs’ semiannual reports to the Congress. To address the IGs’ qualifications and expertise we summarized the 62 IGs’ survey information provided on the background of each IG, including professional experience, academic degrees, and professional certifications obtained prior to being appointed to an IG position. We compared this information to the areas of demonstrated ability specified by the IG Act and summarized the number of IGs in each area. We conducted our work from November 2010 to September 2011 in accordance with all sections of GAO’s Quality Assurance Framework that are relevant to our objectives. The framework requires that we plan and perform the engagement to obtain sufficient and appropriate evidence to meet our stated objectives and to discuss any limitations in our work. We believe that the information and data obtained, and the analysis conducted, provide a reasonable basis for any findings and conclusions. We requested comments on a draft of this report from the IG Council. Written comments from the IG Council are reprinted in appendix V and summarized in the “Agency Comments” section of this report. We also received and incorporated as appropriate technical comments from several IG offices. Our survey obtained information from 62 federal IGs appointed under the IG Act on actions taken concerning legislative provisions in the Reform Act and the IG Act intended to enhance IG independence. 
The IGs reported:
pay that was at the specified levels required by the Reform Act for IGs appointed by the President and consistent with those of other senior-level officials as required for DFE IGs, thus helping to maintain IG independence and enhance their relative stature within their agencies by increasing their fixed compensation and eliminating discretionary compensation that could create a conflict of interest;
having access to independent legal counsel reporting to an IG instead of an agency management official, thus helping to ensure the independence of legal advice available to the IG; and
rarely using 7-day letters as a way to independently inform agency heads and the Congress of serious problems concerning agency operations because such issues were resolved without the need for such a letter.
We also surveyed the 26 DFE IGs affected by the Dodd-Frank Act provisions intended to enhance IG independence for those IGs reporting to boards or commissions. Just over half of these IGs responded that the change of agency head to the full board or commission increased their independence and most responded that the requirements for a two-thirds concurrence among the board or commission members prior to an IG's removal increased their independence. In addition, based on our review of the President's fiscal year 2011 budget submission to the Congress, the IGs' budget amounts were not always separately identified as required by the Reform Act. To the extent the IGs' budgets are separately identified, the added transparency of these amounts to the Congress can help increase IG independence. After we informed the IG Council about the results of our review concerning the IGs' budgets, they agreed to review and assess the matter. The Reform Act addressed the compensation of IGs and requires that IGs appointed by the President have their pay adjusted from Executive Schedule IV to Executive Schedule III plus 3 percent. In addition, the Reform Act requires that the grade, level, or rank designation for the DFE IGs be set at or above that of a majority of the senior-level executives of the agency, such as the general counsel, chief acquisition officer, chief information officer, chief financial officer, and the chief human capital officer at that agency. In addition, the DFE IG pay cannot be less than the average total compensation (including bonuses) of the senior-level executives at that agency calculated on an annual basis. Of the 30 IGs appointed by the President, 27 reported being at or even above the required pay level, with the remaining 3 IGs reporting that they were in acting positions and the requirement was not currently applicable to them. Of the 32 DFE IGs who responded to our survey, 29 reported that their pay and salaries were consistent with those of the senior-level executives of their agencies. Of the remaining 3 DFE IGs, 1 was newly established in fiscal year 2011 and had not yet determined an amount of pay consistent with senior-level executives, 1 IG reported having the correct salary but not the corresponding grade level, and 1 IG was in an acting capacity and reported the requirement was not currently applicable. The Reform Act also requires that each IG established by the IG Act have his or her own legal counsel or obtain necessary legal counsel from another IG office or from the IG Council.
In a March 1995 report, we reported that the IG community expressed concerns that IGs with attorneys located organizationally in their agencies' offices of general counsel would not always receive independent legal advice and that the IGs' own independence could be compromised. The results from our survey show that all the IGs established by the IG Act reported having access to a legal counsel that is organizationally independent, and none of the IGs rely on the general counsel offices of their agencies. For the 30 IGs appointed by the President, 29 employ their own legal counsel while 1 IG uses the legal services of another IG. All 32 DFE IGs who responded to our survey indicated that they obtain independent legal counsel, with 26 employing their own counsel, 5 using the legal counsels of other IGs' offices, and 1 using the legal resources of the IG Council. The IG Act provides a reporting tool that can protect the independence of the IGs who report particularly serious or flagrant problems, abuses, or deficiencies relating to the administration of programs or operations immediately to the agency head. The IG Act requires the agency head in turn to transmit the IG report, with the agency head's comments, to the appropriate committees or subcommittees of the Congress within 7 calendar days. We asked whether any of the 62 IGs we surveyed had used the 7-day letter at any time during fiscal years 2008, 2009, and 2010. Only one, a presidentially appointed IG, had used the 7-day letter during this time frame. Specifically, on May 6, 2009, the IG delivered a report to the acting head under the IG Act provisions for a 7-day letter, in which the IG disagreed with the terms of a settlement reached by the agency with a grantee. The acting head provided the IG's report to the chairmen of numerous congressional committees on May 12, 2009, which was within the 7-day time frame. The IG's report gained the interest of congressional members and the issues were resolved by the President. Generally, issues have been resolved more informally before getting to the point of using a 7-day letter. In 1999 we reported that no IGs had used the 7-day letter during the period of January 1990 through April 1998. In addition, we reported that a 10-year review of the IG Act by the House Committee on Government Operations in 1988 found that the IGs viewed the use of the 7-day letter as a last resort to attempt to force appropriate action by the agency. Provisions of the Dodd-Frank Act amending the IG Act are intended to provide an additional degree of independence to those IGs in DFEs with boards or commissions. Specifically, the Dodd-Frank Act provides that the head of the DFE with a board or commission will be the board or commission and, consequently, the IG appointment is no longer subject to the judgment of a single individual. In addition, the Dodd-Frank Act requires the written concurrence of two-thirds of the members of these DFE boards and commissions for the removal or transfer of their IGs. Twenty-six of the 33 DFE IGs are in DFEs with boards and commissions. Of these 26 DFE IGs, 14 reported that the act's provision designating the boards and commissions as the DFE heads enhances their independence, and 20 responded that their independence is enhanced by requiring a two-thirds majority for their removal.
A smaller number of affected IGs stated that these provisions had no effect on their independence, with 10 stating that the provision specifying the board or commission as the head had no effect and 5 reporting that the removal provision had no effect. One DFE IG affected by the provisions did not respond to these survey questions. Also, a former DFE IG stated that reporting to his commission would reduce his independence because the commission has both federal and state members. However, the current IG who took office during our review stated that the primary concern is how nonfederal members would exercise their authority over a federal IG. The Reform Act amended the IG Act to require that IG budget requests include certain information and be separately identified in the President’s budget submission to the Congress. In addition, along with the separately identified IG budgets, the IGs may include comments with respect to the budget if the amount of the IG budget submitted by the agency or the President would substantially inhibit the IG from performing the duties of the office. These budget provisions are intended to help ensure adequate funding and additional independence of IG budgets by providing the Congress with transparency into the funding of each agency’s IG while not interfering with the agency head’s or the President’s right to formulate and transmit their own budget amounts for the IG. The fiscal year 2011 budget included amounts for 28 of the 30 presidentially appointed IGs. One presidentially appointed IG office was newly established and not included in the full fiscal year 2011 budget process. However another IG subject to these requirements did not have a specific budget amount separately disclosed in the President’s budget. Of the 28 presidential IGs with budget amounts separately disclosed in the President’s budget, 1 included comments indicating that the IG’s fiscal year 2011 budget would substantially inhibit the IG from performing the duties of the office. Regarding the DFE IGs, the President’s budget had specific budget amounts for only 7 of the 33 DFE IGs. There were four newly established DFE IGs that were not part of the full fiscal year 2011 budget process. The President’s budget did not contain specified budget amounts for the 22 remaining DFE IGs subject to these requirements. We notified the IG Council that most of the DFE IGs and one presidentially appointed IG did not have separate budget amounts included in the President’s budget submission to the Congress. The IG Council has responded that it will review and assess this matter and, if necessary, work with congressional and administration officials to resolve this issue. The IGs’ effectiveness was reflected in a range of reported accomplishments, such as potential dollars to be saved by the government through the results of federal IG audits, investigations, and other reports. In addition, IG effectiveness was demonstrated in their efforts to help prevent fraud, waste, and abuse. For example, IGs in agencies receiving Recovery Act funds have reported providing oversight in the areas of establishing and maintaining controls to help ensure the funds are used properly. Also, the IGs’ effectiveness was demonstrated by their reporting on oversight of management challenges identified at their agencies. In their annual report to the President, the IGs established by the IG Act identified billions of dollars in savings and cost recoveries and other accomplishments resulting from their work in fiscal year 2009. 
As part of this report for fiscal year 2009, these IGs identified $43.3 billion in potential savings from audits and investigations and reported that over 5,900 criminal actions, 1,100 civil actions, 4,460 suspensions or debarments, and over 6,100 indictments resulted from their work. Based on this information, the potential dollar savings reported by these IGs represent a return on investment of approximately $18 for every IG dollar spent when compared to total IG fiscal year 2009 budget appropriations of $2.3 billion. In addition to measurable accomplishments, IGs also reported actions taken to prevent problems within their agencies, although these outcomes are more difficult to measure. For example, the IGs assisted in the oversight of expenditures authorized by the Recovery Act by reporting on preventive measures taken to help reduce the vulnerability of Recovery Act disbursements to fraud, waste, and abuse. The Recovery Act requires IG reviews of concerns raised by the public about investments of stimulus funds and provides IGs the authority to examine records and interview Recovery Act fund contractors and grantees. The Recovery Act established the Recovery Board, whose members include 12 IGs and an additional IG as the chair, to coordinate and conduct oversight of funds distributed under the act in order to prevent fraud, waste, and abuse. In addition, the board is charged under the act with establishing and maintaining a user-friendly website to foster greater accountability and transparency in the use of Recovery Act funds. To help prevent fraud and other potential wrongdoing, the IGs offered training to federal, state, and local employees, as well as contractors, private entities, and award recipients. The IGs’ training was intended to improve awareness of the legal and administrative requirements of the Recovery Act programs. As of June 2011, the Recovery Board reported that the IGs received over 7,000 complaints of wrongdoing associated with Recovery funds, opened over 1,500 investigations, and completed over 1,400 reviews of activities intended to improve the use of Recovery Act funds. In addition, the Recovery Board reported that IGs have provided over 2,000 training sessions to almost 139,000 individuals on the requirements of Recovery Act programs, how to prevent and report fraud, and how to manage grant and contract programs to meet legal and administrative requirements. The management challenges that federal agencies report annually in their performance and accountability reports, along with the relevant IG reports that address those challenges, are key to focusing IG oversight effectively. The identification of management challenges by the IGs began in 1997 when congressional leaders asked the IGs to identify the 10 most serious management problems in their respective agencies. This request began a yearly process that continues as a result of the Reports Consolidation Act of 2000. This act calls for executive agencies to include their IGs’ lists of significant management challenges in their annual performance and accountability reports to the President, the Office of Management and Budget, and the Congress. Not all agencies with IGs have requirements to report management challenges. Fifty-four of the IGs we surveyed reported having certain responsibilities for identifying management challenges in their agencies for fiscal year 2010.
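The approximately $18-to-$1 return-on-investment figure cited at the start of this section follows directly from the two amounts reported there; the following is a minimal worked check using only those reported figures (the report's text rounds the result down to $18):

\[
\frac{\$43.3\ \text{billion in reported potential savings}}{\$2.3\ \text{billion in fiscal year 2009 IG appropriations}} \approx 18.8,\ \text{or roughly}\ \$18\ \text{in potential savings per IG dollar spent.}
\]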
Through our survey, 27 of the IGs appointed by the President and 27 of the DFE IGs reported their agencies’ management challenges and provided examples of audit reports that addressed about 90 percent of the challenges they reported. The responses from the IGs appointed by the President show that most of the 203 management challenges they reported for fiscal year 2010 focused on issues specific to their agencies’ missions and performance management. (See fig. 1.) For example, the National Aeronautics and Space Administration’s IG reported that major changes to the direction of the nation’s space program present several management challenges, and the Department of Health and Human Services IG cited the management challenges associated with delivery of the nation’s health care. The other management challenges addressed by the IGs relate to information technology, procurement, financial management, and human resources. In addition, to provide oversight coverage of management challenges, the presidential IGs issued reports that addressed about 93 percent of the management challenges identified. These reports contained recommendations for addressing the weaknesses specified by the management challenges. For example, the Federal Deposit Insurance Corporation IG recommended strengthening specific controls over managing the closing process for failed financial institutions, which is a key aspect of FDIC’s mission regarding insured depository institutions. Also, the Social Security Administration IG identified transparency and accountability issues as an agency management challenge and provided report recommendations for improved performance in this area. The DFE IGs reported 124 management challenges for fiscal year 2010, with a focus on their agencies’ missions, information technology, and performance management. (See fig. 2.) For example, the Farm Credit Administration’s IG reported management challenges related to the safety, soundness, and mission accomplishment of the Farm Credit System. In addition, information technology, including information security, was often identified as a management challenge. For example, the Federal Maritime Commission’s IG and the National Labor Relations Board’s IG identified challenges in upgrading their agencies’ management systems. The performance management issues the DFE IGs identified as management challenges included timely implementation of IG recommendations by the Peace Corps and expanding public access at the National Archives and Records Administration. The management challenges in the “other” category included concerns over internal controls, improper payments, and the security of federal property. In addition, the DFE IGs issued reports that addressed almost 90 percent of the management challenges identified and contained recommendations for corrective actions. For example, the Farm Credit Administration IG assessed the agency’s readiness to take enforcement actions related to its mission. In another example, the Postal Service IG provided recommendations to improve the efficiency of postal operations related to performance management in sorting the mail. The 62 federal IGs responding to our survey reported information on their expertise and qualifications, including their backgrounds, academic degrees, and professional certifications. The IGs’ information showed a wide range of backgrounds, skills, and professional certifications relevant to their work consistent with the areas of demonstrated experience specified by the IG Act.
Figure 3 summarizes the background experiences of the 62 IGs who responded to our survey. Most of the IGs appointed by the President reported that they had a background in criminal justice, investigations, law enforcement, and public administration, while most of the DFE IGs had backgrounds in inspections and evaluations, criminal justice, investigations, law enforcement, accounting and auditing, and financial analysis. As summarized in figure 4, we also obtained information on the academic degrees obtained by the 62 IGs. Most of the IGs reported having degrees in areas that are relevant to performing in an IG position and in areas of demonstrated experience specified by the IG Act. To illustrate, 15 (about half of the IGs appointed by the President) had law degrees and 1 presidential IG had a degree in an accounting and auditing area. Twelve DFE IGs had law degrees and an equal number of DFE IGs had degrees in accounting and auditing related areas. Additional degrees were reported by both presidential and DFE IGs in areas of criminal justice, investigations, law enforcement; management analysis; and public administration. Other academic degrees reported by presidential and DFE IGs included mathematics, science, sociology, education, psychology, and English. With respect to professional certifications, 6 IGs appointed by the President reported having professional certifications and 28 DFE IGs reported they possessed at least one professional certification related to their IG responsibilities. For the presidential IGs, 2 were certified fraud examiners, 1 reported being a certified internal auditor, 1 reported being a certified government financial manager, and 2 had certifications in additional separate areas. Of the DFE IGs, 6 reported they are certified public accountants and 6 reported that they are certified internal auditors. Additional certifications reported by the DFE IGs include 6 certified government financial managers, 4 fraud examiners, 3 certified information systems auditors, and 7 with other certifications such as a certified government auditing professional, certified information security manager, certified information officer, and certified inspector general. (See fig. 5.) We received comments from the IG Council (reprinted in app. V) on September 13, 2011. The council commented that the draft provided useful information on the independence, activities, and accomplishments of the federal inspectors general and, as such, will contribute to a greater understanding of the work of the IGs in providing oversight to a wide range of government programs. We also received, and incorporated as appropriate, technical comments from several IG offices. We will send copies of this report to members of the IG Council, including the Office of Management and Budget’s Deputy Director for Management, the Chairperson, the Vice Chairperson, and the IGs who participated in our survey. We will also send copies of the report to the Chairman and the Ranking Member of the Senate Committee on Finance. If you have any questions or would like to discuss this report, please contact me at (202) 512-8486 or raglands@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. (Presidential IGs are those established by the IG Act of 1978, as amended, with appointment by the President and Senate confirmation; DFE IGs are those established by the IG Act of 1978, as amended, with appointment by the agency head.)
In addition, the Department of State IG provides oversight of the Broadcasting Board of Governors, which is a designated federal entity. The IG is a member of the Recovery Accountability and Transparency Board. In addition to the contact named above, Jackson W. Hufnagle, Assistant Director; Jacquelyn Hamilton; Werner F. Miranda Hernandez; Rebecca Shea; and Clarence A. Whitt made key contributions to this report.

The Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act) required GAO to report on the relative independence, effectiveness, and expertise of the inspectors general (IG) established by the IG Act of 1978, as amended (IG Act), including IGs appointed by the President with Senate confirmation and those appointed by their agency heads in designated federal entities (DFE). GAO was also required to report on the effect that provisions in the Dodd-Frank Act have on IG independence. The objectives of this report are to provide information as reported by the IGs on (1) the implementation of provisions intended to enhance their independence in the IG Reform Act of 2008 (Reform Act), the IG Act, and the Dodd-Frank Act; (2) their measures of effectiveness, including oversight of American Recovery and Reinvestment Act of 2009 (Recovery Act) funds; and (3) their expertise and qualifications in areas specified by the IG Act. GAO relied primarily on responses to its survey received from 62 IGs established by the IG Act. GAO also obtained information from the President's fiscal year 2011 budget, the IGs' annual report to the President for fiscal year 2009, and the IGs' semiannual reports to the Congress. GAO is not making any recommendations in this report. In comments on a draft of this report, the Council of the Inspectors General on Integrity and Efficiency (IG Council) stated that the report contributes to a greater understanding of the work of the IGs in providing oversight to a wide range of government programs. Information from the 62 IGs in offices established by the IG Act and GAO's analysis showed that the IGs had (1) taken actions to implement statutory provisions intended to enhance their independence; (2) reported billions of dollars in potential savings and other measures of effectiveness, including actions taken to help prevent fraud in the distribution of Recovery Act funds; and (3) a range of expertise and qualifications in the areas specified by the IG Act. With respect to independence, the IGs reported that (1) statutory provisions regarding IG compensation have been implemented where applicable, thereby maintaining the independence of their work and enhancing their relative stature within their agencies; (2) they had access to independent legal counsel who reports to an IG instead of an agency management official; (3) only one IG used a statutory provision for IGs to report particularly flagrant problems through the agency head to the Congress in 7 days because issues are generally resolved before the report is needed; and (4) of the affected 26 DFE IGs, 14 responded that their independence was enhanced by the Dodd-Frank Act provision that changed the designation of agency head from the chair to the entire board or commission, and 20 responded that their independence was enhanced by the provision requiring a two-thirds majority vote for IG removal.
Also, the IGs' budgets were not always identified separately in the President's fiscal year 2011 budget submission as required by the Reform Act provision intended to enhance the IGs' budget independence through transparent reporting. The IG Council is currently reviewing the matter. The IGs reported various measures of effectiveness. The IGs reported potential savings of about $43.3 billion resulting from their fiscal year 2009 audits and investigations. Given the IGs' fiscal year 2009 budget authority of about $2.3 billion, these potential savings represent about an $18 return on every dollar invested in the IGs. The IGs also reported about 5,900 criminal actions, 1,100 civil actions, 4,400 suspensions and debarments, and 6,100 indictments as a result of their work. In addition, the IGs reported enhanced effectiveness through additional actions taken to help prevent fraud in their agencies. For example, in fiscal year 2009 the Recovery Act created a requirement for the IGs to provide oversight of the economic stimulus funds disbursed by their agencies, and established the Recovery Accountability and Transparency Board of IG members to help carry out this oversight. As of June 2011, the IGs reported over 1,500 investigations opened, over 1,400 reviews completed, and over 2,000 training sessions provided to detect and prevent fraud, waste, abuse, and mismanagement in the use of Recovery Act funds. With respect to expertise, the IGs reported having backgrounds, academic degrees, and certifications in a range of areas related to their statutory responsibilities. The IGs reported backgrounds and academic degrees in accounting, auditing, financial analysis, law, management analysis, public administration, and investigations. In addition, the IGs, particularly the DFE IGs, reported numerous professional certifications related to their responsibilities. |
Under its 1957 statute, IAEA is authorized, among other things, to facilitate the peaceful uses of nuclear energy, including the production of electric power, by supplying materials, services, equipment and facilities to its member states, particularly considering the needs of the developing countries. About 90 countries receive technical assistance, mostly through over 1,000 projects in IAEA’s technical cooperation program. IAEA’s technical cooperation program funds projects in 10 major program areas, including agriculture, the development of member states’ commercial nuclear power programs, and nuclear safety. The average cost of a member state’s technical assistance project is about $60,000. IAEA provided about $800 million in technical assistance to its member states from 1958 through 1996, for equipment, expert services, training, and subcontracts (agreements between IAEA and a third party to provide services to IAEA member states). IAEA’s training activities include fellowships, scientific visits, and training courses. Egypt was the largest recipient of IAEA’s technical assistance overall. About 44 percent of the assistance was spent for equipment, and—from 1980 through 1996—about half of the funds were provided for assistance in three program areas—the application of isotopes and radiation in agriculture, general atomic energy development, and safety in nuclear energy. For 1997 through 1998, IAEA approved $154 million more in technical assistance for its member states. Technical assistance projects are approved by IAEA’s Board of Governors for a 2-year programming cycle, and member states are required to submit written project proposals to IAEA 1 year before the start of the programming cycle. The proposals are appraised for funding by IAEA staff and IAEA member states in terms of the projects’ technical and practical feasibility, national development priorities, and the projects’ long-term advantages to the recipient countries. Because IAEA’s full-scope safeguards, as embodied in the 1970 Treaty on the Non-Proliferation of Nuclear Weapons (NPT), emerged after IAEA was established, all IAEA member states in good standing are eligible for the same privileges, including receiving technical assistance. IAEA does not bar technical assistance for member states that do not have IAEA’s full-scope safeguards or are not parties to the NPT. For example, Pakistan, Israel, and Cuba receive IAEA’s technical assistance but do not have full-scope safeguards and are not parties to the NPT. U.S. participation in IAEA’s technical cooperation program is coordinated through an interagency group—the International Nuclear Technology Liaison Office—which is chaired by the Department of State and includes representatives from the Department of Energy (DOE), the Arms Control and Disarmament Agency (ACDA), and the Nuclear Regulatory Commission (NRC). The United States also maintains a presence at IAEA through the U.S. Mission to the United Nations System Organizations in Vienna, Austria. U.S. contractors from Argonne National Laboratory and the National Academy of Sciences/National Research Council support U.S. training and fellowship activities for the program. In addition to developing and coordinating U.S. policy towards IAEA’s technical cooperation program, the interagency group (1) proposes and recommends U.S.
support for specific projects—known as “footnote a” projects—only in IAEA member states that are parties to the NPT or other nuclear nonproliferation treaties; (2) selects courses and participants for U.S.-hosted IAEA training courses and places IAEA fellows at U.S. institutions, such as national laboratories and universities; (3) facilitates purchases of U.S. equipment on behalf of IAEA; (4) recommends U.S. experts and consultants to represent the United States at IAEA meetings, conferences, and symposia; and (5) recruits U.S. nationals to provide expert advice to IAEA and to staff IAEA’s operations. In addition, according to a U.S. Mission official, almost 200 U.S. nationals are employed by IAEA. U.S. officials and representatives of other IAEA major donor countries told us that the principal purpose of IAEA’s technical cooperation program is to help ensure that IAEA member states, many of whom are developing countries, support IAEA’s safeguards and the NPT. Most of the member states participate in IAEA primarily for the nuclear technical assistance it provides. In the past, the United States and other major donors raised concerns about the effectiveness and efficiency of the technical cooperation program. However, since 1992, IAEA has been implementing improvements to the program that the United States and other IAEA member states strongly support. While the United States and other IAEA major donor countries believe that applying safeguards is IAEA’s most important function, most developing countries believe that receiving technical assistance through the technical cooperation program is just as important, and they participate in IAEA primarily for the technical assistance it provides. State Department, ACDA, and NRC officials told us that the principal purpose of U.S. participation in IAEA’s technical cooperation program is to help ensure that IAEA member states, many of whom are developing countries, support IAEA’s nuclear safeguards system and the NPT. A State Department document noted that the United States regarded support for the technical cooperation program to developing countries as the “price tag” for safeguards. At an October 1996 meeting, IAEA’s Director General told us that the opportunity to receive technical assistance dissuades member states from engaging in the proliferation of nuclear weapons. Representatives from four IAEA major donor countries—Australia, Canada, Germany, and Japan—told us that they generally agree with U.S. views that technical assistance is necessary to ensure that developing countries support safeguards and the NPT. However, representatives from six developing countries that have benefited from IAEA’s technical assistance—Argentina, Brazil, China, India, Pakistan, and South Africa—told us that their countries participate in IAEA primarily because their participation enables them to receive technical assistance. According to the representatives from India, Pakistan, and South Africa, IAEA would simply become an international “policing” organization for monitoring compliance with safeguards if IAEA did not provide technical assistance. A U.S. Mission official stated that several member states, including India and Pakistan, would be likely to withdraw from IAEA if its technical assistance were severely scaled back.
According to IAEA officials, IAEA carries out its dual responsibilities and manages the competing interests of its member states by maintaining a balance in funding between providing technical assistance and ensuring compliance with safeguards. As figure 1 shows, in 1996, IAEA spent about $97 million on safeguards and about $89 million on technical assistance, accounting for approximately 30 percent and 27 percent, respectively, of IAEA’s total expenditures of about $325 million. (Figure 1 also shows about $67.2 million for other programs.) In the past, officials in the United States and other IAEA major donor countries had concerns about the effectiveness and efficiency of the technical cooperation program. A 1993 State Department cable stated that the United States had long been concerned that “footnote a” projects were devoid of significant technical, health, or socioeconomic benefit to the recipient country. Some of the evaluations that we reviewed indicated other deficiencies in the technical cooperation program. For example, an October 1993 special evaluation review of lessons learned from completed evaluation reviews noted that inadequate project plans and designs resulted in implementation problems and delays in 30 percent of the technical assistance projects reviewed from 1988 through 1993. Some of the negative effects IAEA cited that resulted from insufficient project planning included (1) approving a 2-year project without obtaining sufficient evidence about its feasibility; (2) planning research reactor activities that did not yield significant results because they were premature or ambitious in relation to local resources; and (3) conducting nuclear physics projects in Africa that lacked clear results and benefits to the recipient country. IAEA officials in the Department of Technical Cooperation told us they have not prepared a comprehensive report on the accomplishments of the program since its inception in 1958. Although IAEA has provided its member states with detailed descriptions of all of its technical assistance projects, it did not assess the success or failure of these projects in the past. According to the head of IAEA’s Department of Technical Cooperation’s Evaluation Section, evaluations of projects’ impact were not required because IAEA was focusing on the efficiency of projects’ implementation. Moreover, IAEA stated that in 1993, the technical cooperation program’s priorities shifted from implementing research and infrastructure-building activities efficiently to designing projects that have an impact on the end-user and provide nuclear science and technology activities that contribute to national development. IAEA noted that it is unrealistic to expect impact analyses of projects designed and implemented according to standards that did not embody measures of impact at the time. In the year 2000, IAEA plans to review the program’s performance against the criteria for success contained in IAEA’s strategy for technical cooperation. We reviewed 40 reports prepared by IAEA’s Department of Technical Cooperation’s Evaluation Section and summaries of four audits of the program prepared by IAEA’s Office of Internal Audit and Evaluation Support, which covered the period from 1985 through 1996, to determine whether they contained assessments of the program’s effectiveness.
We found that most of the 40 reports and audit summaries did not assess the impact of specific technical assistance projects, and no performance criteria had been established to help measure the success or failure of the projects. The evaluations and audits were also limited because insufficient travel funds generally precluded visits by IAEA staff to the recipient nations. We also reviewed the project files for four selected technical assistance projects in Iran, North Korea, Bulgaria, and Egypt that had been completed or canceled by IAEA. None of the project files we reviewed contained information on the project’s accomplishments. Our review of other project files was limited by IAEA’s policy on confidentiality, which regards information obtained by IAEA under a technical cooperation project as belonging to the country receiving the project. Under this policy, IAEA cannot divulge information about a project without the formal consent of the receiving country’s government. Since 1992, IAEA’s Deputy Director General for Technical Cooperation has taken steps to improve the effectiveness and efficiency of the technical cooperation program. For example, IAEA is establishing a system for measuring the quality and performance of some of its technical assistance projects. However, in 1996, IAEA’s Secretariat reported to the Board of Governors that outcomes were still clearly defined for only 25 percent of the 90 technical assistance projects whose results they had monitored from January through October 1996. The Evaluation Section of IAEA’s Department of Technical Cooperation is also helping the department to establish criteria for measuring the results of a project while planning it. The United States and other IAEA major donor countries strongly support IAEA’s efforts to improve the effectiveness and efficiency of the program, but U.S. officials are concerned that all of the improvements may not be fully implemented and made permanent in the 2 years before the term of the current Deputy Director General for Technical Cooperation ends. (App. I discusses the status of IAEA’s efforts to improve the effectiveness and efficiency of the technical cooperation program and the U.S. position on these actions.) According to a State Department cable describing the results of meetings held in September 1996, the major donors in attendance were highly supportive of IAEA’s initiatives to improve the program. The donors concluded that they were under increasing pressure at home to demonstrate that their countries’ contributions to IAEA were being well spent; supportive of the Deputy Director General for Technical Cooperation’s efforts to make the entire technical cooperation program more efficient and effective; concerned because the technical cooperation program had not set priorities or established a schedule for accomplishing improvements to the program; and concerned that IAEA’s Department of Technical Cooperation may not have the management skills required to accomplish these improvements. More recently, during the Board of Governors’ June 1997 meeting, the members highly praised IAEA’s efforts in carrying out its initiatives to improve the effectiveness and efficiency of the technical cooperation program. Most of the funding for IAEA’s technical cooperation program—about 70 percent—comes from voluntary contributions made by member states to IAEA’s technical cooperation fund. 
In 1996, the United States provided a total of about $99 million to IAEA, which consisted of about $63 million for IAEA’s regular budget and an additional voluntary contribution of $36 million. About $16 million of the $36 million U.S. voluntary contribution to IAEA went to the technical cooperation fund; this contribution represented about 32 percent of the fund, which totaled $49 million. The remainder of the U.S. voluntary contribution to IAEA—about $20 million—was spent on other forms of support for the technical cooperation program, including (1) U.S.-hosted IAEA training courses, (2) “footnote a” projects, (3) placements of IAEA fellows at U.S. institutions, (4) the services of U.S. experts, and (5) support for other IAEA programs, including safeguards. In 1996, the United States was the largest single supplier of equipment for the program. (App. II provides information on the sources of funding for IAEA’s technical assistance program from 1958 through 1996.) Because many IAEA member states are not paying into the technical cooperation fund, the United States and some other major donors are paying for a larger percentage of the fund than designated. IAEA has informally adopted a target funding level for member states’ contributions to the technical cooperation fund. IAEA’s data show that, as of August 1997, 52 of 124 member states had paid into the 1996 technical cooperation fund. The United States and Japan contributed the most, accounting for over half of the total payments to the fund. Seventy-two—or 58 percent—of the member states made no payments at all, yet 57 of these states received technical assistance. In a statement made to IAEA’s Board of Governors in June 1996, the U.S. Ambassador to the U.S. Mission to the United Nations System Organizations in Vienna, Austria, observed that the United States strongly believed that IAEA’s technical assistance should go only to those member states that support technical assistance fully, by paying their fair share. The Ambassador further noted that, because many IAEA member states are not paying their designated share of the technical cooperation fund, some member states, including the United States and Japan, are carrying the program financially, by paying more than their share. (App. III lists the IAEA member states and their shares of and payments to the 1996 technical cooperation fund.) The Ambassador of the Permanent Mission of the Republic of South Africa in Vienna, Austria, who chairs IAEA’s Informal Consultative Working Group on the Financing of Technical Assistance, told us that the group was designed to, among other things, encourage member states to increase their payments to the fund and to review whether member states that have not regularly paid into the fund should receive the benefits of IAEA’s technical assistance. The Ambassador from South Africa also told us that many of the developing countries that are members of IAEA believe that funding for the technical cooperation program should be predictable and assured and have proposed that the program be funded through member states’ contributions to IAEA’s regular budget. The major donors do not support this proposal because they believe that the program will be adequately funded if all member states provide financial support for the program. 
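For reference, the 1996 U.S. funding figures reported at the start of this section can be reconciled with a brief arithmetic check (all amounts as stated above; the percentage is rounded in the report's text):

\[
\$63\ \text{million (regular budget)} + \$36\ \text{million (voluntary)} = \$99\ \text{million};\qquad
\$16\ \text{million (fund)} + \$20\ \text{million (other support)} = \$36\ \text{million};\qquad
\frac{\$16\ \text{million}}{\$49\ \text{million fund total}} \approx 32.7\ \text{percent} \approx 32\ \text{percent}.
\]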
Representatives of the major recipients of IAEA’s technical assistance, including Argentina, China, Pakistan, and South Africa, told us that they are concerned that some major donors are considering reducing their voluntary contributions to IAEA, which fund the technical cooperation program. Canadian and German representatives told us that their countries may reduce their voluntary contributions to IAEA because of budget constraints. In a statement before the June 1997 meeting of IAEA’s Board of Governors, the Ambassador from South Africa said that the members of the working group were deeply divided on whether to put the technical cooperation fund into IAEA’s regular budget. She believed, however, that IAEA should take member states’ records of payment to the technical cooperation fund into account in deciding upon requests for technical assistance. IAEA officials stated that they took member states’ past payments to the fund into account when preparing for their 1997-98 program. U.S. officials do not systematically review or monitor all of IAEA’s technical assistance projects to ensure that IAEA’s activities do not conflict with U.S. nuclear nonproliferation and safety goals. We found that U.S. officials had sporadically reviewed projects in countries of concern to the United States. Several of IAEA’s technical assistance projects were related to a nuclear power plant under construction in Iran, to uranium prospecting and exploration in North Korea, and to a nuclear power plant whose construction has been suspended in Cuba. These are countries where the United States has concerns about nuclear proliferation and threats to nuclear safety. Moreover, since 1996, a portion of the funds for projects in countries of concern to the United States has come from U.S. voluntary contributions to IAEA. The Special Assistant to the U.S. Representative to IAEA in the State Department’s Bureau of Political-Military Affairs told us that the State Department, in conjunction with its contractor at the Argonne National Laboratory, is chiefly responsible for reviewing IAEA’s technical assistance projects for consistency with U.S. nonproliferation and safety goals before the projects are approved by IAEA’s Board of Governors. However, we found that although U.S. officials at the State Department and U.S. Mission have reviewed technical assistance projects in countries of concern to the United States sporadically, they have not done so systematically. Officials in IAEA’s Department of Technical Cooperation told us that they do coordinate with IAEA’s Department of Safeguards in reviewing projects that may involve the transfer of nuclear materials or other items with implications for proliferation. We also spoke with officials in IAEA’s Department of Safeguards to determine whether they systematically review all of IAEA’s technical assistance projects for consistency with nonproliferation goals. These IAEA officials told us that they do not. We found that the International Nuclear Technology Liaison Office—the interagency group that coordinates U.S. participation in the technical cooperation program and includes representatives from the State Department, DOE, ACDA, and NRC—and the U.S. contractor at Argonne National Laboratory focus their review on the “footnote a” projects that the United States may want to support with U.S. funds. The interagency group does not systematically review the majority of the technical assistance projects that are proposed for funding through IAEA’s technical cooperation fund. 
Neither does it regularly monitor ongoing projects. An Argonne official informed us that he reviews the list of “footnote a” projects to determine whether they have technical merit and should be funded by the United States; however, he is not responsible for assessing whether these or other projects funded through the technical cooperation fund are in keeping with U.S. nuclear nonproliferation and safety goals. State Department officials in the Bureau of International Organization Affairs told us that the Department did not have the resources to review all of the ongoing technical assistance projects and that U.S. oversight of these projects could be improved. ACDA, DOE, and U.S. Mission officials told us that the vast majority of IAEA’s technical assistance projects do not pose any concerns about nuclear proliferation because the assistance is provided in benign areas, such as medicine and agriculture, that do not involve transferring sensitive nuclear materials and technologies. IAEA’s Director General also told us that IAEA will not provide technical assistance in sensitive areas, such as the reprocessing and enrichment of nuclear material. State Department and U.S. Mission officials told us that if the United States does have concerns about specific technical assistance projects, it can informally raise its objections to IAEA’s Secretariat. However, U.S. officials we spoke with generally could not recall whether the United States had raised objections or had attempted to cancel any projects in the past several years. These U.S. officials also said that the United States does not have absolute control over the approval of specific technical assistance projects because decisions about approving and funding the projects are made collectively every 2 years at the December meeting of IAEA’s Board of Governors. A former U.S. Mission official told us that U.S. Mission representatives can meet informally with IAEA staff to discuss a preliminary list of technical assistance projects months before the Board of Governors’ meeting. The United States and other IAEA member states also have an opportunity to formally review the proposed list of technical assistance projects at IAEA’s General Conference in September and at the November meeting of the Technical Assistance and Cooperation Committee, the final meeting where member states can provide recommendations for the December Board of Governors’ meeting. U.S. officials told us that by the time the list of technical assistance projects reaches the Board of Governors, IAEA member states consider the projects to be approved. The U.S. officials added that it would be rare for representatives from the United States or any other member state to object formally to a specific technical assistance project during a meeting of IAEA’s Board of Governors. Of the total amount in technical assistance (about $800 million) that IAEA provided from 1958 through 1996 for its member states, about $52 million was spent on technical assistance for countries of concern to the United States, as defined by section 307(a) of the Foreign Assistance Act of 1961, as amended. These countries include Cuba, Libya, Iran, Myanmar (formerly Burma), Iraq, North Korea, and Syria. Iran and Cuba ranked 19th and 21st, respectively, among the 120 nations that received assistance over this period, receiving about 1.5 percent each of the total amount in technical assistance that IAEA provided. 
Projects IAEA provided for these countries involved nuclear training and techniques in medicine and agriculture, including establishing laboratory facilities for the production of radiopharmaceuticals in Iran and using nuclear techniques to improve the fertility of the soil in Iraq and the productivity of the livestock in Libya. (App. IV provides information on the dollar amounts and types of technical assistance that IAEA provided for its member states, including the countries of concern to the United States, from 1958 through 1996.) Although IAEA provides most of its technical assistance in areas that do not generally pose concerns about nuclear proliferation, our review of projects in countries of concern to the United States identified three cases in which IAEA provided technical assistance to countries where the United States has concerns about nuclear proliferation and threats to nuclear safety. A discussion of these three cases follows. The United States strongly opposes the sale of any nuclear-related technology to Iran, including the sale of Russian civilian reactor technology, because the United States believes that any nuclear technology and training could help Iran advance its nuclear weapons program. At an April 1997 hearing on concerns about proliferation associated with Iran, held before the Committee on Foreign Relations, Subcommittee on Near Eastern and South Asian Affairs, the former director of the Central Intelligence Agency stated that through the operation of the Bushehr reactor, the Iranians will develop substantial expertise that will be relevant to the development of nuclear weapons. For 1995 through 1999, IAEA has budgeted about $1.3 million for three ongoing technical assistance projects for the Bushehr nuclear power plant under construction in Iran. As of May 1997, about $250,000 of this amount had been spent for two of these projects. According to IAEA’s project summaries for 1997 through 1998, the three projects are (1) developing a nuclear regulatory infrastructure by training personnel in nuclear safety assessment; (2) establishing an independent multipurpose center that will provide emergency response services, train nuclear regulators, and conduct accident analyses in preparation for licensing the plant; and (3) building the capability of the nuclear technology center in Iran to support the Bushehr plant. (See app. V for more details on the assistance IAEA is providing to Iran for the Bushehr nuclear power plant.) IAEA also spent about $906,000 more for three recently completed technical assistance projects for the Bushehr plant in Iran. According to IAEA’s status reports, the objectives of these projects were (1) to increase the capacity of the Atomic Energy Organization of Iran for evaluating nuclear power plant bids and to develop a regulatory infrastructure and policy; (2) to assist in assessing the status of the Bushehr plant before construction resumed, including advising on nuclear safety criteria for licensing and assisting in developing a national infrastructure for work on the plant’s construction; and (3) to assist in assembling and installing a radioactive waste incinerator for the plant. Under these projects, IAEA has sent experts on numerous missions to conduct safety reviews of the Bushehr plant and has provided equipment, such as computer systems. According to IAEA documents, IAEA believes that this assistance made a valuable contribution to the establishment of an infrastructure for Iran’s nuclear power program.
In addition, IAEA cited an on-site assessment of the reactor building and components by Russian contractors as a critical element in the decision to complete the plant. We asked the State Department’s Deputy Assistant Secretary for Nonproliferation for his views on the technical assistance that IAEA has provided for Iran’s Bushehr nuclear power plant. According to his representative in the Bureau of Political-Military Affairs, the Special Assistant to the U.S. Representative to IAEA, the United States, as a general rule, opposes nuclear cooperation with Iran and the State Department would rather not see IAEA provide technical assistance for Iran’s Bushehr nuclear power plant. The State Department official also told us that the United States had informally raised concerns to IAEA about its provision of technical assistance to the Bushehr nuclear power plant. In March 1994, Senator Jesse Helms sent a letter to the President stating his concerns about IAEA’s providing technical assistance for uranium exploration in North Korea at a time when the country was suspected of developing a nuclear weapons program. According to an April 1994 letter to IAEA’s Director General from the U.S. Ambassador to the U.S. Mission, IAEA’s Director General had earlier assured U.S. congressional representatives that IAEA had suspended its technical assistance for North Korea because North Korea was in violation of its obligations under the NPT for failing to comply with IAEA’s safeguards. The U.S. Ambassador to the U.S. Mission stated that he was unaware that several technical assistance projects for North Korea were still ongoing or had recently begun. At the June 1994 meeting of the Board of Governors, the U.S. delegation strongly recommended that IAEA’s Director General suspend the provision of technical assistance to North Korea for all activities related to nuclear material, fuel cycle, and nuclear industrial applications until concerns about North Korea’s compliance with IAEA’s safeguards had been resolved. North Korea withdrew from IAEA in June 1994, and its technical assistance projects were canceled. From 1987 through 1994, IAEA spent about $396,000 in technical assistance for two projects on uranium prospecting and exploration in North Korea. According to IAEA’s April 1997 project status reports, the objectives of these projects were (1) to enable North Korea to better assess the potential of its nuclear raw materials in view of its increasing commitment to nuclear power and (2) to provide support for North Korea’s uranium exploration program. Under the uranium prospecting project, which was completed in 1994, the status report shows that IAEA contributed a considerable amount of uranium exploration equipment to North Korea, as well as a microcomputer and software for data processing. IAEA spent more than one-third of the $87,000 budgeted for the follow-on project on uranium exploration before the project was canceled following North Korea’s withdrawal from IAEA. In March 1997, when we issued our report on IAEA’s nuclear technical assistance for Cuba, including IAEA’s technical assistance to the partially completed nuclear power plant, the State Department’s Deputy Assistant Secretary for Nonproliferation visited IAEA’s Deputy Director General for Technical Cooperation to raise concerns about IAEA’s technical assistance projects for the nuclear power plant. The Deputy Assistant Secretary noted that strong U.S. 
support for IAEA’s technical cooperation program could be endangered by perceptions that IAEA is supporting Cuban plans to build an unsafe reactor. He also told IAEA’s Deputy Director General for Technical Cooperation that the United States found it hard to justify IAEA’s provision of assistance to Cuba’s nuclear power plant for quality assurance and licensing when, because of financial constraints, it was unlikely that the plant would be completed. However, as of June 1997, IAEA was still conducting these two projects in licensing and quality assurance for the Cuban plant. In our March 1997 report, we noted that, from 1981 through 1993, the United States was required, under section 307(a) of the Foreign Assistance Act of 1961 and related appropriations provisions, to withhold a proportionate share of its voluntary contribution to the technical cooperation fund for Cuba, Libya, Iran, and the Palestine Liberation Organization because the fund provided assistance to these entities. The United States withheld about 25 percent of its voluntary contribution to the fund for these entities. From 1981 through 1995, the State Department withheld a total of over $4 million. State Department officials told us they believe that the withholding was primarily a symbolic gesture that had no practical impact on the total amount of technical assistance that IAEA provided to these countries. On April 30, 1994, the Foreign Assistance Act was amended, and Myanmar (formerly Burma), Iraq, North Korea, and Syria were added to the list of entities from which U.S. funds for certain programs sponsored by international organizations were withheld. At the same time, IAEA was exempted from the withholding requirement. Consequently, as of 1994, the United States was no longer required to withhold a portion of its voluntary contribution to IAEA’s technical cooperation fund for any of these entities. However, State Department officials told us that they misinterpreted the act and continued to withhold funds in 1994 and 1995. Beginning in 1996, the State Department discontinued withholding any of the U.S. voluntary contribution to the fund. The United States and other IAEA major donor countries have had concerns about the effectiveness and efficiency of the technical cooperation program. However, IAEA has taken steps to improve the effectiveness and efficiency of the technical cooperation program and the measurement of the program’s performance. The United States and others strongly support these initiatives, but concerns remain about the sustainability of these improvements. The United States is paying for more than its designated share of the technical cooperation fund because many member states are not paying into the fund. Yet many of these states are receiving the benefits of IAEA’s technical assistance. This is contrary to the State Department’s position that all IAEA member states, particularly those that receive technical assistance, should provide financial support for the program. Although U.S. officials are sporadically reviewing technical assistance projects in countries of concern to the United States, they are neither systematically reviewing technical assistance projects before their approval nor regularly monitoring ongoing technical assistance projects. Without a systematic review, U.S. officials may be unaware of specific instances in which IAEA’s assistance could raise concerns for the United States about nuclear proliferation and threats to nuclear safety. 
Most of the assistance that IAEA provides is not considered to be sensitive. However, in several cases, the technical assistance that IAEA has provided is contrary to U.S. policy goals. Moreover, since 1996, a portion of the U.S. funding has supported technical assistance projects that will ultimately benefit nuclear programs, training, and techniques in countries of concern to the United States, including Iran and Cuba. To assist the Congress in making future decisions about the continued U.S. funding of IAEA’s technical cooperation program, the Congress may wish to require that the Secretary of State periodically report to it on any inconsistency between IAEA’s technical assistance projects and U.S. nuclear nonproliferation and safety goals. If the Congress wishes to make known that the United States does not support IAEA’s technical assistance projects in countries of concern, as defined by section 307(a) of the Foreign Assistance Act of 1961, as amended, it could explicitly require that the State Department withhold a proportional share of its voluntary funds to IAEA that would otherwise go to these countries. We recommend that the Secretary of State direct the U.S. interagency group on technical assistance, in consultation with the U.S. representative to IAEA, to systematically review all proposed technical assistance projects in countries of concern, as covered by section 307(a) of the Foreign Assistance Act of 1961, as amended, before the projects are approved by IAEA’s Board of Governors, to determine whether the proposed projects are consistent with U.S. nuclear nonproliferation and safety goals. If U.S. officials find that any projects are inconsistent with these goals, we recommend that the U.S. representative to IAEA make the U.S. objections known to IAEA and monitor the projects in these countries. We provided copies of a draft of this report to the Department of State for review and comment. The Department obtained and coordinated comments from Argonne National Laboratory; ACDA; DOE; NRC; the U.S. Mission to the United Nations System Organizations in Vienna, Austria; and IAEA. On August 1, 1997, we met with officials from the Department of State—including the Deputy Director, Office of Technical Specialized Agencies, Bureau of International Organization Affairs—and from the Department of Energy— including a Foreign Affairs Specialist in the Office of Nonproliferation and National Security. The agencies provided clarifying information and technical corrections, which we incorporated into the report. The agencies generally agreed with the facts as presented in the report and made no comments on our recommendations. They did, however, express one concern about our matters for congressional consideration. Specifically, they suggested that withholding a part of the U.S. voluntary contribution to IAEA that is proportional to all of the assistance that IAEA provides to Cuba, North Korea, and other countries of concern would be seen as a politicization of the technical assistance process that could undercut U.S. nonproliferation objectives. The agencies added that they do not object to IAEA’s providing technical assistance to countries of concern in the areas of nuclear safety, medicine and agriculture. We cannot speculate on how others might view such a withholding requirement. 
However, as discussed in the report, the United States did, from 1981 through 1995, withhold a portion of its voluntary contribution to IAEA, amounting to over $4 million, for technical assistance for countries of concern to the United States. IAEA was exempted from the withholding requirement in 1994, although the State Department continued to withhold funds in 1994 and 1995. Our report also notes the recent introduction into the Congress of a bill proposing that the United States withhold a proportional share of its funds for IAEA’s programs or projects in Cuba. In addition, the agencies said that IAEA’s technical cooperation program, in general, has strongly supported U.S. nuclear safety policy objectives, most notably in Central and Eastern Europe and in the Newly Independent States that operate unsafe Soviet-designed reactors. The agencies further observed that the United States continues to support IAEA’s nuclear safety efforts. In appendix IV, we acknowledge IAEA’s contribution to nuclear safety, noting that from 1958 through 1996, IAEA spent about 16 percent of its technical assistance on safety in nuclear energy. We discussed U.S. participation in IAEA’s technical cooperation program with officials of and gathered data from the Department of State; DOE; ACDA; NRC; Argonne National Laboratory; and the National Academy of Sciences/National Research Council in Washington, D.C., as well as from the U.S. Mission to the United Nations System Organizations and IAEA in Vienna, Austria. We met with IAEA’s Director General; Deputy Directors General for Administration, Research and Isotopes, Nuclear Energy, Nuclear Safety, and Technical Cooperation; the Principal Officer for the Deputy Director General for Safeguards; a Senior Legal Officer in the Department of Administration; and other staff. We reviewed program files at the Department of State and at the U.S. Mission to the United Nations System Organizations in Vienna, Austria. We gathered financial and programmatic data from IAEA on its technical cooperation for the period from 1958, when the program began, until 1996. Programmatic data for the entire period were not always available from IAEA. We did not independently verify the quality and accuracy of IAEA’s data. We also met in Vienna, Austria, with representatives from four of the member states that are major financial donors to the technical cooperation program and six of the states that receive extensive technical assistance or represent the views of the developing countries. The four major donors were Japan, Australia, Canada, and Germany; the six major recipient and/or developing countries were Argentina, Brazil, China, India, Pakistan, and South Africa. We also reviewed 40 reports on various aspects of the technical cooperation program that were prepared by IAEA’s Department of Technical Cooperation’s Evaluation Section; summaries of four audits of the program prepared by IAEA’s Office of Internal Audit and Evaluation Support that covered the period from 1985 through 1996; and four project files for selected technical assistance projects in Iran, North Korea, Bulgaria, and Egypt that were completed or canceled. We reviewed IAEA’s data on the technical assistance projects provided for countries of concern to the United States to determine whether IAEA’s assistance conflicted with U.S. nuclear nonproliferation and safety goals. We observed two meetings of the International Nuclear Technology Liaison Office (the U.S. interagency group that coordinates U.S. 
participation in IAEA’s technical cooperation program), the November 1996 meeting of the Technical Assistance and Cooperation Committee, and the December 1996 meeting of IAEA’s Board of Governors in Vienna, Austria. We performed our work from July 1996 through August 1997 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretaries of State and Energy, the Chairman of the Nuclear Regulatory Commission, the Director of the Arms Control and Disarmament Agency, and other interested parties. We will also make copies available to others on request. Please call me at (202) 512-3841 if you or your staff have any questions. Major contributors to this report are listed in appendix VI. In 1992, the International Atomic Energy Agency’s (IAEA) Deputy Director General for Technical Cooperation embarked on a series of improvements so that the technical cooperation program would better meet the needs of its recipients and its impact would be measurable. The United States and other IAEA member states strongly support the Deputy Director General’s efforts to improve the program. When IAEA’s current Deputy Director General for Technical Cooperation began his term in 1992, he established a new strategy for improving the effectiveness and efficiency of the program. According to an IAEA paper, the goal of the new strategy is to develop partnerships between IAEA and its member states so that technical assistance produces a “measurable socio-economic impact by directly contributing in a cost-efficient manner to the achievement of the highest development priority of the country.” Important components of the strategy are “model” projects that are expected to respond to a real need of the recipient country, produce a significant economic or social impact by looking beyond the immediate recipient of assistance to the final end user, demonstrate sustainability after the project’s completion through a strong commitment from the recipient country, require detailed workplans and objective performance indicators, and demonstrate an indispensable role for nuclear technology with distinct advantages over other approaches. Since 1994, IAEA has initiated nearly 60 model projects, including those under the 1997-98 technical cooperation program. Few model projects have been completed, so it is too early to assess their impact. Nevertheless, some of the model projects that IAEA expects will have measurable results include using a radioimmunoassay to screen for thyroid deficiency in newborns, providing nuclear methods to evaluate the effectiveness of a government food supplement intervention program to combat malnutrition in Peru, supporting a program for using nuclear techniques to improve local varieties of sorghum and rice in Mali, and eliminating the tsetse fly from the island of Zanzibar using radiation to sterilize male flies. IAEA is also working to design model projects within a “country program framework.” The goal of this framework is to achieve agreement between IAEA and the recipient country on concentrating technical cooperation on a few high-priority areas where projects produce a significant national impact. IAEA expects to have concluded the frameworks with one-half of the recipients of technical assistance by the year 2000. Like most other IAEA member countries, the United States supports the efforts of IAEA’s Deputy Director General for Technical Cooperation to improve the effectiveness and efficiency of the technical cooperation program. U.S.
officials believe that the initiatives and strategic goals of the Technical Cooperation Department and IAEA are extremely significant, particularly now that donor countries’ resources may be declining and the effectiveness and efficiency of all international organizations are being questioned. Since these reform efforts began, the United States has been a strong supporter of the program, making experts available to IAEA, funding specific model projects, and supporting the program in statements before IAEA’s Board of Governors. Although the United States, with other IAEA major donor countries, supports efforts to improve the technical cooperation program, it also shares some concerns with the other major donors about the sustainability of these improvements. State Department officials, including U.S. Mission officials, believe that IAEA must focus on implementation if the efforts at improvement are to last beyond the tenure of the current Deputy Director General, which ends in 1999. According to State Department officials, there is a difference between initiating change and achieving permanent change. These officials have insisted that the Department of Technical Cooperation provide IAEA’s Board of Governors with a strategic plan that will lead to permanent change.

Within IAEA, the Department of Technical Cooperation and three other technical departments—the departments of Research and Isotopes, Nuclear Safety, and Nuclear Energy—are the main channels for technology transfer activities within the technical cooperation program. IAEA receives funding for the costs of administration and related support in the Department of Technical Cooperation and for activities in the three technical departments through IAEA’s regular budget. However, most of the funding for IAEA’s technical assistance—about 70 percent—comes from voluntary contributions made by the member states to IAEA’s technical cooperation fund, as figure II.1 shows. In addition to the technical cooperation fund, other sources of voluntary financial support for the program include the following:

Extrabudgetary cash contributions are made by member states for specific technical assistance projects—known as “footnote a” projects—and for training. Although “footnote a” projects are considered to be technically sound by IAEA, they are of lower priority to recipient member states than the projects that are financed through the technical cooperation fund. The United States endeavors to provide support for “footnote a” projects in countries that are parties to nonproliferation treaties.

Assistance in kind includes equipment donated by member states, expert services, or fellowships arranged on a cost-free basis.

The United Nations Development Program (UNDP) provides funds through IAEA for its development projects that IAEA implements in areas involving nuclear science and technology.

(Figure II.1 data, in millions of dollars: technical cooperation fund, $558.7; member states’ extrabudgetary contributions, $93.1; UNDP, $84.9; and assistance in kind, $56.8.)

For calendar year 1996, fewer than half of the 124 IAEA member states contributed to the technical cooperation fund. As table III.1 indicates, 52 states contributed a total of about $48.6 million. Of these states, the United States and Japan contributed the most, accounting for over half of the total payments to the fund. Twenty-four member states that contributed to the fund also received about $22.5 million in technical assistance from IAEA.
In 1996, 72, or about 58 percent, of the 124 IAEA member states did not contribute to the technical cooperation fund. Fifty-seven of these states received a total of $26,039,722 in technical assistance from IAEA, as table III.2 indicates.

IAEA spent about $800 million on technical assistance for its member states from 1958—when the technical cooperation program began—through 1996, for equipment, expert services, training, and subcontracts. Figure IV.1 shows that about 44 percent of the funds were spent for equipment, such as computer systems and radiation-monitoring and laboratory equipment. In 1996, the United States was the largest single supplier of equipment for IAEA’s technical cooperation program. (Figure IV.1 data, in millions of dollars: equipment, $346; expert services, $195; fellowships and scientific visits, $174; training courses, $67; and subcontracts, $11.) Of the more than 120 IAEA member states that received IAEA’s technical assistance from 1958 through 1996, 10 states received more than 20 percent of the $800 million given, or about $175.7 million collectively, as table IV.1 indicates. Egypt, which started to receive technical assistance from IAEA in 1970, has received the largest total amount.

About half—or $334 million—of the $648 million that IAEA spent for technical assistance from 1980 through 1996 was provided for three program areas—the application of isotopes and radiation in agriculture, general atomic energy development, and safety in nuclear energy—as figure IV.2 shows. Moreover, two other program areas—nuclear engineering and technology, and the application of isotopes and radiation in industry and hydrology—received about 26 percent of the funds, for a total of about $169 million. IAEA approved about $154 million more in technical assistance projects for its member states for 1997 through 1998. Over half of this additional assistance will be provided for the application of isotopes and radiation in medicine, agriculture, and safety in nuclear energy. Of the about $800 million in technical assistance provided by IAEA to all of its member states from 1958 through 1996, about $52 million was spent on countries currently of concern to the United States. As table IV.2 indicates, most assistance given to these countries was in the form of equipment.

In 1973, a German firm began the construction of two reactors in Iran near Bushehr, but construction was halted during the Islamic Revolution in 1979. In 1995, Iran and Russia reached an $800 million agreement for the Ministry of the Russian Federation for Atomic Energy (MINATOM) to resume the construction of Unit 1 of the Bushehr nuclear power plant and to switch from a German-designed to a Russian-designed VVER-1000 model reactor. According to IAEA’s project summaries for the proposed 1997-98 program, the decision to resume the Bushehr project with a new design has placed heavy responsibility on Iran’s Nuclear Safety Department, the regulatory body of the Atomic Energy Organization of Iran. For 1995 through 1999, IAEA budgeted about $1.3 million for three ongoing technical assistance projects for the Bushehr nuclear power plant under construction in Iran. As of May 1997, about $250,000 of this amount had been spent for two of these projects.
According to IAEA’s project summaries for 1997-98, the three projects are (1) developing a nuclear regulatory infrastructure by training personnel in nuclear safety assessment; (2) establishing an independent multipurpose center that will provide emergency response services, train nuclear regulators, and analyze accidents in preparation for licensing the plant; and (3) building the capability of the Esfahan Nuclear Technology Center in Iran to support the Bushehr plant.

The first of these projects, which is ongoing, was originally approved in 1995 and is partly a continuation of another project—completed in 1995 for about $77,000—to increase the capability of staff at the Atomic Energy Organization of Iran to evaluate nuclear power plant bids and to develop a regulatory infrastructure and policy. The aim of the ongoing project is to develop a nuclear regulatory infrastructure by training personnel in nuclear safety assessment and in operator responsibilities. Under the project, IAEA has sent experts on numerous missions to Iran to provide advice and training in quality assurance, project management, and site and safety reviews; has provided supplies such as books and journals; and has sponsored some fellowships and scientific visits. A workshop for the top management of Iran’s atomic energy authority was held on quality assurance in 1995. Eight reports have been prepared under the project by experts on topics such as quality assurance, a preliminary safety review of the plant, and a review of seismic hazard studies at the plant site. As of May 1997, IAEA had spent about $241,000 for expert services, equipment (supplies), and fellowships—or about half of the approximately $494,000 that it plans to spend through 1998, as indicated in table V.1.

The second project, a new model project approved under IAEA’s 1997-98 technical cooperation program, is intended to improve the overall safety of the plant by establishing an independent multipurpose center that will provide emergency response services, train regulators, and analyze accidents. IAEA will furnish experts to advise, assist, and provide training in the following areas: (1) identifying safety features and evaluating them in the context of the VVER-1000 design for formulating the regulatory requirements; (2) formulating a safety policy and associated licensing and supervisory procedures for the completion of the plant; (3) training regulatory staff; (4) evaluating submitted regulatory documents; and (5) establishing a national regulatory inspectorate to carry out inspections during the design, construction, commissioning, and operation of the plant. IAEA has already sent a number of experts on missions to Iran as a part of the project. IAEA expects that the project will help the national regulatory body to discharge its statutory responsibilities for ensuring that the plant is constructed according to regulatory standards conducive to safe operation. As of May 1997, IAEA had provided approximately $8,440 in expert services and was planning to provide a total of approximately $403,000 for expert services and fellowships through 1999.

Another new project for the plant, which was approved under IAEA’s 1997-98 technical cooperation program, will enhance the ability of Iran’s Esfahan Nuclear Technology Center to support the Bushehr plant.
IAEA’s project summary states that while Iran’s nuclear technology center has adequate technical and scientific expertise on nuclear safety and quality assurance to support Iran’s nuclear regulatory body and the plant, the center has asked for IAEA’s expert advice and transfer of up-to-date knowledge. IAEA will provide expert services to help the center analyze the capabilities of the power plant and will provide training in reactor safety analysis and reactor technology. According to the project summary, this project will develop expertise at the center in safety analysis and other technical expertise for the Bushehr plant. IAEA plans to provide a total of $400,800 for expert services and fellowships for the project by 1999.

Pursuant to a congressional request, GAO examined: (1) the purpose and effectiveness of the International Atomic Energy Agency’s (IAEA) technical cooperation program; (2) the cost of U.S.
participation in IAEA's technical cooperation program; and (3) whether the United States ensures that the activities of IAEA's technical cooperation program do not conflict with U.S. nuclear nonproliferation and safety goals. GAO found that: (1) while the United States and other IAEA major donor countries believe that applying safeguards is IAEA's most important function, most developing countries believe that receiving technical assistance through IAEA's technical cooperation program is just as important; (2) the United States and other major donors principally participate in the program to help ensure that the member states fully support IAEA's safeguards and the 1970 Treaty on the Non-Proliferation of Nuclear Weapons; (3) in the past, concerns were raised about the effectiveness and efficiency of the technical cooperation program; (4) most of IAEA's program evaluation reports, internal audits, and project files that GAO reviewed did not assess the impact of the technical cooperation program, and no performance criteria had been established to help measure the success or failure of the program; (5) for the past 5 years, IAEA's Deputy Director General for Technical Cooperation has been taking steps to improve the overall effectiveness and efficiency of the program, but State Department officials are concerned about their sustainability; (6) the United States, historically the largest financial donor to the fund, provided a voluntary contribution of about $16 million, or about 32 percent of the total $49 million paid by IAEA member states for 1996; (7) for 1996, 72 of the 124 member states made no payments at all to the technical cooperation fund yet most of these states received technical assistance from IAEA; (8) officials from the Department of State, the Arms Control and Disarmament Agency, and the U.S. Mission to the United Nations System Organizations in Vienna, Austria, told GAO that they do not systematically review or monitor all of IAEA's technical assistance projects to ensure that they do not conflict with U.S. nuclear nonproliferation or safety goals; (9) however, GAO found that U.S. officials had sporadically reviewed projects in countries of concern to the United States; (10) U.S. officials also told GAO that the vast majority of IAEA's technical assistance projects do not pose any concerns about nuclear proliferation because the assistance is generally in areas that do not involve the transfer of sensitive nuclear materials and technologies; (11) however, GAO found that IAEA has provided nuclear technical assistance projects for countries where the United States is concerned about nuclear proliferation and threats to nuclear safety; and (12) moreover, a portion of the funds for projects in countries of concern is coming from U.S. voluntary contributions to IAEA.
Since 1992, the Medicare program has used a resource-based fee schedule to pay for physician services in the traditional FFS Medicare program. The physician fee schedule includes three components: the relative value for the service, a geographic adjustment, and a conversion factor. The relative value for a service compares the resources involved in performing one service with those of other services. There are more than 7,000 physician services in the fee schedule and each one is assigned a relative value. The geographic adjustment was designed to ensure that fees appropriately reflect the geographic variation in costs associated with operating a medical practice. Finally, the fee schedule uses a conversion factor expressed in dollars to determine the payment rate for a particular physician service. The conversion factor is updated annually based on the SGR formula, which is set by law. The SGR is a spending target system designed to control growth in spending attributable to increases in the number of services, known as volume, and to the services’ complexity and costliness, known as intensity. Although the SGR formula has called for negative physician fee updates in recent years, Congress has mandated either no change or a positive update that has been less than growth in the estimated cost to physicians for providing their services. Beginning in 2010, physician fees are projected to be reduced by 21 percent, according to the Congressional Budget Office (CBO). Medicare generally pays physicians a predetermined amount for each service provided. Physicians who “accept assignment” are those who agree to accept Medicare’s fee as payment in full. The fee includes the coinsurance amount (usually 20 percent) paid by the beneficiary. Physicians who sign Medicare participation agreements—referred to as participating physicians—must accept assignment for all Medicare- covered services that they provide to beneficiaries. Physicians who do not sign participation agreements—referred to as nonparticipating physicians—can either opt to accept assignment on a service-by-service basis or not at all. When a nonparticipating physician accepts assignment, the fee schedule amount, also known as the Medicare-approved amount, is reduced by 5 percent. Medicare pays the physician 80 percent of the reduced amount; the beneficiary pays 20 percent of the reduced amount. When a nonparticipating physician does not accept assignment, the Medicare-approved amount is also reduced by 5 percent, but the physician may collect from beneficiaries a portion of the difference between his or her charge and the Medicare-approved amount—a practice known as balance billing. Several recent surveys of Medicare beneficiary access to physician services have not identified major access issues. For example, a 2008 Medicare Payment Advisory Commission (MedPAC) survey, a 2007 Center for Studying Health System Change (HSC) survey, a 2007 Commonwealth Fund survey, and a 2007 AARP survey all concluded that Medicare beneficiaries had few problems obtaining physician services. MedPAC found that most beneficiaries were able to schedule timely routine appointments and find a new physician when needed. Additionally, Medicare beneficiaries reported similar or better access to physician services compared to individuals covered by private insurance, according to MedPAC, HSC, Commonwealth Fund, and AARP surveys. 
Both the Commonwealth Fund and AARP also found that Medicare beneficiaries are more likely than those with private insurance to report high levels of satisfaction with their health care and access to physicians. Physician spending under the Medicare program has historically grown at a rapid pace, at times reaching double-digit increases, but these increases may not mean better health care or better outcomes for beneficiaries. Specifically, some of the higher volume and intensity that drive spending growth may not be medically necessary. Physicians have a financial incentive to perform as many services as possible because Medicare pays them a fee for each service provided, with little accountability for quality or efficiency. The Senate Finance Committee has stated that the combination of high health care spending and lagging quality is unsustainable for both the government and patients. Several studies of geographic variation in Medicare spending have concluded that some utilization may not be warranted. A February 2008 CBO report found that per capita Medicare spending varied substantially among states, ranging in 2004 from $4,000 in Utah to $6,700 in Massachusetts. CBO found that the price paid for health care services and severity of illness were important factors, but cited research indicating that these two factors together likely account for less than half of the geographic variation in spending. CBO also found that patient preferences and income appear to explain little of the variation and concluded that some variation in medical practice may be attributable to differences in the supply of medical resources, such as specialist physicians. Several studies from Dartmouth have found that Medicare beneficiaries living in areas with high levels of health care spending and utilization do not experience better health outcomes or quality of care. One Dartmouth study noted that Medicare spending would fall by 29 percent if spending levels in the lowest decile of areas were realized in all higher spending regions. Dartmouth researchers concluded that geographic variation in Medicare spending can be attributed to how physicians respond to technology, capital, and other resources under FFS. For example, physicians in higher-spending regions were more likely than those in lower-spending regions to recommend discretionary services and more resource-intensive services. Clinical decisions are associated with physician discretion when the evidence does not point clearly to a correct action in a specific clinical situation. In a review of 2,500 treatments for a variety of medical conditions, more than half were subject to physician discretion. Congress has recently shown an interest in varying annual Medicare physician payment updates by geographic area. In the Deficit Reduction Act of 2005, Congress directed MedPAC to examine alternatives to the current payment system, including options that varied payment updates by geographic areas. In a 2007 study, MedPAC found that setting fee update amounts by geographic area would recognize that practice patterns differ regionally and therefore have different contributions to overall growth in volume and spending. MedPAC suggested that regional updates would improve equity across the nation and could help reduce geographic variation over time. 
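To make the payment mechanics described above concrete, the sketch below (in Python) works through the fee schedule arithmetic and the assignment rules. The relative value, geographic adjustment, and conversion factor used here are placeholder figures chosen for illustration, not actual Medicare values, and the function names are ours rather than anything defined by CMS or in statute.

def fee_schedule_amount(relative_value, geographic_adjustment, conversion_factor):
    # Payment rate = relative value x geographic adjustment x conversion factor (dollars).
    return relative_value * geographic_adjustment * conversion_factor

def approved_amount(fee, participating):
    # The Medicare-approved amount for nonparticipating physicians is 5 percent
    # lower than the full fee schedule amount.
    return fee if participating else 0.95 * fee

def payment_split(approved):
    # Medicare pays 80 percent of the approved amount; the beneficiary owes the
    # 20 percent coinsurance. When a nonparticipating physician does not accept
    # assignment, the beneficiary may also be balance billed a portion of the
    # difference between the physician's charge and the approved amount.
    return 0.80 * approved, 0.20 * approved

# Hypothetical service: relative value 2.0, geographic adjustment 1.05, and a
# $36 conversion factor (illustrative figures only, not actual Medicare values).
fee = fee_schedule_amount(2.0, 1.05, 36.0)
for participating in (True, False):
    medicare_share, coinsurance = payment_split(approved_amount(fee, participating))
    print(participating, round(medicare_share, 2), round(coinsurance, 2))

Under these assumptions, the sketch simply shows that a nonparticipating physician's approved amount, and with it both Medicare's 80 percent share and the beneficiary's 20 percent coinsurance, is 5 percent lower than a participating physician's.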
Congress has also held hearings on revising the method used to update physician payments, and the Chairman of the Senate Finance Committee has stated that reforming physician payment is an important component of health care reform. Together, the three types of indicators we reviewed show that Medicare beneficiaries experienced few problems accessing physician services. Small percentages of Medicare beneficiaries reported never easily obtaining appointments; measures of beneficiaries receiving physician services increased nationwide from 2000 to 2008; and indicators of physician willingness to serve Medicare beneficiaries and to accept Medicare fees as payment in full also increased from 2000 to 2008. Few Medicare beneficiaries reported major difficulties accessing physician services in 2007 and 2008. (See table 1.) Specifically, among those who needed routine care, very small percentages of beneficiaries reported that it was never easy to schedule an appointment as soon as they felt they needed it—2.5 percent in 2007 and 2.4 percent in 2008. Similarly, in both 2007 and 2008, 2.1 percent of beneficiaries who needed to see a specialist reported that it was never easy to get appointments with specialists when needed. Nationwide, the percentages of beneficiaries who reported major difficulties accessing routine or specialist care were the same for those living in urban areas and in rural areas in 2008—2.4 percent for routine care and 2.1 percent for specialist care. Within every state and the District of Columbia, less than 5 percent of the beneficiaries reported major difficulties accessing physician services in 2008. For example, the proportion of beneficiaries who reported never being able to easily schedule an appointment with a specialist in 2008 ranged from 0.3 percent in North Dakota to 4.7 percent in New Mexico. The proportion of beneficiaries who reported it was never easy to promptly schedule an appointment for routine care ranged from 1.5 percent in Oregon to 4.0 percent in Alaska. In general, the proportion of beneficiaries who received physician services rose during the period covered by our review. (See fig. 1.) Specifically, from 2000 to 2008, the proportion of beneficiaries receiving services during the month of April rose from about 46 percent to about 50 percent. Although the proportion of beneficiaries receiving physician services increased from 2000 to 2008, the rate of increase was not constant. The measure declined slightly in April 2003, but the proportion of beneficiaries receiving services remained about one percentage point higher than in April 2000, and the upward trend resumed in 2004. Nationwide, this measure increased in both urban and rural areas. Specifically, the proportion of beneficiaries receiving services rose from about 47 percent in April 2000 to about 51 percent in April 2008 in urban areas and from about 42 percent in April 2000 to about 45 percent in April 2008 in rural areas. From 2000 through 2008, the proportions of beneficiaries receiving services in April varied by state urban and rural areas. For example, in April 2000, the proportion of beneficiaries served ranged from 28.4 percent in rural Alaska to 51.8 percent in urban Pennsylvania. In April 2008, the proportion of beneficiaries served ranged from 32.7 percent in rural Alaska to 57.5 percent in urban Florida. Within 88 of the 99 urban and rural areas we examined, the proportion of beneficiaries receiving services increased from April 2000 to April 2008. (See fig. 2.) 
The largest increase in the percentage of beneficiaries receiving services was 7.9 percentage points in rural Maryland. There was a slight decline—less than 1 percentage point—in six areas: rural California, rural Colorado, rural Idaho, urban Maine, rural Montana, and rural Oregon. The largest decline in the proportion of beneficiaries served—about 2 percentage points—occurred in rural New Hampshire and rural Hawaii. From April 2000 to April 2008, an increasing number of services were provided to beneficiaries who were treated by a physician. (See fig. 3.) Specifically, in that period, the average number of services provided per 1,000 beneficiaries who were treated increased by about 15 percent—from about 3,400 to about 3,900. From April 2000 through April 2008, the number of services provided per 1,000 beneficiaries who were treated was lower in rural areas relative to urban areas. However, in percentage terms, the urban and rural areas experienced similar increases in the number of services per 1,000 treated beneficiaries—about a 17 percent increase in urban areas (from about 3,500 in April 2000 to about 4,100 in April 2008) and about a 13 percent increase in rural areas (from about 3,200 in April 2000 to about 3,600 in April 2008). The number of services provided also varied among states’ urban areas and rural areas. For example, in April 2000, the number of services per 1,000 beneficiaries served ranged from about 2,800 in rural Utah to about 3,900 in urban Texas. In April 2008, the number of services per 1,000 beneficiaries served ranged from about 3,100 in rural Hawaii to about 4,500 in urban Florida. Within every state’s urban and rural areas, there was an increase from April 2000 to April 2008 in the average number of services provided for each beneficiary who was treated by a physician. (See fig. 4.) In 59 of the 99 areas we examined, the number of services provided per 1,000 beneficiaries increased by about 12 percent or more. Among the 51 urban areas we examined, the percentage increase in the number of services provided per 1,000 beneficiaries ranged from about 5 percent in Vermont to about 24 percent in New York. Among the 48 rural areas, the increase ranged from about 1 percent in Alaska to about 23 percent in Connecticut. The average number of physician office visits—an indicator of beneficiary access to the typical entry point into the health care system and most basic level of physician services—rose for Medicare beneficiaries from April 2000 to April 2008. (See fig. 5.) The number of office visits increased during that period from about 29 to 31 (about 7 percent) per 1,000 Medicare beneficiaries for new patients and from about 442 to 504 (about 14 percent) per 1,000 Medicare beneficiaries for established patients. Research indicates that an increased number of emergency room visits above the growth in physician services could signify problems accessing primary care because patients who have difficulties obtaining routine care may instead seek health care in emergency rooms. However, our analysis demonstrates similar increases in emergency room visits, total office visits, and overall physician services from 2000 to 2008. Specifically, emergency room visits rose from about 34 to 39 per 1,000 beneficiaries— about 15 percent—which was approximately equal to the increase in total (new and established patient) office visits and the increase in the overall number of physician services per 1,000 beneficiaries treated. 
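The utilization measures reported above reduce to simple rates, as the following short sketch shows. The claim and beneficiary counts are rounded placeholders chosen to approximate the national figures cited above, not the actual totals from the claims files.

def services_per_1000_served(total_services, beneficiaries_served):
    # Number of physician services per 1,000 beneficiaries who received services.
    return 1000.0 * total_services / beneficiaries_served

def percent_change(old, new):
    return 100.0 * (new - old) / old

# Placeholder counts chosen so the rates roughly match the national figures
# cited above (about 3,400 services per 1,000 treated beneficiaries in April
# 2000 and about 3,900 in April 2008).
april_2000 = services_per_1000_served(total_services=17_000_000, beneficiaries_served=5_000_000)
april_2008 = services_per_1000_served(total_services=19_500_000, beneficiaries_served=5_000_000)
print(round(april_2000), round(april_2008), round(percent_change(april_2000, april_2008), 1))
# Prints 3400 3900 14.7, that is, an increase of roughly 15 percent.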
Two additional access-related indicators—the number of physicians billing Medicare for services and the percentage of services for which Medicare’s fees were accepted as payment in full—increased since 2000. (See fig. 6.) Specifically, the number of physicians billing Medicare increased from about 419,000 in April 2000 to about 474,000 in April 2007. The number of physicians continued to increase even as the number of beneficiaries in Medicare FFS declined over the last 2 years. The number of beneficiaries in traditional FFS Medicare decreased from about 33.4 million in 2005 to about 31.9 million in 2007, as more beneficiaries joined Medicare Advantage plans. Increases in the number of physicians billing Medicare, in spite of the decline in Medicare FFS beneficiaries, suggest that in the aggregate, physicians continued to accept FFS Medicare patients during this period.

From April 2000 to April 2008, the majority of Medicare physician services were performed by physicians who accepted Medicare’s fees as payment in full. (See fig. 7.) In April 2000, about 98 percent of physician services were performed by physicians who accepted Medicare’s fee as payment in full (on assignment), and in April 2008, about 99 percent of physician services were paid on assignment. A smaller share of beneficiaries were likely subject to balance billing for physician services in April 2008 than in April 2000, as the percentage of services for which physicians did not accept Medicare’s fee as payment in full decreased from about 1.8 percent to about 0.7 percent. The proportion of services provided by participating physicians—that is, physicians who formally agreed to participate in the Medicare program and submit all claims on assignment—increased from about 95 percent in April 2000 to about 97 percent in April 2008. Physicians may decide on an annual basis whether they will be Medicare participating physicians.

Potentially overserved areas tend to be the more densely populated urban regions. Higher population density tended to increase an area’s likelihood of being potentially overserved. Nearly half of the 32 metropolitan divisions—the most densely populated group of areas—were potentially overserved. (See table 2.) Similarly, a little more than a quarter of large MSAs were potentially overserved while among small MSA areas and rural areas, barely 1 in 10 was potentially overserved. Of the 296 geographic areas we examined, about one in four was potentially overserved—that is, they were in the top half of areas in both utilization of physician services in 2000 and growth in utilization of these services from 2000 to 2008. Areas that were in the top half in utilization in 2000 were nearly as likely to be in the top half in growth from 2000 to 2008 as areas that started in the bottom half in utilization. Specifically, of the 148 areas that were in the top half in utilization in 2000, 72 were in the top half in growth from 2000 to 2008. (See table 10 in app. I.) Similarly, of the 148 areas that were in the bottom half in utilization in 2000, 76 were in the top half in growth from 2000 to 2008.
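The two-part screen described above can be illustrated with a short sketch: an area counts as potentially overserved only if it falls in the top half of all areas on its 2000 level of services per beneficiary served and in the top half on growth in that measure from 2000 to 2008. The area names and values below are invented, and the median is used as the top-half cutoff for simplicity.

from statistics import median

def classify(areas):
    # areas maps an area name to (services per beneficiary served in 2000,
    # services per beneficiary served in 2008).
    level_2000 = {name: v[0] for name, v in areas.items()}
    growth = {name: (v[1] - v[0]) / v[0] for name, v in areas.items()}
    level_cut = median(level_2000.values())
    growth_cut = median(growth.values())
    return {name: ("potentially overserved"
                   if level_2000[name] >= level_cut and growth[name] >= growth_cut
                   else "other")
            for name in areas}

# Four hypothetical areas (values invented for illustration).
example = {
    "Area A": (3.6, 4.3),  # high level, high growth: potentially overserved
    "Area B": (3.6, 3.8),  # high level, low growth: other
    "Area C": (3.0, 3.6),  # low level, high growth: other
    "Area D": (3.0, 3.2),  # low level, low growth: other
}
print(classify(example))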
Potentially overserved areas and other areas experienced wide differences in utilization. These differences tended to be widest in the more densely populated regions. In 2000, the average number of services per beneficiary who received services was 3.58 in potentially overserved areas versus 3.24 in other areas, or a difference of more than 10 percent. (See app. II for more information on utilization by type of geographic area.) Among areas with the largest populations—the metropolitan divisions and large MSAs— average utilization in 2000 was 9 percent higher in potentially overserved areas, compared with a difference of about 5 percent among small MSA areas and 8 percent among rural areas. The growth in utilization from 2000 to 2008 displayed a similar pattern. Overall, the average increase for potentially overserved areas was nearly 18 percent, while for other areas it was just over 12 percent. The average increase in utilization in potentially overserved metropolitan divisions was 21 percent, compared with 12 percent in other metropolitan divisions. For the less densely populated areas, utilization also grew more rapidly in potentially overserved areas, although the gap in growth rates between potentially overserved and other areas tended to be smaller than it was for the metropolitan divisions. For example, the average increase in utilization was 17 percent in potentially overserved large MSAs, compared with 13 percent for other large MSAs. Our analysis found that areas in states east of the Mississippi River were much more likely to be potentially overserved. (See fig. 8.) Of the 174 areas in states that are east of the Mississippi River, 60 were potentially overserved. For example, nearly the entire states of Alabama, Florida, and Illinois comprised potentially overserved areas. Of the 122 areas in states that are west of the Mississippi River, only 12 were potentially overserved. Beneficiaries residing east of the Mississippi River are much more likely to reside in a potentially overserved area, because the most densely populated areas in the east are more likely to be potentially overserved than are those in the west. In 2008 nearly half the beneficiaries who resided in a state east of the Mississippi River were in a potentially overserved area, while in the western part of the country only 1 beneficiary in 10 resided in a potentially overserved area. In terms of population, the largest of the major metropolitan divisions east of the Mississippi River, including New York-White Plains, Chicago-Naperville- Joliet, and Philadelphia, were potentially overserved areas. In contrast, the largest western metropolitan divisions of Los Angeles-Long Beach- Glendale, Dallas-Plano-Irving, and Santa Ana-Anaheim-Irvine were not potentially overserved areas by our measure. Similarly, beneficiaries in five of the six most populous large MSAs in the east were in potentially overserved areas, while beneficiaries in five of the six most populous large MSAs in the west were not. Only a minority of small MSA areas and rural areas in the east were potentially overserved and none of either of these two area types were potentially overserved in the west. (See app. III for a list of all areas.) Potentially overserved areas and other areas are largely similar in characteristics that could drive the use of physician services, including demographic characteristics and the capacity to provide health care services. 
In contrast, certain types of physician services are performed more frequently in potentially overserved areas than in other areas, suggesting differences in physician practice patterns. Potentially overserved and other areas appear similar in demographic characteristics that could be expected to affect the use of physician services. (See table 3.) For example, in 2006 Medicare beneficiaries in both groups of areas had similar risk scores, meaning they are expected to require similar amounts of Medicare resources because of their health status. Potentially overserved areas and other areas also had a similar racial composition and average income levels, although they differed somewhat in educational attainment. While these local factors are not under the control of the health care delivery system, they could be expected to influence the utilization of health care services. For example, income levels and insurance coverage have been shown to be related to patient preferences and demand for health care. Potentially overserved areas and other areas are also similar in terms of their capacities to provide health care services, as measured by number of beds and physicians per 1,000 people. (See table 4.) Specifically, in 2005, potentially overserved and other areas had a similar number of hospital beds per 1,000 people. In 2004, potentially overserved areas and other areas also had a similar number of physicians per 1,000 people. Studies have demonstrated strong associations between the number of hospital beds and hospital utilization and between physician supply and the rate of physician visits. As table 4 shows, in 2004, potentially overserved areas and other areas had similar numbers of primary care physicians compared to specialists— about a one-to-two ratio. Studies have shown that areas with higher ratios of primary care physicians to specialists have better health outcomes and better meet quality measures, such as administering beta-blockers after a heart attack or performing regular eye exams on diabetic patients. Conversely, studies have demonstrated that areas with more specialty services are associated with higher spending but not better access or health outcomes. Potentially overserved areas and other areas have similar Medicare beneficiary satisfaction, as measured by beneficiary perceptions of health care and health status. (See table 5.) For example, 94 percent of beneficiaries in potentially overserved areas reported having a personal doctor, compared to 93 percent of beneficiaries in other areas. Beneficiaries in both groups of areas also reported similar average health status, and similarly rated their health care and personal doctors highly. This finding is consistent with studies showing that geographic areas with high Medicare spending do not have better outcomes or perceptions of quality of medical care. When we compared types of physician services provided to Medicare beneficiaries, we found that potentially overserved areas and other areas differed in the frequency with which certain categories of physician services are used. (See table 6.) Specifically, we found that in April 2008, potentially overserved areas used substantially more evaluation and management services, minor procedures, and imaging services per 1,000 beneficiaries than other areas. For example, potentially overserved areas had 44 percent more minor procedures—which include services such as ambulatory procedures, eye procedure treatments, and colonoscopies— per 1,000 beneficiaries than other areas. 
Potentially overserved areas also had 29 percent more laboratory tests and 19 percent more imaging services per 1,000 beneficiaries than other areas. The two groups of areas, however, had similar rates of major procedures. (See app. IV for additional trends in selected physician services in potentially overserved and other areas.) We also found that specific services associated with the exercise of physician discretion are performed more frequently in potentially overserved areas, indicating differences in physician practice patterns. (See table 7.) When there is not a universally accepted treatment approach, the choice of services is subject to physician discretion. Several studies have identified certain services as prone to overuse or misuse for various reasons, including physician discretion. Two of the three physician services identified in the literature as being related to physician discretion were performed substantially more frequently in potentially overserved areas than in other areas in April 2008. Advanced imaging services, which include computed tomography (CT) and magnetic resonance imaging (MRI), were 16.1 percent more prevalent per 1,000 beneficiaries in potentially overserved areas than in other areas in April 2008. Electrocardiograms (EKG) were performed 30.1 percent more frequently per 1,000 beneficiaries in potentially overserved areas than in other areas. However, the frequency of knee replacements was similar in potentially overserved and other areas. We also found that two of three services identified in the literature as being universally accepted approaches to diagnoses or having low rates of inappropriate use were performed at similar frequencies in the two groups of areas in April 2008. Specifically, hip surgery for hip fracture and cataract removal were performed at similar frequencies in the two groups of areas. However, colonoscopy for cancer screening—another procedure identified in the literature as effective and strongly supported by evidence—was performed 10.8 percent more often per 1,000 beneficiaries in potentially overserved areas than in other areas in the United States.

Although concerns have been raised that Congress’s efforts to control spending on physician services could limit beneficiary access to those services, our analysis suggests that beneficiary access generally remained the same or increased from 2000 to 2008. These findings are consistent with our earlier work. However, we also found that some geographic areas of the country experienced much higher levels of utilization of physician services and much greater increases in utilization compared to the rest of the nation—which may indicate excessive care not driven by medical need. Our definition of areas that are potentially overserved was based on both levels of service and growth rates, while past research has generally concentrated on levels of service. Nevertheless, our findings are consistent with past research—they underscore the importance of geography in the utilization of physician services and can help inform ongoing discussions regarding Medicare physician payment reform. Medicare’s SGR, which is used to help control spending on physician services, does not account for geographic differences in utilization rates. As Congress considers options for revising the SGR and other payment reforms, the issue of geographic differences will likely continue to be part of this discussion.
In written comments on a draft of this report, CMS noted the agency’s longstanding practice of monitoring the effect of policy changes on beneficiary access to Medicare services, and stated that this report would help in that effort. We have reprinted CMS’s letter in appendix IX. We obtained oral comments on a draft of our report from an official representing the American Medical Association (AMA). The AMA official shared two overall observations. First, the AMA official said that the rate of growth in per beneficiary utilization of physician services had declined each year since 2004. While the growth rate has not been uniform, our report finds that an increasing number of services were provided to beneficiaries who were treated by a physician from April 2000 to April 2008. Second, the AMA official said that beneficiaries could face access problems that would not appear in our analysis of survey and claims data. The AMA official explained, for example, that physicians could increase the number of claims they submit while seeing fewer patients, or could be accepting fewer Medicare beneficiaries seeking new appointments. However, the three overall indicators we constructed to measure access trends—from both the beneficiary and physician perspective—demonstrated sustained beneficiary access to services. As we reported in our draft, we found very few beneficiaries reporting major access difficulties in 2007 and 2008, the utilization of services increased nationwide from April 2000 to April 2008, and physician participation in Medicare also rose over this period. The AMA official also shared technical comments with us, which we incorporated into our report as appropriate.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Acting Administrator of CMS and interested congressional committees. The report also will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions, please contact me at (202) 512-7114 or steinwalda@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix X.

This appendix explains the scope and methodology that we used to address our reporting objectives. Specifically, we wanted to (1) determine how beneficiary access to physician services has changed from 2000 to 2008; (2) identify areas of the country where Medicare beneficiaries are potentially overserved by physicians; and (3) describe characteristics that distinguish potentially overserved areas from other areas in the nation. To determine how beneficiary access to physician services changed from 2000 to 2008, we constructed three types of indicators to measure beneficiary access to physician services: beneficiary perceptions about access, utilization of physician services, and indicators of physicians’ willingness to participate in Medicare and serve Medicare beneficiaries. To measure beneficiary perceptions of access, we analyzed 2007 and 2008 responses to the Consumer Assessment of Healthcare Providers and Systems (CAHPS) survey conducted by the Centers for Medicare & Medicaid Services (CMS). We used CAHPS survey data that asked beneficiaries to describe their experiences with the Medicare fee-for-service (FFS) program.
These annual surveys are a nationally representative source of Medicare beneficiary perceptions of their access to health care that would enable comparisons over time among states and between urban and rural areas. Respondents were asked about their experiences in the 6 months before the survey. CMS surveyed approximately 431,000 FFS beneficiaries in the spring of 2007 and approximately 306,000 in the spring of 2008. We excluded responses from beneficiaries residing outside the 50 states and the District of Columbia from our analysis. The number of FFS beneficiaries residing in the areas that were part of our analysis who completed the survey was 199,000 in 2007 and 163,000 in 2008. We focused on two CAHPS questions that were related to beneficiary access to physician services. The questions, reproduced in table 8, asked about the ease of scheduling prompt appointments for routine care and beneficiary ability to gain access to specialists. For each question, we included only the responses from those beneficiaries who could have encountered an access problem—that is, those who reported trying to schedule an appointment with any doctor and those who attempted to make an appointment to see a specialist. For example, we include responses to the specialist access question only for those beneficiaries who answered in a prior survey question that they needed to see a specialist in the past 6 months. We calculated the proportion of respondents who responded the most negatively—those who responded that they were “never” able to schedule an appointment as soon as they thought they needed it. We also examined whether these responses varied by state or between urban and rural areas nationwide. To analyze Medicare beneficiary access to physician services based on their utilization of services, we used Medicare Part B claims data from the National Claims History (NCH) files. We constructed data sets for 100 percent of Medicare claims for physician services performed by physicians in the first 28 days of April of 2000 through 2008, which yielded more than 60 million claims per year. These claims represent an annual snapshot of beneficiary access to physician services for each of the 9 years. We selected April to allow time for the annual fee updates to be implemented beginning January 1 and for physician behavior to adjust to the new fees. To avoid “calendar bias”—that is, the occurrence of more weekdays in April in one year compared to another—and to create an equal number of weekdays in each year’s data set, we limited each year’s claims to services performed within the first 28 days of the month. These data encompass several periods: 2 years in which fee increases were greater than the increase in the estimated cost of providing services (2000 and 2001), 1 year in which fees decreased (2002), and 6 years in which fee increases were less than the growth in the estimated cost of providing services (2003 through 2008). We established a consistent cutoff date (the last Friday in September of each year) for each year’s data file and only included those claims for April services that had been processed by that date. Because claims continue to accrete in the data files, this step was necessary to ensure that earlier years were not more complete than later years. 
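The following is a minimal sketch of the claim selection step just described: it keeps only services performed in the first 28 days of April that were processed by that year's cutoff date, the last Friday in September. The record structure and field names are hypothetical and do not reflect the actual National Claims History file layout.

from datetime import date, timedelta

def last_friday_of_september(year):
    d = date(year, 9, 30)
    while d.weekday() != 4:  # Monday is 0, so Friday is 4
        d -= timedelta(days=1)
    return d

def in_study_window(service_date, processed_date, year):
    # Keep services performed in the first 28 days of April (an equal number of
    # weekdays every year) that were processed by the cutoff date.
    start, end = date(year, 4, 1), date(year, 4, 28)
    return start <= service_date <= end and processed_date <= last_friday_of_september(year)

# Three hypothetical claim records.
claims = [
    {"service": date(2008, 4, 10), "processed": date(2008, 5, 2)},
    {"service": date(2008, 4, 30), "processed": date(2008, 5, 2)},   # outside the 28-day window
    {"service": date(2008, 4, 15), "processed": date(2008, 11, 1)},  # processed after the cutoff
]
kept = [c for c in claims if in_study_window(c["service"], c["processed"], 2008)]
print(len(kept))  # Prints 1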
To determine the number of FFS beneficiaries, we used the April enrollment data from the Denominator file—a database that contains enrollment data and entitlement status for all Medicare beneficiaries enrolled, entitled, or both in each month in a given year. In addition, on the basis of beneficiary location, we associated each service with an urban or rural location, using the Office of Management and Budget (OMB) classification of metropolitan statistical areas (MSA). We constructed multiple utilization measures to determine whether Medicare beneficiaries experienced changes in their access to physician services; these indicators included the percentage of Medicare FFS beneficiaries obtaining services in April of each year and the number of physician services per 1,000 beneficiaries who received services. We analyzed these utilization measures nationally, for urban and rural areas within each state, and for specific services, such as office visits for new and established patients and emergency room visits. Using MSAs, we classified the nation’s counties as urban or rural, consolidated the urban counties and rural counties in each state and the District of Columbia, and created 99 geographic areas to analyze access at a subnational level. To indicate physicians’ willingness to participate in Medicare, we determined the number of physicians billing Medicare from 2000 through 2007, whether services were performed by participating or nonparticipating physicians, and whether claims for physician services were paid on assignment or not on assignment. We did not adjust the data for factors that could affect the provision and use of physician services, such as incidence of illness or coverage of new benefits. To identify areas of the country where Medicare beneficiaries are potentially overserved by physicians, we identified areas of the country where utilization of physician services in Medicare is potentially excessive. Because policymakers have expressed concerns about both the level and growth of services in the Medicare program, we incorporated both factors in our measure of potential overservice. Specifically, we identified areas that were both relatively high in their level of utilization and relatively high in their growth in utilization. We analyzed one of our access indicators, services per beneficiary served, to measure potential overutilization. Using the U.S. Census Bureau and OMB classifications, we divided states into urban and rural areas and made additional distinctions among urban areas, allowing us to classify counties into one of four types of areas, as shown in table 9. We did not allow areas to cross state lines, so a metropolitan division or MSA that crossed state lines was subdivided into separate areas for each state. We examined a total of 296 areas, ranking them by their level of utilization in 2000 and their change in utilization from 2000 to 2008. To determine an area’s utilization status, we designated areas in the top half of both measures as “potentially overserved” and the rest as “other” areas. (See table 10.) This method resulted in 72 of the 296 areas being designated as potentially overserved. To describe characteristics that distinguish potentially overserved areas from other areas in the nation, we reviewed literature to identify characteristics that could drive the use of physician services. 
Using the most recently available data sources, we constructed several area-level characteristics and compared them between potentially overserved areas and all other areas. Specifically we compared various demographic characteristics and compared the capacity to provide health care services—both of which are factors that could drive the use of physician services. We also compared beneficiary satisfaction with their health care and the types of physician services provided in the two types of areas. We examined the provision of physician services broadly. However, our review of clinical and economic studies in the literature suggested that certain services might be or might not be prone to overuse and thus we also compared the utilization of these services for both potentially overserved and other areas. We obtained demographic data on mortality, race, and education as well as data on health services capacity from the Area Resource File (ARF), a national county-level health resource information database produced by the Health Resources and Services Administration of the Department of Health and Human Services; population and income data from the U.S. Census Bureau; beneficiary age and enrollment data from the Denominator file; and risk score data from the CMS Medicare Advantage rate calculation data for 2009. We obtained data on beneficiary perceptions and satisfaction with care from the 2008 CAHPS survey. We used Medicare Part B claims to obtain physician services utilization data. As part of our analysis we found that potentially overserved areas were more likely to be urban—that is, have greater population density—than other areas. Therefore, when comparing various characteristics of potentially overserved areas and other areas, we accounted for this difference in population density by weighting the data from other areas to reflect the same proportion of urbanization found in potentially overserved areas. We also analyzed the utilization of specific types of physician services in potentially overserved and other areas. This analysis was based on the Berenson-Eggers Type of Service (BETOS) code assigned to each physician service in the Part B claims data. According to CMS, the BETOS coding system consists of readily understood clinical categories, is stable over time, and is relatively immune to minor changes in technology or practice patterns. We compared the number of services per 1,000 Medicare FFS beneficiaries in potentially overserved and other areas for selected service categories. We collapsed data on other services and procedures into summary categories. We took several steps to ensure that the CAHPS, Medicare claims and enrollment, U.S. Census Bureau, and ARF data were sufficiently reliable for our analysis. For the CAHPS survey data, we examined the accuracy and completeness of the data by testing for implausible values and internal consistency and reviewed relevant documentation. In addition, we interviewed experts at CMS about whether the CAHPS data could appropriately be used as we intended. We concluded that the data were sufficiently reliable for the purpose of this analysis. Our analysis of the proportion of beneficiaries reporting major difficulties accessing physician services was limited to beneficiaries who needed an appointment for either routine or specialist care; it does not refer to the entire population of Medicare beneficiaries. 
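One way the urbanization adjustment described above could be implemented is sketched below: the comparison value for other areas is a blend of their urban and rural means, weighted by the urban share observed in the potentially overserved group. The exact weighting scheme here is an assumption, and all numbers are invented for illustration.

def reweighted_mean(values, is_urban, target_urban_share):
    # values: a characteristic measured in the "other" areas; is_urban: parallel
    # list of booleans; target_urban_share: urban share observed among the
    # potentially overserved areas.
    urban = [v for v, u in zip(values, is_urban) if u]
    rural = [v for v, u in zip(values, is_urban) if not u]
    urban_mean = sum(urban) / len(urban)
    rural_mean = sum(rural) / len(rural)
    # Blend the urban and rural means using the urban share of the comparison
    # group rather than the urban share of the other areas themselves.
    return target_urban_share * urban_mean + (1 - target_urban_share) * rural_mean

# Hypothetical characteristic (say, physicians per 1,000 people) for six
# "other" areas, four urban and two rural, compared against a potentially
# overserved group assumed to be 80 percent urban.
values = [2.9, 3.1, 3.0, 2.8, 1.9, 2.1]
is_urban = [True, True, True, True, False, False]
print(round(reweighted_mean(values, is_urban, target_urban_share=0.80), 2))  # Prints 2.76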
Medicare claims data, which are used by the Medicare program as a record of payments made to health care providers, are closely monitored by both CMS and the Medicare carriers—contractors that process, review, and pay claims for Part B-covered services. The data are subject to various internal controls, including checks and edits performed by the carriers before claims are submitted to CMS for payment approval. Although we did not review these internal controls, we did assess the reliability of the National Claims History (NCH) data. First, we reviewed existing information about the data, including the data dictionary and file layouts. We also interviewed knowledgeable CMS officials about the data. We examined the data files for obvious errors, missing values, values outside of expected ranges, and dates outside of expected time frames. We found the data to be sufficiently reliable for the purposes of this report. We assessed the reliability of the U.S. Census Bureau and ARF data by reviewing relevant documentation and examining the data for obvious errors.

We conducted this performance audit from May 2008 through August 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

We classified geographic areas of the country by type of area and utilization status. Our measure of utilization—services per beneficiary served—is based on Medicare claims data for services performed in the first 28 days of April 2000 and April 2008. An area was designated as potentially overserved if it was in both the top half of all areas in number of services per beneficiary served in 2000 and in the top half of the growth rate in services per beneficiary served from 2000 to 2008. Table 11 presents information, by geographic area type and utilization status, on the average number of services per beneficiary in 2000 and 2008, and the average change in the number of services per beneficiary from 2000 to 2008. We identified potentially overserved areas of the country by classifying U.S. counties into one of four types of geographic areas: metropolitan divisions, large MSAs, small MSA areas, and rural areas. This classification process yielded 296 areas across the United States. We measured utilization in each of these areas by examining the number of services per beneficiary who received services in April 2000 and April 2008, and ranked them by their level of utilization in 2000 and changes in utilization from 2000 to 2008. To determine an area’s utilization status, we designated areas in the top half of both measures as potentially overserved areas and designated the rest as other areas. Across all areas, the median number of services per beneficiary served was 3.28 in 2000 and 3.71 in 2008. The median percentage change in services per beneficiary served from 2000 to 2008 was 13.7 percent. (See table 12.) Using the BETOS code to which each procedure code in our claims data was assigned, we reviewed specific categories of physician services. We collapsed data on other services and procedures into summary categories. We classified geographic areas of the country by type of area and utilization status.
Our measure of utilization—services per beneficiary served—is based on Medicare claims data for services performed in the first 28 days of April 2000 and April 2008. An area was designated as potentially overserved if it was in both the top half of all areas in number of services per beneficiary served in 2000 and in the top half of the growth rate in services per beneficiary served from 2000 to 2008. Table 13 shows the specific service categories we reviewed, the change in the number of services provided per 1,000 FFS beneficiaries from April 2000 to April 2008, and the number of and difference in services provided in potentially overserved and other areas. Medicare designates 87 distinct physician payment localities across the 50 states and the District of Columbia to adjust physician payments for the geographic difference in the costs of operating a private medical practice. We measured utilization in each of the Medicare physician payment localities by examining the number of services per beneficiary who received services in April 2000 and April 2008, and ranked each payment locality by the level of utilization in 2000 and the change in utilization from 2000 to 2008. To determine a locality’s utilization status, we designated Medicare physician payment localities in the top half of both measures as potentially overserved payment localities and designated the rest as other payment localities. Across all Medicare physician payment localities, the median number of services per beneficiary served was 3.34 in 2000 and 3.81 in 2008. The median percentage change in services per beneficiary served from 2000 to 2008 was 13.4 percent. (See table 14.) Using the BETOS code to which each procedure code in our claims data was assigned, we reviewed specific categories of physician services. We collapsed data on other services and procedures into summary categories. We classified Medicare physician payment localities by utilization status. Our measure of utilization—services per beneficiary served—is based on Medicare claims data for services performed in the first 28 days of April 2000 and April 2008. A Medicare physician payment locality was designated as potentially overserved if it was in both the top half of all areas in number of services per beneficiary served in 2000 and in the top half of the growth rate in services per beneficiary served from 2000 to 2008. Table 15 shows the specific service categories we reviewed, the change in the number of services provided per 1,000 FFS beneficiaries from April 2000 to April 2008, and the number of and difference in services provided in potentially overserved and other Medicare physician payment localities. Hospital referral regions (HRR) are the 306 distinct geographic regions designated by Dartmouth researchers to represent regional health care markets for tertiary medical care. Each HRR contains at least one hospital that performs major cardiovascular procedures and neurosurgery. We measured utilization in each of the HRRs by examining the number of services per beneficiary who received services in April 2000 and April 2008, and ranked each HRR by level of utilization in 2000 and change in utilization from 2000 to 2008. To determine an HRR’s utilization status, we designated HRRs in the top half of both measures as potentially overserved HRRs and designated the rest as other HRRs. Across all HRRs, the median number of services per beneficiary served was 3.26 in 2000 and 3.72 in 2008. 
The median percentage change in services per beneficiary served from 2000 to 2008 was 13.7 percent. (See table 16.) Using the BETOS code to which each procedure code in our claims data was assigned, we reviewed specific categories of physician services. We collapsed data on other services and procedures into summary categories. We classified HRRs by utilization status. Our measure of utilization—services per beneficiary served—is based on Medicare claims data for services performed in the first 28 days of April 2000 and April 2008. An HRR was designated as potentially overserved if it was in both the top half of all areas in number of services per beneficiary served in 2000 and in the top half of the growth rate in services per beneficiary served from 2000 to 2008. Table 17 shows the specific service categories we reviewed, the change in the number of services provided per 1,000 FFS beneficiaries from April 2000 to April 2008, and the number of and difference in services provided in potentially overserved and other HRRs.

In addition to the contact named above, Jessica Farb, Assistant Director; Todd Anderson; Krister Friday; Jenny Grover; Jessica T. Lee; Richard Lipinski; and Sarabeth Zemel made major contributions to this report.

Congress, policy analysts, and groups representing physicians have raised questions about beneficiary access to Medicare physician services. At the same time, high levels of spending for health care in some parts of the country, and rapid increases in spending for physician services, have been identified as factors that threaten the long-term fiscal sustainability of the Medicare program. GAO was asked to assess beneficiary access to physician services and to identify indicators of potential overutilization of physician services. In this report, GAO (1) examines whether, from 2000 through 2008, beneficiaries had problems accessing physician services; (2) identifies areas of the country in which Medicare beneficiaries are potentially overserved by physicians; and (3) describes characteristics that distinguish the potentially overserved areas from other areas in the nation. GAO analyzed the most recent data available from several sources, including an annual Centers for Medicare & Medicaid Services (CMS) survey of fee-for-service (FFS) Medicare beneficiaries, Medicare physician claims for services provided in April of each year from 2000 through 2008, the Health Resources and Services Administration’s Area Resource File, and the U.S. Census Bureau.

GAO found that Medicare beneficiaries experienced few problems accessing physician services during its period of study. Very small percentages of Medicare beneficiaries—less than 3 percent—reported major difficulties accessing physician services in 2007 and 2008. The proportion of beneficiaries who received physician services and the number of services per beneficiary served increased nationwide from April 2000 to April 2008. Indicators of physician willingness to serve Medicare beneficiaries and to accept Medicare fees as payments in full also rose from 2000 to 2008. Potentially overserved areas—areas that were in the top half in both the level and growth in utilization of physician services—tend to be in the more densely populated urban regions and the eastern part of the United States. Large metropolitan areas were much more likely to be potentially overserved than rural and small metropolitan areas. Areas east of the Mississippi River were also more likely to be potentially overserved than those in the west.
Potentially overserved and other areas are similar in demographic characteristics and the capacity to provide health care services. The two groups are also similar in Medicare beneficiary satisfaction with health care. In contrast, certain types of physician services, such as advanced imaging and minor procedures, are performed more frequently in potentially overserved areas relative to other areas, suggesting differences in physician practice patterns. In commenting on a draft of this report, CMS noted the agency’s longstanding practice of monitoring the effect of policy changes on beneficiary access to Medicare services, and stated that this report would help in that effort.
In late October and early November 1998, Hurricane Mitch struck Central America, producing more than 6 feet of rain in less than a week, mostly over Honduras. The heavy rainfall caused flooding and landslides that killed thousands of people; left tens of thousands homeless; and devastated infrastructure, agriculture, and local economies. In addition, in September 1998, Hurricane Georges hit several eastern Caribbean islands and the island of Hispaniola, which comprises the Dominican Republic and Haiti. Hurricane Georges also caused the deaths of hundreds of people and severely damaged infrastructure, crops, and businesses. See figure 1 for a map of the region and the countries affected by Hurricanes Mitch and Georges that we visited. U.S. relief efforts began immediately and USAID began providing limited reconstruction assistance using redirected program funds and other sources. However, the Congress and the administration recognized the need for longer-term assistance for recovery and reconstruction. In March 1999, President Clinton visited Central America and promised to help these countries rebuild their economies and social sectors. At the same time, USAID began developing a recovery plan for each hurricane-affected country, which outlined USAID’s funding estimates and proposed programs. In late May 1999, the Congress passed and the President signed an emergency supplemental appropriation that provided, among other things, $621 million for the countries affected by Hurricanes Mitch and Georges. In general, the funds were to be used to rebuild infrastructure, reactivate host country economies, and restore basic services. USAID was the primary agency responsible for carrying out the U.S. disaster recovery program. Of the $621 million authorized, USAID was directly responsible for about $587 million, including about $62 million in agreements with other U.S. departments and agencies, such as the U.S. Department of Agriculture and U.S. Geological Survey. The remaining $34 million was transferred directly by USAID to other U.S. departments and agencies, such as the Departments of Housing and Urban Development and State. Based on an informal agreement with congressional staff, USAID agreed to expend all the funds by December 31, 2001—about 30 months from enactment of the supplemental appropriation. As shown in table 1, USAID and the other U.S. departments and agencies had completed most of their programs by the deadline. Some activities, such as a $40 million urban water and sanitation program in Honduras, are still being implemented. Appendix IV contains further details on funding and expenditures for USAID and the other U.S. government entities. USAID and the other U.S. government entities implemented disaster recovery activities that helped the hurricane-affected countries rebuild their infrastructure and restore economic activity. USAID’s overall objectives were to help bring about economic recovery, restore and improve basic services, and mitigate the effects of future natural disasters. Each country’s program varied based on country conditions and the USAID mission’s approach. 
In general, the funds were used for repairing or rebuilding the infrastructure needed for reactivating economies (e.g., roads and bridges), public health infrastructure (e.g., potable water systems, sewage and drainage systems, and health clinics), housing, and schools; providing loans, credits, and technical assistance for small- and medium-sized farms and businesses; strengthening disaster mitigation efforts such as civil defense, early warning and prevention, and watershed management; and strengthening accountability. In Honduras and Nicaragua, USAID financed the repair of 2,817 kilometers (about 1,756 miles) of secondary and tertiary roads. In Honduras, USAID funded the repair of 62 municipal water and sanitation systems and 1,211 rural water systems. In the Dominican Republic, USAID funding was used to repair 1,514 houses and construct 2,248 new homes (see fig. 2). The activities of other U.S. agencies ranged from installing stream gauges for early flood warning to equipping national public health laboratories. These and many other projects resulted in improved transportation, agricultural land restored to productive use, improved health through potable water and sanitation systems, increased access to health care and education, increased employment through credit programs, and improved capabilities to mitigate the effects of future disasters.

USAID attempted to ensure that projects and activities would be sustainable after its disaster recovery activities were completed. For example, in Honduras, USAID funded training for municipal officials and local water boards to provide them with the management and budget skills to operate and maintain new water and sanitation systems. Also, the Honduran government ministry responsible for road maintenance gave USAID-funded roads priority in its 2-year maintenance schedule. However, the hurricane-affected countries are poor and in debt and, in many instances, plagued by bureaucratic inefficiencies and corruption. It is too early to determine if national governments and local officials will have the resources or political will to maintain the infrastructure rebuilt with USAID funds.

Due to widespread concerns that such a large program with a 30-month time frame would be susceptible to misuse or corruption, USAID missions were cautious from the outset of the program. In addition to its regular program and financial controls, USAID set up some additional oversight measures, such as hiring accounting firms to oversee a host country’s expenditures. In addition, the supplemental legislation provided funds for USAID’s Office of the Inspector General and for us to monitor the provision of the assistance. This additional oversight and monitoring resulted in instances of problems being identified and addressed by USAID and other U.S. government departments and agencies as activities were under way and changes could still affect the success of the program or project. USAID missions generally said that the additional oversight measures were useful in enhancing accountability but that the time required by staff to comply with numerous auditors was burdensome and sometimes affected program implementation. In Honduras, the major infrastructure construction programs—totaling about $135 million—were implemented primarily by the Honduran Social Investment Fund, a government agency established to ease the impact of structural adjustment policies through employment generation and social programs. To help protect the U.S.
assistance from potential misuse, the mission established a separate oversight unit within the fund for its $50 million road and bridge program (see fig. 3). A U.S. project manager headed the unit with a U.S. chief engineer and local technical and support staff. For both the road and municipal water and sanitation system programs, the mission contracted with financial services firms to handle disbursements to the fund following approvals by USAID and the oversight unit. For the water and sanitation program, USAID relied on the U.S. Army Corps of Engineers to provide technical oversight. For its school construction program, USAID only reimbursed the fund after units were completed and inspected by USAID and its oversight contractors (see fig. 4). Finally, in many instances, the Honduran mission hired U.S. management services firms and private voluntary organizations to oversee other activities implemented by local entities. USAID’s program in Nicaragua was mostly implemented by U.S. and international voluntary and local implementing organizations that had a proven track record with the mission and whose ongoing cooperative agreements were easily amended. For its only program with the Nicaraguan government—a $2.1 million municipal infrastructure program implemented by the Emergency Social Investment Fund (an entity similar to the Honduras fund)—USAID hired a U.S. management services firm to provide oversight and technical assistance. USAID also relied on the Corps of Engineers and the U.S. Department of Agriculture to review municipal infrastructure designs and make recommendations accordingly. A primary component of oversight is having sufficient staff to monitor project activities and spending and identify any problems that may occur along the way. As USAID’s direct-hire foreign service staff levels have declined over the years, it has turned increasingly to using personal services contractors to conduct most of the day-to-day oversight of its programs, including the disaster recovery program. USAID hired numerous personal services contractors to help oversee its activities and provide technical and administrative support. In Honduras, the program office and technical officers throughout the mission shared responsibility for oversight. The mission hired 33 additional personal services contractors to oversee its program and provide administrative support. In Nicaragua, USAID contracted for a reconstruction coordinator and hired 40 additional personal services contractors. In the Dominican Republic, the mission set up a separate reconstruction team comprised mostly of contract staff. In addition to our monitoring, the Regional Inspector General’s Office in El Salvador contracted with the Defense Contract Audit Agency and local affiliates of international accounting firms to conduct concurrent audits of vulnerable programs and regular audits of many other activities. It also hired five full-time personal services contractors to oversee its financial audit activity. According to the Deputy Regional Inspector General, as of December 31, 2001, its office had conducted 165 financial audits covering $218 million in USAID-managed funds. The Regional Inspector General’s Office also conducted 14 performance audits in 6 countries and provided fraud awareness training in 7 countries to 2,141 participants. 
The USAID Inspector General gave the USAID missions generally high marks for their financial management of the disaster recovery program, noting that the small amount of questioned costs identified by its audits (about $5 million, or 2.2 percent as of December 31, 2001) demonstrated the effectiveness of ongoing oversight. Through increased oversight of this program, potential or ongoing problems were identified as project implementation was under way. In many cases, the USAID mission staff responsible for program oversight identified problems and took immediate action to keep their programs on track. In other instances, our visits, regional inspector general audits, and others with technical expertise identified concerns that USAID corrected. During a trip to northern Honduras in October 2000, we traveled a road repaired with USAID funds that had been poorly compacted. As a result, recent rains had turned the road to mud and it was nearly impossible to drive on. This road is an important access route for transporting African palm oil to the coast for export and for local commerce. The U.S. engineer responsible for technical oversight agreed with our concerns and took prompt action to ensure that the road was repaired properly. On a subsequent visit, we noted that the road had been repaired and was in excellent condition. In July 2000, during a visit to El Pataste in northern Honduras, we observed a housing project with well-constructed houses but no firm plans for potable water, despite a contractual obligation to ensure that key services were incorporated into housing communities. USAID eventually was successful in having the implementing organization negotiate a way to provide potable water. To better track and report on the progress of its housing program, USAID also developed a matrix for each housing project that specified how water and other infrastructure were to be provided as well as proof that an environmental assessment had been completed (see fig. 5). USAID provided $2.5 million to a Honduran agricultural lending cooperative for loans for small- and medium-sized farms despite a record of concerns about its management problems and financial viability. According to USAID, this was the only organization available to provide credit for smaller producers. USAID hired a management services firm to handle loan disbursements and provide technical assistance for implementing management reforms, but the problems persisted. Based on USAID’s continuing oversight and our review, USAID strongly encouraged the organization’s Board of Directors to accept major restructuring of its organizational, management, and financial framework. In January 2001, the Honduran minister of finance signed a memorandum of understanding with the lending organization outlining these changes and the likely consequences if the reforms were not made. USAID subsequently released $500,000 of the $2.5 million loan fund that it had suspended pending the signing of the memorandum. In Nicaragua, we visited numerous sites where four international private voluntary organizations were implementing USAID’s cash-for-work and food-for-work rural road rehabilitation projects. After consulting with project engineers and Corps of Engineers staff, we pointed out several deficiencies in the quality of the work, including roads not properly crowned to prevent standing water, ditches not adequately dug to facilitate drainage of water, and roadbed materials not suitable for withstanding traffic and weather. 
Based on these observations, the private voluntary organizations hired engineers to oversee road activities. We observed a noticeable improvement in USAID’s road projects on subsequent visits (see fig. 6). USAID, in an effort to further improve the quality of road repairs in Nicaragua, decided that the four nongovernmental organizations would use heavy machinery on the more difficult roads. These cash-for-work and food-for-work programs initially emphasized income generation, and USAID’s plan was that the nongovernmental organizations would only use hand labor. However, USAID and the Corps of Engineers soon realized that some roads could not be adequately repaired using only hand labor and would not withstand normal weather and traffic. USAID subsequently required the organizations to use both heavy equipment and hand labor and the road quality improved substantially. In addition, some organizations later coordinated their roadwork activities and shared equipment, resulting in lower costs. In October 2000, we visited a health post in rural Nicaragua where a private voluntary organization constructed a residence for medical personnel and rehabilitated a clinic. USAID had been told that the work was completed, the Ministry of Health had assigned medical personnel, and the post was in operation. However, when we arrived, the facility was vacant and evidently had been so for months. We questioned whether USAID should be involved in such a project, given the ministry’s lack of support. In January 2001, we returned to the clinic unannounced and found that the clinic was operating and a doctor was present and living at the residence. He had been assigned following our earlier visit. In December 2000, we visited a reforestation and agricultural project in El Salvador. With USAID disaster recovery funding, a U.S. nongovernmental organization was teaching farmers to grow cashews and lemon trees to increase their incomes and provide erosion protection. Although a well was nearby, the community leader pointed out to us that the farmers needed a pump to irrigate the new plantings during their first dry season. We saw that some trees had already died and others would soon die without irrigation. In response, USAID committed to finance a new pump. In October 2001, we returned to the community and observed that the pump had been installed and that the plantings were growing. In May 2000, we visited a school in the Dominican Republic that was undergoing repairs with disaster recovery funds. The initial project included only classroom repairs. However, the sanitation facilities had also been destroyed and we were told that students were using the nearby field. After we reported the apparent oversight, USAID responded by adding latrines to the project. New latrines were in place when we visited in August 2001. Several USAID officials stated that our oversight and monitoring not only encouraged specific improvements, but also provided a continuous deterrent effect because contractors, grantees, host government officials, and project beneficiaries were actively aware of U.S. congressional scrutiny over the program. One mission director added that our visits were used to encourage contractors and grantees to stay on track and comply with the terms of their agreements. The acting mission director at another mission noted that, although the multiple layers of auditing were sometimes overwhelming, the audit findings helped the mission manage the program and report to the Congress on its progress. 
We also monitored the pace of expenditures and the activities of most of the other U.S. departments and agencies. In June 2001, we attended meetings of the Office of Management and Budget, USAID, and the other U.S. departments and agencies. At the time, it was apparent that a few departments and agencies were not expending their funds in a timely manner and that they likely would not meet the December 31, 2001, deadline for completing their activities. In early September 2001, an official with the State Department’s Bureau of International Narcotics and Law Enforcement Affairs told us that, of the $923,600 the bureau planned to spend in the Dominican Republic, $400,000 would be reprogrammed for an assets forfeiture project in the Dominican Republic and the remaining $523,600 would be reprogrammed for a de-mining program in Central America. However, the necessary arrangements to implement those proposals had not been completed. After our inquiries, on September 30, 2001, the bureau completed the paperwork to reprogram the $400,000. In January 2002, the bureau told us that the remaining $523,600 would not be reprogrammed and that it had returned $514,242 to the U.S. Treasury.

In March 2001, the Department of Housing and Urban Development (HUD) canceled a $1.1 million housing micro-credit project in Honduras because the in-country organization tasked to implement the project did not have the capacity. When we followed up in August 2001, HUD had not finalized plans for what it would do with these funds. Subsequent to our inquiry, in September 2001, HUD modified the housing finance contract to specify how the funds were to be used for two different projects in the Dominican Republic and El Salvador. The work in the Dominican Republic began soon after the contract was modified. In El Salvador, a contract with a private lender to capitalize a revolving loan fund for a housing micro-credit program was signed in December 2001.

USAID worked with the 12 U.S. departments and agencies that implemented about $96 million in disaster recovery activities to help plan their efforts and provide administrative support. Because many of these agencies had little or no experience working in developing countries, their involvement in the program was time-consuming and burdensome for USAID staff in the beginning stages. USAID officials noted, however, that some agencies provided needed technical expertise. The other agencies generally acknowledged that it took time to incorporate their activities into USAID’s program but added that it had been a positive experience overall. USAID also coordinated with other bilateral and multilateral donors through formal consultative group meetings and informal contacts among mission staff and other donors. In contrast to many donors, USAID concentrated its activities in rural areas and smaller cities, making duplication with other donor efforts unlikely. We found no evidence that USAID activities duplicated those of other U.S. departments and agencies or other international donors. Many of the U.S. government entities involved in the disaster recovery program had little or no prior experience in working overseas. At the outset, USAID staff spent considerable time incorporating these agencies into USAID’s disaster recovery program and helping the agencies develop work plans in accordance with USAID’s development approach.
In addition, the agencies’ administrative requirements, such as office space, residences, vehicles, equipment, and supplies, had to be coordinated with the respective U.S. embassy’s overall administrative services account. According to USAID officials, coordinating with numerous other U.S. entities was demanding and time-consuming for USAID staff, particularly at the outset of the disaster recovery program when staff were involved in initial relief and reconstruction activities. Nevertheless, USAID officials generally agreed that many agencies added value once the initial coordination problems were resolved. In particular, USAID officials most often cited the four agencies with scientific, technical, and engineering expertise not available at USAID— the Corps of Engineers, the National Oceanic and Atmospheric Administration, the U.S. Department of Agriculture, and the U.S. Geological Survey—as those that added the most value to the USAID recovery program. For example, these agencies provided engineering advice on infrastructure projects and carried out a number of activities designed to mitigate the effects of future natural disasters, such as conducting watershed management studies, installing stream gauges to monitor river flooding, and providing technical assistance on early warning and prevention systems to host government staff. Officials from the other U.S. departments and agencies expressed concerns about the time it took to incorporate a relevant program into USAID’s framework and the administrative constraints of operating overseas. Officials from some agencies noted that each USAID mission and embassy operated a little differently, and some missions asked for additional paperwork that may not have been required at another mission. One agency official told us that it received varying information on the need for country clearances for travel. Another noted that the missions and USAID headquarters sometimes provided conflicting information on the work plan and reporting requirements. One agency reported that it had some difficulty coordinating with the missions. However, as summarized in appendix III, most agencies noted that working with USAID was a positive experience and that USAID had been very helpful in guiding them through the reconstruction program. USAID regularly coordinated with international financial institutions, multilateral organizations, and other bilateral donors. For the Hurricane Mitch countries, the highest level of coordination occurred at the international consultative group level. At a consultative group meeting held in May 1999 in Stockholm, Sweden, the governments of Central America and the international community developed the guiding principles and goals for reconstruction, known as the “Stockholm Declaration.” The overriding goal of reconstruction, as stated in the declaration, was to reduce the social and ecological vulnerability of the region. At subsequent meetings, donors and recipient countries, including civil society representatives, reviewed the progress toward reconstruction. Although no consultative groups were formed to assist the Dominican Republic, Haiti, and other Caribbean islands affected by Hurricane Georges, USAID similarly coordinated with its counterparts in the international donor community. At Stockholm, the international community pledged $9 billion, including the U.S. pledge of $1 billion. However, these pledges have not been fully paid. 
According to USAID officials, commitments totaling about $5.3 billion are still considered firm as of May 2002. We were unable to obtain information on the status of other donors’ actual expenditures. Based on discussions with officials of USAID, host governments, nongovernmental organizations, and other donors, USAID was among the first to expend funds and complete most of its program. In Honduras and Nicaragua, we saw evidence of the contributions of other bilateral donors, particularly bridges and other infrastructure built by the Swedish and Japanese aid agencies. Coordination among USAID and other donors was evident at the country level. In Honduras and Nicaragua, donor representatives met regularly to discuss their respective aid programs and emerging issues. In addition, USAID technical staff coordinated with their counterparts at the program and project level. For the most part, USAID targeted its activities in rural areas where other donors had little or no activity. In instances of potential duplication, we found that USAID took action to ensure that its activities added value. For example, when USAID began public health activities in a remote area of northern Nicaragua along the Honduran border, it found that the Organization of American States was conducting similar health- related activities in the same region. After several meetings and with guidance from the Nicaraguan Ministry of Health, USAID and the organization’s representatives agreed to target their activities to avoid duplication. Specifically, the Organization of American States agreed to continue its monthly training with community health agents, and USAID agreed to focus its funds on sexual and reproductive health, disaster prevention and mitigation, and other activities not covered by the organization’s project. Although coordination existed within the international community, some USAID officials stated that coordination with the host governments was less than optimal. Each Central American country developed its plan for hurricane reconstruction with assistance and support from the donor community. However, according to U.S. and other donor officials, in practice, some governments generally did not maintain up-to-date information on donor activities or prioritize their proposed projects. The conference report accompanying the legislation for the supplemental appropriation directed USAID to help the affected countries develop an institutional capacity to resist corruption. USAID’s efforts to combat corruption through assistance to audit institutions had mixed results. In Honduras, USAID provided $1.3 million to the Controller General’s Office to strengthen its capacity to audit reconstruction programs and promote enhanced awareness of the importance of vigilance over public funds. This funding for equipment, technical assistance, and training continued institutional strengthening efforts initiated before Hurricane Mitch. However, in other instances host government realities limited USAID’s overall progress in this area. The Nicaraguan government diluted the independence of its Controller General’s Office by creating a panel of five appointees representing two parties to oversee the office’s activities. USAID subsequently terminated its regular program with the office 9 months later when it became apparent that the panel would not take the advice of USAID-funded technical advisors. Similarly, USAID terminated its program with the Dominican Republic Controller General’s Office because it lacked independence. 
USAID also contributed $4.2 million to the Inter-American Development Bank to establish independent oversight units within the Honduran and Nicaraguan governments. These units are intended to oversee the operations of government ministries and independent government agencies, similar to U.S. government offices of inspectors general. In early June 2002, USAID released $1 million to the bank to contract for the consulting services for the Honduras unit. According to USAID, the unit in Honduras began operating in June 2002 and the remaining $2.2 million should be disbursed by the end of 2002. In Nicaragua, according to USAID officials, the implementation of this unit was slowed by the bank’s lengthy project approval process, the time needed to gain financial support from other donors, and the previous government’s lack of commitment. The government elected in November 2001 supports the project and proposed some modifications to strengthen local capacity building rather than merely hiring contractors to implement the unit. USAID expects the unit to begin operations in September 2002 with USAID’s $1 million covering the initial costs. USAID faced numerous challenges in initiating this large-scale disaster recovery program that affected the pace of implementation in the beginning phases. USAID had to balance the competing interests of expediting implementation of the program with ensuring that appropriate oversight and financial controls were in place and procurement actions were open and transparent. Overall, USAID does not have the “surge capacity” to quickly design and implement a large-scale infrastructure and development program with relatively short-range deadlines. The reasons are institutional, systemic, and long-standing and will require deliberate and sustained actions if USAID is to improve its ability to respond more quickly to such situations in the future. With a few exceptions, USAID began expending disaster recovery funds from the supplemental appropriation in January 2000, about 7 months after the supplemental appropriation of $621 million was approved. (See fig. 7 for a timeline illustrating USAID’s expenditures.) Some of this time was used to notify the Congress about how the supplemental funds were to be expended. In most cases, the funds were available during July and August of 1999. USAID then had to complete its contracting processes and ensure that program management and oversight were in place. During 1999, before the supplemental funds were available for use, USAID missions used $189 million in other funds for emergency relief and initial reconstruction programs, such as food-for-work activities to rebuild infrastructure in hurricane-affected areas. During this time, USAID missions were also operating with the staff resources allocated based on their regular programs, and they also had to deal with the rotation of several senior-level staff during the summer of 1999. Before Hurricane Mitch, the Honduran and Nicaraguan missions were managing annual programs of about $23 million and $30 million, respectively. The Honduran mission had recently been considerably reduced in size and it took many months to fill the positions needed to oversee the disaster recovery program. In particular, the Honduran mission did not have a permanent contracts officer—it had been sharing one with the Nicaraguan mission—until October 1999, a year after Hurricane Mitch. Other missions also shared contracts officers. 
As noted in appendix II, the missions in Honduras, Nicaragua, and the Dominican Republic said the absence of full-time contracts officers led to delays. The number of USAID direct-hire staff in general, and contracts officers in particular, has declined in recent years and USAID had difficulty finding qualified personnel to manage this large-scale emergency program on an expedited basis. This problem was compounded by the rotation of some senior-level USAID staff (for example, contracts and administrative officers) during the summer of 1999. Although USAID’s headquarters office attempted to ease the burden by providing temporary staff in the hurricane-affected countries, the missions lacked needed continuity, and, according to the Honduran mission, the lack of travel funds precluded timely assistance for some activities. The Honduran mission emphasized that the need to obtain qualified staff more quickly is one of the most important lessons learned from the hurricane reconstruction program. USAID also does not have any procedures to expedite the hiring of personal services contractors. As a result, acquiring personal services contractors with the requisite language and technical skills to manage the reconstruction program often took 6 months to more than a year. The process involves revisions in position descriptions and scopes of work, internal and external position announcements, screenings, interviews, and medical and security checks. For example, the Nicaraguan mission experienced major delays in security clearances—one person accepted a job elsewhere after waiting more than a year for a clearance. The hiring and clearance process also precluded the timely arrival of in-country staff from other U.S. departments and agencies to conduct their programs. Because contractor and other U.S. agency staff provided much of the day-to-day management of the program, these delays were burdensome for the USAID staff on board and slowed the pace of implementation.

In addition to building up staffing levels, the missions in some countries decided to implement certain accountability measures prior to program implementation. For example, before it began its host country contracting programs for major infrastructure projects, the USAID mission in Honduras advertised for and selected a U.S. engineering and project management firm to oversee the technical aspects and a third-party accounting firm to handle disbursements to the Honduran government. Although USAID missions had the authority to waive full and open competition for awarding contracts and grants, this authority was used sparingly. The Honduran mission used the waiver authority to bypass the normal requirement to advertise in the Commerce Business Daily, which saved 60 days in awarding some contracts. In many instances, missions amended existing cooperative agreements and contracts to accelerate the procurement process. However, although using sole source awards would have speeded up the award process, it may have precluded U.S. firms from being awarded contracts. The Honduran mission, for example, redesigned much of its municipal water and sanitation program to allow U.S. firms to compete, resulting in a later start date.

The involvement of numerous other U.S. government departments and agencies presented a challenge for which the USAID missions were unprepared.
Mission staff told us that, at the beginning of the program, coordinating with officials from other agencies, helping them with their work plans, and facilitating their administrative needs took considerable time away from their already busy workload. The burden eased as some agencies assigned in-country personnel, but it took considerable time for these people to arrive because their positions had to be approved by the embassy and they needed security clearances. Some U.S. entities did not assign staff in country and USAID had to coordinate temporary duty tours for these personnel as well. During our review, USAID and the other U.S. government entities provided their observations on lessons learned and some ideas for improving the delivery of disaster recovery assistance in the future. USAID and the other agencies almost unanimously agreed that the December 31, 2001, deadline was a major factor in how they planned, designed, and implemented their disaster recovery activities, and it also affected the extent to which sustainability could be built into the program. USAID missions suggested limiting the number of other U.S. government entities involved, using umbrella agreements and indefinite quantity contracts to hasten the procurement process, avoiding host country contracting, and relying on organizations that are already working with USAID in the country. Other U.S. government entities noted that they had learned much about coordinating an interagency program overseas and had come to appreciate the complexities of working in developing countries. Some noted the need for a simpler method of dealing with administrative costs while in country—one suggestion was for USAID to create one account for charging all administrative, logistical, financial, and procurement services for future emergency programs. (As previously noted, see apps. II and III for more detailed summaries of the responses from the USAID missions and other U.S. departments and agencies.) USAID officials in the overseas missions and in USAID’s Washington, D.C., headquarters generally agreed with our observations on the obstacles it faced in getting the disaster recovery program off the ground. They emphasized that the lead role that USAID was expected to perform in planning and implementing the disaster recovery program was a significant challenge. In mid-2000, USAID's Bureau for Latin America and the Caribbean drafted a “lessons learned” analysis of the disaster recovery program's start-up and offered recommendations for the systemic and procedural changes needed for a similar response in the future. It suggested options for funding flexibility, staff mobilization, program design and planning, accountability, and the role of other U.S. government agencies and the private sector. The USAID administrator subsequently formed the Emergency Response Council to conduct an agencywide review of its experiences with international emergencies. In December 2001, the council proposed several program and procedural reforms to provide more flexibility in planning and implementing activities in post-crisis or post-emergency situations. 
In particular, the memorandum proposed that USAID missions include in their development strategies and implementation instruments (such as contracts, grants, and cooperative agreements) a “crisis modifier” clause to provide resources more quickly; consider funding alternatives in the absence of supplemental appropriations, such as increased borrowing authority to use available USAID resources programmed for other activities; develop a package of procurement waivers for reconstruction activities, allowing, among other things, the purchase of certain commodities without regard to source and origin; develop strategies for addressing legislative authorities to obtain more flexibility in reconstruction programming; and develop a skills database of internal resources available for deployment on reconstruction design teams. In May 2002, the USAID administrator approved the council’s recommendations in the areas of strategic planning and programming, funding alternatives, and staffing. In addition, also in May 2002, a USAID contractor hired to independently assess the agency’s response to Hurricanes Mitch and Georges outlined numerous and sometimes detailed actions that USAID can take to improve its response to future reconstruction programs. These recommendations included options for program design, staff mobilization, procurement, interagency coordination and administrative support, and accountability. USAID and the other U.S. departments and agencies provided disaster recovery assistance that helped the affected countries recover from the devastating effects of Hurricanes Mitch and Georges. USAID’s programs and projects and those of the other U.S. government entities spanned all sectors and affected countries, helping to rebuild infrastructure, restore economic activity and access to basic services, and mitigate the effects of future disasters. Increased oversight of the disaster recovery program helped ensure that funds were spent for intended purposes and not misused. However, USAID faced numerous obstacles and challenges. Primarily, USAID did not have the flexibility to readily replace key staff—primarily contracts officers—or the ability to expeditiously hire personal services contractors to help plan for and initiate the disaster recovery program. Available USAID mission staff were also involved in providing emergency relief, initial reconstruction assistance, and continuing regular development programs. USAID missions in some countries also implemented certain measures to help ensure accountability over the assistance funds prior to program implementation. In addition, coordinating with and helping the other U.S. departments and agencies develop their programs was burdensome and time-consuming for the missions. As a result, the initial pace of implementation was slowed as USAID took steps to obtain adequate staff, incorporate oversight and accountability measures, and coordinate the activities of other U.S. government entities. USAID will likely be called upon to deliver and oversee disaster recovery assistance again as natural and man-made disasters continue to occur. The proposal for USAID to oversee and implement a rebuilding program in Afghanistan after more than two decades of war is the most immediate but not the only example. USAID’s Emergency Response Council and an independent contractor have examined USAID’s response to Hurricanes Mitch and Georges and made numerous recommendations and proposals for improving the agency’s response to disaster recovery programs. 
Our review further demonstrates that more flexible mechanisms and better interagency coordination procedures are needed to facilitate initiation of large-scale disaster recovery programs and could allow USAID to improve its response time in future similar situations while maintaining adequate oversight and accountability. We recommend that the USAID administrator expedite implementation of the Emergency Response Council’s proposals approved in May 2002 to help ensure that USAID has the flexibility and resources needed for a timely response to future disaster recovery and reconstruction requirements. To further improve USAID’s ability to respond in similar situations, we recommend that the administrator develop and implement procedures that would (1) allow USAID to quickly reassign key personnel, particularly contracts officers, in post-emergency and post-crisis situations; (2) allow missions to hire personal services contractors to augment staff on an expedited basis; and (3) facilitate coordination of efforts with other U.S. departments and agencies that may be involved in future programs. USAID provided written comments on a draft of this report, noting that the report is comprehensive and constructive (see app. V). USAID concurred with the report’s findings and conclusions on both the success of the program and the challenges and impediments faced by USAID, particularly in the initial phases. USAID stated that it has carefully considered the lessons learned from the reconstruction experience in Latin America and will continue to identify changes in its structure and functioning to make it more flexible in responding to future similar crises. USAID did not comment on our recommendations. USAID elaborated on recent steps taken to address three of the five council recommendations in the areas of strategic planning and programming, funding alternatives, and staffing. We note, however, that these efforts are just beginning and that USAID did not address the other two council recommendations on expanded procurement waivers and legislative authorities. We further note that these efforts do not address our recommendations to develop procedures to (1) expedite the reassignment of key direct-hire personnel, such as contracts officers, in post-emergency situations and (2) facilitate coordination with other U.S. departments and agencies. As our report demonstrates, these are important issues for future emergency response situations and we urge USAID to address these areas. In addition to USAID, we requested comments from the nine U.S. departments and agencies that responded to our questionnaire summarized in appendix III. The Centers for Disease Control and Prevention, the Department of Agriculture, and the Department of Housing and Urban Development suggested minor technical clarifications that we have incorporated into the report as appropriate. The other departments and agencies had no comments. We will send copies of this report to interested congressional committees as well as the Administrator, USAID; the Director, Office of Management and Budget; and the heads of other U.S. departments and agencies that participated in the disaster recovery assistance program in Latin America. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4128 or at FordJ@gao.gov. 
Other contacts and staff acknowledgments are listed in appendix VI. To determine whether the program and projects funded by USAID and the other U.S. departments and agencies addressed the intended purposes of disaster recovery and reconstruction, we conducted work at the headquarters offices of USAID and other U.S. government entities and made more than 30 trips to the countries affected by Hurricanes Mitch or Georges. In Washington, D.C., we held frequent meetings with officials of USAID’s Bureau for Latin America and the Caribbean to discuss program oversight and the status of USAID’s activities. We coordinated with USAID’s Office of the Inspector General (and its regional office in El Salvador) to minimize duplication of effort and share information. We also attended the April 1999 meeting of USAID’s mission directors from Mitch-affected countries at which they discussed their respective disaster recovery strategies and we reviewed program strategy documents. We met with officials from the other U.S. departments and agencies to discuss and document how the USAID-provided funds were being spent and the status of their programs. We coordinated with the Office of Management and Budget regarding its oversight and attended meetings it held in June 2001 with USAID and most of the other U.S. departments and agencies to review the status of their activities and the pace of their expenditures as the December 31, 2001, deadline approached. We also visited the Centers for Disease Control and Prevention in Atlanta, Georgia, and the U.S. Army Corps of Engineers in Mobile, Alabama, for the same purposes. To conduct the overseas work, we made 11 trips each to Honduras and Nicaragua, 7 trips to the Dominican Republic, 2 trips each to El Salvador and Guatemala, and 1 trip each to Costa Rica and Haiti. In each country, we reviewed USAID’s strategies, work plans, and applicable contracts, grants, and cooperative agreements and discussed with USAID and other U.S. officials how their respective programs addressed reconstruction needs. We monitored USAID’s activities in all sectors in all hurricane-affected areas, including the remote Caribbean coast regions of Honduras and Nicaragua. We also visited projects implemented by other U.S. departments and agencies. In many instances, we visited and photographed sites before the projects began, during implementation, and after completion to provide a basis for comparison. During these trips, we interviewed representatives of contractors, nongovernmental organizations, and other entities responsible for day- to-day project implementation. Our Spanish-speaking staff interviewed the intended recipients of U.S. assistance. We asked how their homes, livelihoods, and communities had been affected by the hurricanes and how the U.S.-funded projects were helping them rebuild their infrastructure, restore their livelihoods, and provide basic services. We also reviewed USAID’s procedures for oversight and financial controls and met regularly with the personal services contractors, firms, and organizations hired by USAID to provide program oversight. We followed up with USAID mission staff and the other U.S. departments and agencies to determine whether concerns raised by us and others were being addressed. To determine whether USAID coordinated with other U.S. 
departments and agencies and other international donors, we met with USAID officials in Washington, D.C., and at the overseas missions to discuss their procedures for incorporating the activities of the other agencies into their programs and coordinating with multilateral and other bilateral donors. We also met with officials of the other U.S. agencies involved in the program to get their perspectives on agency coordination. Through documentation provided to us and our field visits, we reviewed the activities of all the U.S. departments and agencies to ensure that they did not duplicate one another. For the other international donors, we attended the consultative group meetings for Honduras in February 2000 and for Nicaragua in May 2000 and reviewed the documentation from other key donor meetings. We met with officials from the Inter-American Development Bank and the World Bank and several donor countries. We discussed their respective programs and reviewed their documentation. Finally, we met with host government officials, including mayors and other local officials, to discuss their procedures for ensuring donor activities did not conflict or overlap and their views on donor coordination. To determine what USAID did to help the affected countries strengthen their institutional capability to resist corruption, we interviewed the Controllers General in the Dominican Republic, El Salvador, Guatemala, Honduras, and Nicaragua. We discussed the organization and resources of their offices and their relationship to other entities in the national government. Although USAID also funds other anticorruption and financial management efforts at host country institutions, we did not include these activities within our scope. We also met with officials from USAID and the Inter-American Development Bank in Honduras and Nicaragua to discuss the status of the financial inspection units. In addition to the above efforts, we sent a “pro forma” set of questions to six USAID missions and to the nine U.S. departments and agencies that were most closely tied to USAID’s program to obtain their views on the lessons learned in planning, implementing, coordinating, and overseeing the disaster recovery program. We conducted our work between April 1999 and May 2002 in accordance with generally accepted government auditing standards. As the disaster recovery assistance program was coming to a close, we asked USAID’s missions for their views on how the program proceeded. To help provide a framework for answering our questions, we developed a pro forma questionnaire and sent it to the USAID missions in the Dominican Republic, El Salvador, Guatemala, Haiti, Honduras, and Nicaragua. All six replied. We grouped their responses into five broad topics: (1) program planning and implementation, (2) staffing, (3) accountability, (4) coordination, and (5) lessons learned that could be applied in future disaster recovery and reconstruction situations. Our analysis of their responses shows that all the missions had similar experiences, but the three missions that received the largest amounts of funding—Honduras, Nicaragua, and the Dominican Republic—encountered some unique problems and issues. The following is a summary of their responses. All six USAID missions reported that they made certain planning and implementation decisions based on the December 31, 2001, expenditure deadline and took actions to reduce start-up time. These actions generally helped ensure that the program would be completed by the deadline. 
However, missions reported that, in some instances, nongovernmental organizations and host governments were unprepared to meet the demands of the disaster recovery program and its relatively short time frame. All six missions reached agreements with organizations that were already in these countries and with which they had previously worked or actively engaged in the mission’s regular development programs. In doing so, the missions were confident that projects would be implemented by organizations familiar with USAID and with proven capabilities and track records. However, the Nicaraguan mission entered into some agreements that called for organizations to undertake activities they had not done before. This led to some problems. For example, one nongovernmental organization agreed to rehabilitate rural roads. After some initial work, we and several others pointed out that the roads were unlikely to stand up to normal traffic and weather. The organization subsequently hired engineers and the quality of the rehabilitated roads improved substantially. Two missions—Nicaragua and Haiti—reported that they combined relatively small activities that could have been awarded separately into larger agreements. This helped streamline the start-up process because the paperwork was reduced and USAID staff only had to deal with one organization rather than several. The mission in Haiti also reported that having one grantee enhanced communication, reporting, and accountability. The Nicaraguan mission transferred $16.6 million—nearly one-fifth of its total disaster recovery funding—to USAID’s Bureau for Global Programs, Field Support, and Research. This allowed the mission to bypass the process for soliciting and reviewing proposals and negotiating agreements. The mission acknowledged, however, that while using such global agreements is faster and the program quality is high, the services provided are generally more expensive than separately funded agreements. The Honduran mission used host country contracting—a mechanism whereby USAID transfers funds to the host government, which then enters into contracts with implementing organizations—for some large infrastructure projects in an attempt to speed up implementation. However, USAID regulations for host country contracting required numerous approvals and were difficult to mesh with Honduran government regulations. The mission also said that some host country counterpart ministries were bureaucratic and inefficient. The Guatemalan mission noted that, due to the deadline, it limited its monitoring and reporting to project outputs during implementation and did not seek to measure impact as it would have for a longer-term effort. The mission added that, for its watershed rehabilitation activities, a period of more than 2 years is required to assess impact. The three USAID missions that received the largest amounts of reconstruction funding—Honduras, Nicaragua, and the Dominican Republic—reported staffing problems, primarily the absence of a contracts officer at critical times during the disaster recovery program. In contrast, the three missions receiving smaller amounts of funding—El Salvador, Guatemala, and Haiti—reported no staffing problems. Problems noted by the missions included the following. The Honduran mission reported that the absence of a permanent contracts and grants officer until October 1999 was a serious constraint due to the important role that a contracts officer plays during the life of a program, particularly during the start-up phases. 
The mission noted that a contracts officer is needed for negotiating and signing agreements and providing valuable advice during the design process on issues such as the selection of appropriate implementation mechanisms and acquisition instruments. The Nicaraguan mission reported that the absence of a contracts officer was a problem during the closeout phase. In particular, although temporary-duty contracts officers were sent from headquarters, their efforts did not prevent some activities from slowing down as the program approached the December 31, 2001, deadline. The mission in the Dominican Republic reported that the absence of a permanent contracts officer greatly affected its program. Some actions were delayed because the local-hire assistant contracts officer was also responsible for the mission’s regular program contracts and for contracting actions at the USAID mission in Jamaica. The mission in the Dominican Republic reported that the majority of staff hired for its reconstruction effort had no prior USAID experience. As a result, initial implementation slowed as new staff learned the USAID management system. The Honduran and Nicaraguan missions reported that getting qualified staff on board was a lengthy process. The Honduran mission noted that the process to hire staff was long and burdensome and that nearly all activities had delays or start-up difficulties due to staff shortfalls. The Nicaraguan mission reported that it experienced major delays in obtaining security clearances for staff it had hired. In one instance, the mission selected a person who eventually accepted a job elsewhere after waiting more than a year for a security clearance. The Honduran mission reported it did not have the flexibility to reassign existing mission staff to some reconstruction activities. In addition, the mission had difficulties in obtaining temporary staff for its education activities because USAID headquarters either did not have the staff available or it lacked travel funds. All USAID missions reported that they took certain actions to ensure accountability for disaster recovery assistance funds. Some missions cited minimizing the funds provided directly to host governments as an example. Missions noted that the extensive audit and oversight coverage required a substantial commitment from mission staff already heavily involved in planning and implementing reconstruction projects. The Dominican Republic mission reported that it limited funds provided directly to the Dominican government to speed up the implementation process and reduce potential misuse of funds. It noted that the host government required more time to plan its budget and disburse funds. For example, government-funded potable water and sanitation systems for several housing projects were delayed when contractors did not receive payment from government institutions. The USAID missions in Honduras and Nicaragua hired consulting and management firms to handle funds and provide program oversight. The Honduras mission used eight organizations to provide oversight and technical assistance over various components of its disaster recovery program. The Nicaragua mission also hired several firms to provide oversight but one firm encountered problems in doing so. Specifically, the Nicaragua mission hired a U.S. management and consulting firm to oversee about $3.6 million provided to the Nicaraguan government for more than 20 small municipal infrastructure projects. 
However, the firm’s lack of engineering expertise and experience led to substantial delays in several projects. Ultimately, about half the projects were canceled and only $2.1 million was expended. The remaining funds were reprogrammed and used for other reconstruction efforts. USAID missions noted numerous problems resulting from working with the other U.S. departments and agencies. They did not cite any problems in coordinating with other international donors. The missions in the Dominican Republic, Honduras, and Nicaragua reported that integrating the programs of the other U.S. entities was time-consuming and burdensome for USAID staff. The mission in the Dominican Republic further noted that coordination and implementation were challenging because the other departments and agencies did not have sufficient staff in country and did not spend enough time during visits. The USAID missions in Honduras and Nicaragua also reported that problems arose because some U.S. departments and agencies lacked an understanding of the complexities of working in a developing country environment and overseas missions—some agencies developed reconstruction plans without reference to local conditions. The Honduran mission further noted that providing administrative support for some agencies was particularly cumbersome and required the establishment of a separate mechanism for cost sharing, even though the program was relatively short-lived. All USAID missions reported numerous lessons learned and indicated these lessons could be applied in future disaster reconstruction situations. Some examples follow. The missions in Honduras and the Dominican Republic reported delays in getting qualified contractor staff on board and the Nicaraguan mission reported major delays in obtaining U.S. and local security clearances for its contracted staff. The Dominican Republic mission suggested that the ability to hire personal services contractors and other staff—and get them to the mission—more quickly would be a great help in rapidly designing and implementing future emergency response and disaster assistance programs. All missions emphasized that a longer implementation period would have better ensured project sustainability. In addition, the Honduran mission reported that it had avoided activities involving institutional development and other complex reforms that would have required more time to complete. It also noted that, by paying relatively little attention to policy issues and emphasizing construction, it was unable to adequately address some of the underlying issues that had prevented Honduras from being prepared to respond adequately to disasters. The mission in the Dominican Republic acknowledged that it selected some types of activities that it knew could be completed by the expenditure deadline. It did so despite recognizing that other activities might have achieved greater sustainability, especially those with more cost sharing with the host government and other implementing organizations. Reaching agreements with established organizations with an in-country permanent presence with whom USAID had previously worked was a good mechanism that generally resulted in expediting program start-up and ensuring project quality and financial accountability. According to the Honduran mission, host country contracting should be used with caution in a disaster recovery program with relatively short time frames because these projects generally took longer to be completed. 
The Guatemala mission reported that using fixed amount reimbursable contracts was a very efficient implementation mechanism through which the implementing organization was periodically reimbursed for activities it had successfully completed only after the activities were inspected and certified by USAID-selected personnel. The mission also noted that this mechanism limits the likelihood of corruption and increases transparency when concurrent audits are also conducted. The missions in Honduras and the Dominican Republic reported that certain types of agreements with other U.S. departments and agencies worked better than others. The Honduran mission noted that participating agency services agreements worked better than interagency agreements. Such agreements allowed the mission to define the terms of reference, which helped make other U.S. government programs more compatible with the broad objectives of USAID’s reconstruction program and local conditions. The Dominican Republic mission reported that agencies working under participating agency services agreements and interagency agreements were more receptive to coordination and teamwork than those agencies that had their funds directly transferred to them by USAID. The mission in Honduras reported that USAID needs to do a better job in immediately identifying staff with the skills needed for reconstruction activities, rather than relying on staff within the mission or region. The mission further suggested including a human resources specialist in the first response team who could also assist the mission in filling staffing needs. The Honduras mission reported that its authority to redirect reconstruction funds within its own mission program contributed to successful project implementation. The mission noted that, based on progress and the changing needs of certain projects, it moved funds from some activities into others and strongly stated that all missions should retain this flexibility and authority. As we did with USAID’s missions, we asked the other U.S. departments and agencies that implemented reconstruction activities for their views on how the program proceeded. We provided a pro forma questionnaire to nine U.S. departments and agencies. All nine replied. We grouped the responses into five broad topics: (1) program planning and implementation, (2) staffing, (3) accountability, (4) coordination, and (5) lessons learned that may apply to future disaster recovery efforts. In general, the agencies encountered a variety of problems and issues but noted that they gained valuable experience in implementing disaster recovery and reconstruction programs overseas. The following is a summary of their responses. The ability of the other U.S. departments and agencies to plan and implement their programs was affected by various factors, particularly the December 31, 2001, deadline. For some, the deadline had little effect on design and planning decisions but they could have used more time for training or to reinforce efforts to make their programs more sustainable. Other agencies reported that they designed their activities around the deadline. Other factors that affected project planning and implementation included administrative delays and host country conditions. NOAA reported that if the deadline had not been in place, it would have designed similar activities but would have included more training and sustainability-related activities. 
NOAA further noted that it did not have enough time at the beginning of the project to do a complete needs assessment to determine and prioritize activities. DOT also reported that, while the deadline did not affect the initial planning and designing of its program, other uncontrollable local factors, such as land acquisition and weather conditions, delayed some phases of DOT’s projects. USGS said that it could have used more time for additional feedback and to reinforce the methods and concepts of the training it provided. USDA reported that the deadline affected the design of its program technically and administratively. USDA had to identify projects that were feasible within the time frame and partners with sufficient capacity to successfully undertake the work. In addition, various administrative and bureaucratic delays, such as hiring staff to manage the program, hindered initial project implementation. HUD also reported that it would have designed its disaster mitigation activities somewhat differently if the deadline had not been imposed. Most agencies did not report staffing problems, although USDA reported that deploying permanent staff took some time and was a constraint in starting up its program. USGS, NOAA, USDA, HUD, and INL reported that they had full-time personnel in country, especially in the countries where they had larger programs. CDC, EPA, FEMA, and DOT used contractors and grantees or permanent staff that traveled on temporary duty to carry out their work. USGS had full-time staff in Honduras and Guatemala and relied on temporary duty personnel in other countries. USDA had full-time personnel in Honduras and Nicaragua—both USDA direct-hire staff and personal services contractors. USDA also used temporary duty personnel in all countries in which it worked. USAID’s Bureau for Latin America and the Caribbean and mission staff, the Office of Management and Budget, and we conducted most of the oversight and review of the other U.S. departments and agencies. DOT and FEMA were the only agencies that were audited by their respective inspectors general—both reported positive audit outcomes. Staff from USAID’s Bureau for Latin America and the Caribbean conducted most of the oversight of interagency agreements. In general, agencies reported that the oversight and reviews did not adversely affect program implementation and were, in fact, helpful. Most agencies reported adequate oversight of their programs and added that the reviews did not affect program design or the pace of implementation. An exception was FEMA, which reported that responding to inquiries, mostly from its inspector general’s office, took time away from project activities. Most agencies reported that the oversight and reviews provided valuable input. For example, USDA reported that the additional oversight by USAID and us was not overly intrusive and was welcomed by division management. USGS also noted that meetings and field visits allowed its staff to discuss expectations with auditors and comply with regulations. However, EPA noted that, although the oversight helped ensure accurate recordkeeping, it received little feedback on its performance. Eight of the nine agencies, including the six agencies that had interagency agreements with USAID, reported that they designed their program to complement and supplement USAID’s program and that USAID had provided valuable assistance in helping them formulate their strategies. 
The same agencies reported that they also received a significant amount of logistical and administrative support from USAID and several noted that their programs would not have been as successful without USAID’s programmatic and administrative assistance. However, one agency reported that its contractor encountered some problems in coordinating with USAID. Most agencies reported that they took into account USAID’s expertise and guidance in planning and implementing activities. USGS reported that its program was developed in consultation with USAID missions and that it had made significant changes in its initial design in response to suggestions from USAID. FEMA noted that it would not have been as successful without the support and guidance it received from USAID’s Bureau for Latin America and the Caribbean. However, CDC’s contractor for its laboratory equipment and training project reported that sometimes the missions’ priorities differed from those of the health ministries or the national laboratories and the contractor had to change its approach. Several agencies reported that they provided technical expertise that complemented USAID’s program. DOT implemented a port damage assessment project that focused on the entire transportation network supporting international trading ports in Honduras and Nicaragua. According to DOT, USAID did not have the capability to deal with transportation matters on a regional basis and DOT filled the void. Eight of the nine agencies reported that they received in-country logistical or program support from USAID missions. However, CDC’s laboratory equipment contractor reported that administrative coordination with USAID missions was sometimes difficult because each mission required different information for issues such as country clearances and this created confusion for the contractor. All nine departments and agencies reported that they learned lessons that could be applied in future disaster assistance programs. Most noted the constraints imposed by the December 31, 2001, deadline and suggested that future efforts include time for follow-on activities, such as training, to ensure more sustainability. Several also suggested that USAID develop an easier method of charging for administrative activities. In addition to working with USAID, several agencies noted the importance of good coordination among the various U.S. government entities providing disaster recovery assistance. Three agencies reported that the limited time for project implementation was a constraint, especially for follow-on activities and project sustainability efforts. HUD reported that the deadline did not allow enough time to complete efforts to train local entities in finding other sources of funding for continuing activities in a resettlement community in Honduras. According to NOAA, future projects should have follow-on activities to assess implementation of the technical guidance and training provided. NOAA further noted that more time and resources should have been devoted to training host country counterpart organizations. CDC had to obtain extensions for two training programs beyond the December 31, 2001, deadline to ensure that enough epidemiologists were trained and that laboratory equipment would be used and maintained properly. Three agencies reported that they would have preferred to have a different manner of dealing with administrative expenses in country. 
Each suggested that USAID create a funding citation to charge each agency for all administrative, logistical, financial, and procurement services for future emergency programs. CDC and USDA recommended that USAID keep a portion of the funding before signing interagency agreements and that USAID provide all of the logistical and administrative support for the agencies, noting that this would allow for greater transparency and less confusion. Also, FEMA was not aware that administrative expenses were additional and had not budgeted for these costs. FEMA and USDA reported that a major constraint in overall planning was that the disaster recovery funds were not available prior to signing the interagency agreement to fund diagnostic, assessment, and planning activities. According to USDA, this led to significant delays in start-up activities. USDA suggested that USAID establish a rapid assessment fund, which it could use to reimburse other U.S. government agencies for their expenses. HUD reported that local community members are invaluable in locating work sites and then determining appropriate activities. Similarly, EPA reported that it learned to identify local partners to assist with logistics and technical support. EPA, INL, and CDC reported that they learned about working with host governments. EPA noted that it was important to get the host country governments involved from the very beginning and keep them involved throughout the program to help ensure sustainability. Staff responsible for INL efforts in El Salvador noted that projects work better if based on requests from host governments rather than on ideas developed in Washington, D.C. They added that one of INL’s projects in El Salvador, which now is on track, might have avoided some initial problems if more attention had been given to country conditions. CDC noted the importance of ensuring that U.S. agency priorities do not conflict with the concerns of host governments. USGS and CDC reported that they had learned more about working with other U.S. departments and agencies. USGS conducted a multidisciplinary program both within its agency and among other U.S. entities. USGS found that working with other agencies allowed it to share data among projects and programs, leading to more efficient and cost effective use of resources. CDC noted that clear coordination and communication from the very outset was important because agencies interpreted information differently. In addition, EPA suggested that greater efforts be made to help U.S. agencies create integrated programs in the same communities. HUD reported that it learned techniques and approaches to planning construction programs in poor communities that will allow for faster and more efficient reconstruction programs in the future. The emergency supplemental appropriated $621 million to USAID and it was the primary agency responsible for carrying out the U.S. disaster recovery assistance program. In turn, USAID transferred almost $96 million to 12 other U.S. departments and agencies that, for the most part, planned and implemented their own programs. USAID transferred funds in two ways as authorized by section 632 of the Foreign Assistance Act of 1961, as amended. Under 632(a), USAID has minimal responsibility for approving how the funds will be used, and program monitoring and evaluation is the responsibility of the receiving department or agency. 
Under 632(b), USAID and the receiving department or agency negotiate and agree on how the funds will be used, and USAID is responsible for program monitoring and evaluation; such transfers are implemented through interagency agreements. Table 2 shows the status of the disaster recovery funds through December 31, 2001, by the department or agency implementing the activities. In addition to those named above, David Artadi, Lyric Clark, John DeForge, Francisco Enriquez, E. Jeanette Espinola, Phillip Herr, José R. Peña, and George Taylor made key contributions to this report. Janey Cohen, Martin de Alteriis, Mark Dowling, Kathryn Hartsburg, and Jim Michels also provided technical assistance. In the fall of 1998, when hurricanes Mitch and Georges struck Central America and the Caribbean, the United States and other donors responded by providing emergency relief, such as food, water, medical supplies, and temporary shelter. Also, in May 1999, Congress passed emergency supplemental legislation that provided $621 million for a disaster recovery and reconstruction fund for the affected countries, as well as reimbursement for costs incurred by U.S. departments and agencies during the immediate relief phase. The U.S. Agency for International Development (USAID) and other departments and agencies made significant achievements in helping the affected countries rebuild their infrastructure and recover from the hurricanes. USAID and others used the disaster recovery assistance to bring about economic recovery, improve public health and access to education, provide permanent housing for displaced families, and improve disaster mitigation and preparedness. To achieve these broad objectives, USAID funded infrastructure construction and repair, technical assistance and training, loans for farmers and small businesses, and some equipment. In addition to its normal controls, USAID ensured that the funds were spent for intended purposes. USAID coordinated its activities with 12 other departments and agencies that were allocated $96 million for disaster recovery efforts.
USAID also coordinated with other bilateral and multilateral donors through formal consultative group meetings and informal contacts among mission staff and other donors. USAID attempted to strengthen the capacity of host government audit institutions as a means to resist corruption. However, USAID was not successful in this area, mostly due to country conditions. USAID did not begin expending the supplemental funds until January 2000, 7 months after the appropriation was enacted. Some of the factors that added time included arranging for additional program staff and contractor support; ensuring that financial controls and other oversight measures were in place; coordinating with and planning for the involvement of numerous other departments and agencies; and providing for U.S. contractors and other organizations to compete for most of the contracts, grants, and cooperative agreements that were awarded.
In the aftermath of the terrorist attacks of September 11, 2001, responding to potential and real threats to homeland security became one of the federal government’s most significant challenges. To address this challenge, the Congress passed, and the President signed, the Homeland Security Act of 2002, which merged 22 federal agencies and organizations into DHS, making it the department with the third largest budget in the federal government, about $40 billion for fiscal year 2005. In January 2003, we designated implementation and transformation of the new Department of Homeland Security as high risk based on three factors: (1) the implementation and transformation of DHS is an enormous undertaking that will take time to achieve in an effective and efficient manner, (2) components to be merged into DHS already face a wide array of existing challenges, and (3) failure to effectively carry out its mission would potentially expose the nation to very serious consequences. As we previously reported, one of the department’s key challenges is integrating the components’ respective financial management systems, many of which were outdated and had limited functionality, as well as addressing weaknesses from the inherited components. The Homeland Security Act of 2002 states that DHS’s missions include, among other things, preventing terrorist attacks within the United States, reducing America’s vulnerability to terrorism, minimizing subsequent damage, and assisting in the recovery from attacks that do occur. To help accomplish this integrated homeland security mission, the various mission areas and associated programs of 22 federal agencies were merged, in whole or in part, into DHS. The department’s organizational structure consists of eight major components—the U.S. Coast Guard (Coast Guard), the U.S. Secret Service, the Bureau of Citizenship and Immigration Services (CIS), and five directorates, each of which is headed by an Under Secretary: Information Analysis and Infrastructure Protection, Science and Technology, Border and Transportation Security, Emergency Preparedness and Response, and Management. Within the Management Directorate is DHS’s Office of the Chief Financial Officer (OCFO), which is assigned primary responsibility for functions, such as budget, finance and accounting, strategic planning and evaluation, and financial systems for the department. OCFO is also charged with ongoing integration of these functions within the department. The CFO Act requires the agency’s CFO to develop and maintain an integrated accounting and financial management system that provides for complete, reliable, and timely financial information that facilitates the systematic measurement of performance at the agency, the development and reporting of cost information, and the integration of accounting and budget information. The act also requires that the agency’s CFO be qualified, presidentially appointed, approved by the Senate, and report to the head of the agency. FFMIA requires that CFO Act agencies implement and maintain financial management systems that substantially comply with federal financial management systems requirements, applicable accounting standards, and the U.S. Government Standard General Ledger at the transaction level. It also requires auditors to report whether the agency’s financial management systems substantially comply with the three requirements of FFMIA. 
Although DHS is not required to comply with the provisions of the CFO Act or FFMIA, the Accountability of Tax Dollars Act of 2002 requires the department to prepare and have audited financial statements annually. In identifying improved financial performance as one of its five governmentwide initiatives, the President's Management Agenda recognized that an unqualified financial audit opinion is a basic prescription for any well-managed organization and that without sound internal control and accurate and timely financial information, it is not possible to accomplish the agenda and secure the best performance and highest measure of accountability for the American people. In addition, the Joint Financial Management Improvement Program (JFMIP) Principals have defined certain measures, beyond receiving an unqualified financial statement opinion, for achieving financial management success. These additional measures include being able to routinely provide timely, accurate, and useful financial and performance information; having neither material internal control weaknesses nor material noncompliance with laws and regulations; and meeting the requirements of FFMIA. DHS obtained a consolidated financial audit for the 7-month period from March 1, 2003, to September 30, 2003, and received a qualified opinion from its independent auditors on its consolidated balance sheet as of September 30, 2003, and the related statement of custodial activity for the 7 months ending September 30, 2003. The auditors were unable to opine on the consolidated statements of net costs and changes in net position, the combined statement of budgetary resources, and the consolidated statement of financing. The auditors reported 14 reportable conditions on internal control, 7 of which were considered to be material weaknesses. When DHS was created in March 2003 by merging 22 diverse agencies, there were many known financial management weaknesses and vulnerabilities in the inherited agencies. For 5 of the agencies that transferred to DHS—the Customs Service (Customs), the Transportation Security Administration (TSA), the Immigration and Naturalization Service (INS), the Federal Emergency Management Agency (FEMA), and the Federal Law Enforcement Training Center (FLETC)—auditors had reported 30 reportable conditions, 18 of which were considered material internal control weaknesses. Further, of the four component agencies—Customs, TSA, INS, and FEMA—that had previously been subject to stand-alone audits, all four agencies' systems were found not to be in substantial compliance with the requirements of FFMIA. Most of the 22 components that transferred to DHS had not been subjected to significant financial statement audit scrutiny prior to their transfer, so the extent to which additional significant internal control deficiencies existed was unknown. For example, conditions at the Coast Guard have surfaced because of its greater relative size and increased audit scrutiny at DHS as compared to its former legacy agency, the Department of Transportation (DOT). As part of DOT's financial statement audits, no reported weaknesses had been specifically attributed to the Coast Guard.
However, newly identified weaknesses related to the Coast Guard were one of the main reasons that independent auditors issued a qualified opinion on DHS's consolidated balance sheet and were unable to provide an opinion on other financial statements for the 7 months ending September 30, 2003. For fiscal year 2002 and prior to its transfer to DHS, Customs' auditors reported nine internal control weaknesses, including weaknesses in its ability to monitor the effectiveness of its internal controls over entry duties and taxes, controls over drawback claims, security issues in information technology (IT) systems, and issues concerning the strength of its core financial systems. These weaknesses can result in inaccurate reporting of certain material elements of Customs' financial situation, system security weaknesses that could leave Customs' information vulnerable to unauthorized access, and the necessity of extensive manual procedures and analyses to process routine transactions. Finally, these weaknesses contributed to the inability of Customs' systems to substantially comply with the requirements of FFMIA. Although TSA is a relatively new agency formed after the September 11, 2001, terrorist attacks, its auditors reported six internal control weaknesses, including weaknesses in the hiring of qualified personnel, financial reporting and systems, property accounting and financial reporting, financial management policies, administration of screener contracts, and maintenance of adequate information in its personnel files. These weaknesses can result in uncontrolled spending of taxpayer dollars, misplaced or unaccounted-for property, and challenges in producing financial statements. In its first-year audit, for the period ending September 30, 2002, TSA obtained an unqualified audit opinion on its financial statements. However, TSA's systems did not substantially comply with the requirements of FFMIA. INS's auditors reported four internal control weaknesses as of February 28, 2003, including weaknesses in the functionality of its financial systems; recording accounts payable and related accruals; financial reporting; and controls over its financial management system. Weaknesses such as these have existed for several years and contributed to the continuing inability of INS's systems to substantially comply with the requirements of FFMIA. Although the weaknesses did not interfere with the agency's ability to obtain an unqualified opinion on its financial statement audit, they did result in the need for extensive manual effort to prepare reliable financial information and record basic financial transactions to aid management in decision making. FEMA's auditors reported seven internal control weaknesses for fiscal year 2002, including weaknesses in information security controls over its financial systems environment; financial system functionality; the financial reporting process; real and personal property system processes; account reconciliation processes; accounts receivable processes; and the lack of a process to evaluate the accuracy of a new claims estimation methodology. These weaknesses resulted in the need for extensive manual effort to compile financial information because FEMA's financial systems were unable to perform certain basic accounting functions efficiently. Further, FEMA's systems were unable to accurately track basic accounting information, such as real and personal property and accounts receivable.
Many of these weaknesses specifically contributed to FEMA's systems' failure to substantially comply with the requirements of FFMIA. Finally, FLETC's auditors reported four internal control weaknesses for fiscal year 2002. These weaknesses resulted from FLETC not having adequate policies and procedures in place to ensure that funds obligated were proper and that costs for construction in progress were recorded properly. Further, auditors found that FLETC was not taking the steps necessary to be in compliance with certain Office of Management and Budget requirements. Many of these weaknesses led to the inability of FLETC's systems to substantially comply with the requirements of FFMIA. DHS has made some progress in addressing the internal control weaknesses it inherited from component agencies. Nine of the 30 internal control weaknesses identified in prior component financial statement audits had been closed as of September 30, 2003. The remaining 21 issues represent continuing weaknesses that have been reported in DHS's first Performance and Accountability Report. Nine of these were combined and reported as 3 material weaknesses, while 5 were reported as reportable conditions. The department's independent auditors classified the remaining 7 weaknesses as lower-level observations and recommendations. Table 1 summarizes the status of the 30 weaknesses DHS inherited from component agencies as of September 30, 2003. Auditors reported 6 additional weaknesses as of September 30, 2003, bringing the total number of reportable conditions for DHS to 14 for fiscal year 2003, 7 of which were considered to be material weaknesses. A description of these weaknesses can be found in appendix II. As mentioned previously, several of the departmentwide weaknesses resulted from combining previously identified weaknesses or reclassifying them, rather than from resolving the underlying internal control weaknesses. For example, in fiscal year 2003, DHS's auditors reported a departmentwide material weakness related to financial systems functionality and technology. This weakness resulted from combining 7 of the inherited weaknesses—3 from Customs, 2 from FEMA, 1 from INS, and 1 from TSA. Appendix III provides detailed information on the status of each of the 30 inherited weaknesses, including how they were reported in DHS's Performance and Accountability Report. DHS's component agencies took various steps to resolve nine of the previously identified weaknesses. For example, Customs had a previously identified weakness related to the effectiveness of its internal controls over accurate reporting of entry duties and taxes. This weakness was resolved by reinstituting a program that Customs had in place prior to the terrorist attacks of September 11, 2001, which allows for more accurate reporting of these taxes and duties. Another weakness DHS inherited relates to FEMA's inability to identify and record certain accounts receivable in a timely manner. FEMA's accounts receivable processes were strengthened to ensure that accounts receivable are determined and recorded on a timely basis. To resolve several weaknesses at FLETC and TSA, various policies and procedures were implemented at these components to ensure that financial information was recorded and properly approved. Further, TSA has hired additional staff, thereby resolving its weakness of not having a sufficient number of qualified accounting personnel.
In addition to the 7 material weaknesses and 7 reportable conditions reported in DHS's 2003 financial statement audit, DHS reported 12 additional weaknesses that affect the department's full compliance with certain objectives of 31 U.S.C. 3512(c), (d) (commonly known as the Federal Managers' Financial Integrity Act of 1982 (FMFIA)). FMFIA requires that management ensure that it has an organizational structure that supports the planning, directing, and controlling of operations to meet agency objectives; clearly defines key areas of authority and responsibility; and provides for appropriate lines of reporting. The standards also define internal control as a key component necessary to ensure that financial reporting information is reliable. Examples of the FMFIA weaknesses reported by DHS included deficient controls over laws and regulations regarding the border entry process, nonconformance related to system security, and lack of oversight and administration of major contracts at TSA. Of the seven departmentwide material weaknesses reported by DHS's auditors for fiscal year 2003, four were newly identified and contributed to the auditors' inability to render an opinion on all of DHS's financial statements. Newly identified weaknesses included the lack of procedures at DHS to verify the accuracy and completeness of balances transferred on March 1, 2003, and an insufficient number of qualified financial management personnel employed by the department. DHS's auditors also found significant deficiencies at the Coast Guard and Secret Service, preventing the auditors from being able to express an opinion on certain financial statements. In addition to the internal control weaknesses cited in DHS's 2003 financial statement audit, there were other weaknesses that, while not material to DHS on a departmentwide basis, are still important and need to be addressed. FEMA, Customs, and TSA each had weaknesses at the time of their transfer to DHS. However, in the 2003 audit report, these weaknesses were classified as observations and recommendations, a much less serious classification. Lower classification within DHS does not mean that the issues are now somehow less severe; it merely reflects the relative materiality of a component within DHS. Considered against the operations or assets of the stand-alone entity, these issues by themselves were relatively more significant than when considered in the context of the much larger consolidated operations of DHS as a whole. Resolving all previously reported internal control weaknesses, regardless of their current designation at DHS, is key to DHS's ability to produce relevant and reliable financial information. DHS's CFO testified that the department is committed to resolving the remaining weaknesses and has developed a plan to do so. According to the CFO's plans, corrective actions will be developed by each applicable bureau or directorate and submitted to the OCFO. Currently, DHS's OCFO has compiled a summary document with the corrective action plans as submitted by the applicable bureau or directorate. According to this document, corrective action plans of varying levels of detail are in place to address 12 of the 14 internal control weaknesses, some of which are scheduled to be completed by the end of fiscal year 2004. However, 2 material internal control weaknesses—Financial Systems Functionality and Technology and Transfer of Funds, Assets, and Liabilities to DHS—do not currently have any planned corrective actions in place.
Along with developing corrective action plans, the CFO testified that DHS plans to implement a departmentwide tracking system to monitor the status of corrective actions. DHS has begun working with a contractor to design and implement a tracking system for outstanding weaknesses identified during the department's independent financial audits. While this system is still being developed by the OCFO, with assistance from contractors, it is not yet fully functional and does not include information on all reported weaknesses. Until such time that it does, it will provide limited oversight and information on the status of corrective actions to address weaknesses at DHS. While progress has been made to address the known material weaknesses, much work still remains. Follow-through with planned corrective actions is paramount. The support of top officials at the department will be key in ensuring that the necessary resources are available to address the weaknesses and to ensure that they are resolved in a timely manner. DHS intends to acquire and deploy an integrated financial enterprise solution and reports that it has reduced the number of its legacy financial systems. DHS has established the Resource Management Transformation Office (RMTO) within the Management Directorate to manage its financial enterprise solution project. However, the acquisition is in the early stages, and continued focus and follow-through, among other things, will be necessary for it to be successful. RMTO has termed its financial enterprise solution project "electronically Managing enterprise resources for government effectiveness and efficiency" (eMerge), which, according to the RMTO's Strategic Framework, "establishes the strategic direction for migration, modernization, and integration of DHS financial, accounting, procurement, personnel, asset management and travel systems, processes, and policies." DHS expects the acquisition and implementation of the financial enterprise solution to take place over a 3-year period and cost approximately $146 million. According to the strategic framework DHS provided to us, the development of an integrated financial enterprise solution will be accomplished in three phases. Phase I includes defining, acquiring, and testing the planned solution. Phase II involves implementing the solution throughout DHS, and Phase III is ongoing maintenance of the solution. According to DHS, vendor selection for eMerge was originally expected to occur in April or May of 2004. However, vendor proposal requests were issued in June 2004, and selection is to be completed in July 2004. Concurrent with eMerge, DHS has issued a request for quotation (RFQ) for an interim project—the Business Automation Initiative—to be developed by contractors during 2004. The RFQ requested system proposals to automate purchase requests for the department and to streamline the employee entry/exit process. Another interim initiative was considered by the department to integrate data mining and warehousing, improve grants visibility (beginning with first responder grants), and streamline financial statement consolidation. However, instead of pursuing this interim solution, DHS plans to include it in the requirements of the eMerge initiative and obtained approval of the requirements from various high-level DHS officials. Additionally, a request for proposal (RFP) was issued by DHS for the eMerge initiative. However, these documents were not provided to us until after we completed our fieldwork.
Thus, we are not providing description, analysis, or evaluation of such information in this report, and we are unable to determine if DHS, through the RMTO, is developing a financial enterprise solution that will be in alignment with departmentwide information technology plans, many of which are still under development. Nevertheless, we have found that similar projects have proven challenging and costly for other federal agencies. For example, we have reported on the efforts of the National Aeronautics and Space Administration (NASA) and the District of Columbia Courts (DC Courts) to acquire new information systems. NASA is on its third attempt in 12 years to modernize its financial management processes and systems and has spent about $180 million on its two prior failed efforts. DC Courts began its system acquisition in 1998 and has struggled in its implementation. One of the key impediments to the success of integration efforts at NASA was the failure to involve key stakeholders in the implementation or evaluation of system improvements. As a result, new systems failed to meet the needs of key stakeholders. DC Courts struggled in developing requirements that contained the necessary specificity to ensure the system developed would meet its users' needs. To avoid similar problems, it is important, among other things, that DHS ensure commitment and extensive involvement from top management and users in the eMerge project. DHS has also been reducing the number of its financial service providers. Although we did not perform audit procedures to determine the impact of these reductions, reducing service providers prematurely, without considering a provider's reliability, or without an overall consolidation plan could be harmful if it interferes with the enterprise approach or causes significant short-term inefficiencies for agencies that must quickly adapt to other systems. It is too early to tell whether DHS's planned financial enterprise solution will be able to meet the requirements of relevant financial management improvement laws: those currently applicable to DHS (such as FMFIA) as well as those, not currently applicable, that are the subject of pending legislation. DHS is currently subject to most financial management improvement laws except for the CFO Act and FFMIA. The goals of the CFO Act and FFMIA are to provide the Congress and agency management with reliable financial information for managing and making day-to-day decisions and to improve financial management systems and controls to properly safeguard the government's assets. Further, the CFO Act requires certain agencies to have a qualified, presidentially appointed, Senate-confirmed CFO who reports to the head of the agency. FFMIA requires major departments and agencies covered by the CFO Act to implement and maintain financial management systems that comply substantially with (1) federal financial management systems requirements, (2) applicable federal accounting standards, and (3) the U.S. Government Standard General Ledger at the transaction level. Although DHS is not currently subject to FFMIA, its auditors disclosed deficiencies in its financial management information systems, the application of accounting standards, and the recording of financial transactions, all of which relate to the requirements of FFMIA. Based on these weaknesses, it is likely that DHS's systems would not have been in substantial compliance with the requirements of FFMIA. Table 3 lists relevant financial management laws and describes their relationship to DHS.
DHS is currently required to have annual audits under the Accountability of Tax Dollars Act and to report on its internal controls under FMFIA. Although DHS's CFO has testified that DHS complies with the audit provisions of the CFO Act and will continue to do so, we believe DHS should be a CFO Act agency and be subject to the requirements of FFMIA. DHS should not be the only cabinet-level department not covered by what is the cornerstone for pursuing and achieving the requisite financial management systems and capabilities in the federal government. Given the early stage of implementation, it is too early to tell whether DHS's planned financial enterprise solution will meet the requirements of the financial management laws it is not currently subject to. While DHS systems must meet the requirements of the laws they are currently subject to, it is also important that DHS be proactive and incorporate the requirements of the CFO Act and FFMIA. It would certainly make good business sense to do so given DHS's size and mission. DHS has implemented a commercial off-the-shelf tool called the Dynamic Object-Oriented Requirements System (DOORS) to track the requirements that various laws, regulations, and circulars place on the development of an integrated financial system. DOORS is intended to be DHS's repository of all applicable system, process, technological, data, or other requirements. DHS estimated that several thousand compliance requirements will be tracked using DOORS once analysis is completed. After the repository is complete, requirements reports are to be printed directly from DOORS and attached to future RFPs to ensure that contractors are aware of the legislative requirements of the systems to be developed. A system to record, track, and link all legislative requirements while a financial management system is being developed is important. Also important is that DHS be statutorily required to comply with the CFO Act and FFMIA and that the systems DHS acquires are capable of meeting the requirements of those laws, as well as the laws currently applicable. Meeting these financial management improvement requirements will help produce timely and useful financial and business information. Since its inception in March 2003, DHS has been faced with many challenges, including how to integrate its financial management processes and systems. Steps have been taken to address the 30 internal control weaknesses it inherited from its component agencies. However, to ensure financial accountability and establish an effective financial environment, DHS must address all outstanding inherited weaknesses, as well as the newly identified department-level weaknesses. Through the eMerge initiative, DHS has plans to integrate and consolidate its financial and business systems. But without such things as continued active oversight from top-level management and systematic approaches to this integration, DHS could find itself in the same position as other federal departments—producing an ineffective and costly financial management system that does not provide the information needed by management or meet the requirements of financial management laws. Finally, we believe that it is of critical importance that DHS be statutorily required to comply with the important financial management reforms legislated in the CFO Act and FFMIA.
The financial management improvements of FFMIA build on the CFO Act by emphasizing the need for agencies to have systems that can generate reliable, useful, and timely information with which to make fully informed decisions and to ensure accountability on an ongoing basis. This issue is still of foremost importance, especially as DHS continues its financial management system integration and development. In view of the size of DHS and the importance of the CFO Act and FFMIA in improving financial management, and their applicability to all other cabinet departments, the Congress may wish to consider the following action: enact legislation to designate DHS as a CFO Act agency. We are making eight recommendations for executive action at DHS that will improve financial management at the department. Specifically, we recommend that the Secretary of Homeland Security direct the Under Secretary for Management to, among other things, continue to maintain strong involvement of key stakeholders and top management throughout the acquisition and implementation of the eMerge² project and maintain a tracking system of all auditor-identified and management-identified control weaknesses. We obtained written comments on a draft of this report from DHS's Chief Financial Officer; the comments are reprinted in appendix IV. DHS generally agreed with the overall findings and recommendations. However, in response to our recommendation to incorporate all internal control weaknesses in the tracking system DHS is currently developing, DHS felt the recommendation was too broad and suggested that we change the language to reflect tracking of all auditor-identified and management-identified internal control weaknesses. The original intent of our recommendation was to encourage DHS to track and resolve all auditor-reported material weaknesses, reportable conditions, and observations and recommendations, similar to those discussed throughout this report. We fully support DHS including all management-identified control weaknesses as well, and we have updated our recommendation accordingly. Additionally, DHS commented on its commitment to full adherence to the CFO Act and FFMIA. We applaud the current leadership at DHS for voluntarily complying with some audit provisions of the CFO Act; however, we continue to strongly support passage of legislation that would statutorily make DHS a CFO Act agency and thus guarantee future requirements to adhere to important financial management legislation. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies of this report to interested congressional committees and subcommittees. We will also make copies available to others on request. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report or wish to discuss it further, please contact me at (202) 512-6906 or Casey Keplinger, Assistant Director, at (202) 512-9323. In addition, Heather Dunahoo and Scott Wrightson made key contributions to this report.
To identify the existing weaknesses in the Department of Homeland Security's (DHS) component agencies' financial management systems, we reviewed relevant DHS Office of Inspector General (OIG) reports and our January 2003 report on major management challenges at DHS and looked at how such challenges are being addressed. We also reviewed DHS's Performance and Accountability Report for the 7 months ending September 30, 2003. We reviewed prior-period component agency annual financial statement audit reports when available; the Immigration and Naturalization Service's (INS) financial statement audit report for the 5 months ending February 28, 2003; and Performance and Accountability Reports for the Federal Emergency Management Agency (FEMA) and the Departments of Transportation, Justice, and Treasury. We reviewed testimony of DHS's current and former Chief Financial Officer (CFO) and DHS's OIG reports related to financial management at the department. Finally, we interviewed officials from the OIG and the Office of the Chief Financial Officer (OCFO). To determine whether DHS was addressing the problems that existed in the financial management systems DHS acquired from its component agencies, we met with officials from the OCFO's Office of Financial Management and OIG staff. In addition to items already mentioned, we reviewed planned corrective actions developed by the department to address its fiscal years 2002 and 2003 material weaknesses and reportable conditions. We also reviewed testimony of DHS's CFO related to this issue. Further, we conducted a walk-through to review the system DHS is developing to track planned corrective actions. To determine what plans DHS has to integrate its financial management systems, we met with the Director of the Resource Management Transformation Office (RMTO) and other staff in this office. We also reviewed testimony of DHS's current and former CFO and DHS's OIG related to financial management at the department. We reviewed documentation detailing the reduction of financial service providers, but we did not complete audit procedures to determine if these reductions were positive or negative for the department. Finally, we reviewed the RMTO's strategic framework. However, substantial documentation related to the eMerge² initiative was not provided to us until after we completed our fieldwork. Thus, we did not include analysis or evaluation of such information in this report. To determine whether the planned systems that DHS is developing will be able to meet the requirements of relevant financial management improvement laws, we reviewed relevant laws, regulations, and guidance related to financial management, financial reporting, systems implementation, and requirements. We also interviewed the Director of the RMTO and other officials. Further, we reviewed testimony relevant to this issue by DHS's current and former CFO and DHS's OIG. We did not review system requirements or other recently developed plans because these were completed and provided to us after our fieldwork ended. We requested comments on this report from the Secretary of Homeland Security or his designee. Written comments were received from the department's Chief Financial Officer and are reprinted in appendix IV. We performed our review from October 2003 through June 2004 in Washington, D.C., in accordance with U.S. generally accepted government auditing standards.
Financial management and personnel: DHS's OCFO needs to establish financial reporting roles and responsibilities, assess critical needs, and establish standard operating procedures (SOP) for the department. These conditions were not unexpected for a newly created organization, especially one as large and complex as DHS. The Coast Guard and the Strategic National Stockpile had weaknesses in financial oversight that have led to reporting problems.
Financial reporting: Key controls to ensure reporting integrity were not in place, and inefficiencies made the process more error prone. At the Coast Guard, the financial reporting process was complex and labor-intensive. Several DHS bureaus lacked clearly documented procedures, making them vulnerable if key people leave the organization.
Financial systems functionality and technology: The auditors found weaknesses across DHS in its entitywide security program management and in controls over system access, application software development, system software, segregation of duties, and service continuity. Many bureau systems lacked certain functionality to support the financial reporting requirements.
Property, plant, and equipment (PP&E): The Coast Guard was unable to support the recorded value of $2.9 billion in PP&E due to insufficient documentation provided prior to the completion of audit procedures, including documentation to support its estimation methodology. The Transportation Security Administration (TSA) lacked a comprehensive property management system and adequate policies and procedures to ensure the accuracy of its PP&E records.
Operating materials and supplies (OM&S): Internal controls over physical counts of OM&S were not effective at the Coast Guard. As a result, the auditors were unable to verify the recorded value of $497 million in OM&S. The Coast Guard also had not recently reviewed its OM&S capitalization policy, leading to a material adjustment to its records when an analysis was performed.
Actuarial liabilities: The Secret Service did not record the pension liability for certain of its employees and retirees, and when corrected, the auditors had insufficient time to audit the amount recorded. The Coast Guard also was unable to provide, prior to the completion of audit procedures, sufficient documentation to support the recorded value of $201 million in post-service benefit liabilities.
Transfers of funds, assets, and liabilities to DHS: DHS lacked controls to verify that monthly financial reports and transferred balances from legacy agencies were accurate and complete.
Drawback claims on duties, taxes, and fees: The Bureau of Customs and Border Protection's (CBP) accounting system lacked automated controls to detect and prevent excessive drawback claims and payments.
Import entry in-bond: CBP did not have a reliable process of monitoring the movement of "in-bond" shipments—i.e., merchandise traveling through the U.S. that is not subject to duties, taxes, and fees until it reaches a port of destination. CBP lacked an effective compliance measurement program to compute an estimate of underpayment of related duties, taxes, and fees.
Acceptance and adjudication of immigration and naturalization applications: The Bureau of Citizenship and Immigration Services' (CIS) process for tracking and reporting the status of applications and related information was inconsistent and inefficient.
Also, CIS did not perform cycle counts of its work in process that would facilitate the accurate calculation of deferred revenue and reporting of related operational information.
Fund balance with Treasury (FBWT): The Coast Guard did not perform required reconciliations for FBWT accounts and lacked written standard operating procedures (SOP) to guide the process, primarily as the result of a new financial system that substantially increased the number of reconciling differences.
Intragovernmental balances: Several large DHS bureaus had not developed and adopted effective SOPs or established systems to track, confirm, and reconcile intragovernmental balances and transactions with their trading partners.
Strategic National Stockpile (SNS): The SNS accounting process was fragmented and disconnected, largely due to operational challenges caused by the laws governing SNS. A $485 million upward adjustment had to be made to value SNS properly in DHS's records.
Accounts payable and undelivered orders: CIS, the Bureau of Immigration and Customs Enforcement (ICE), TSA, and the Coast Guard had weaknesses in their processes for accruing accounts payable or reporting accurate balances for undelivered orders.
[Table: agency and condition reported in 2002. For each component agency (for example, the Immigration and Naturalization Service as of February 28, 2003, and the Federal Law Enforcement Training Center), the table identifies whether each area was reported as a material weakness, a reportable condition, or an observation and recommendation to management, covering areas such as financial reporting; financial systems functionality and technology; property, plant, and equipment; drawback claims on duties, taxes, and fees; in-bond movement of imported goods; intragovernmental balances; and laws and regulations (OMB Circulars A-127 and A-11).]
When the Department of Homeland Security (DHS) began operations in March 2003, it faced the daunting task of bringing together 22 diverse agencies. This transformation poses significant management and leadership challenges, including integrating a myriad of redundant financial management systems and addressing the existing weaknesses in the inherited components, as well as newly identified weaknesses. This review was performed to (1) identify the financial management system weaknesses DHS inherited from the 22 component agencies, (2) assess DHS's progress in addressing those weaknesses, (3) identify plans DHS has to integrate its financial management systems, and (4) review whether the planned systems DHS is developing will meet the requirements of relevant financial management improvement laws. DHS inherited 30 reportable internal control weaknesses identified in prior component financial audits, 18 of them so severe they were considered material weaknesses. These weaknesses include insufficient internal controls, system security deficiencies, and incomplete policies and procedures necessary to compile basic financial information. Of the four inherited component agencies that had previously been subject to stand-alone audits, all four agencies' systems were found not to be in substantial compliance with the requirements of the Federal Financial Management Improvement Act (FFMIA), an indicator of whether a federal entity can produce reliable data for management and reporting purposes. Component agencies took varied actions to resolve 9 of the 30 inherited internal control weaknesses. The remaining 21 weaknesses were combined and reported as material weaknesses or reportable conditions in DHS's first Performance and Accountability Report, or were reclassified by independent auditors as lower-level observations and recommendations. Combining or reclassifying weaknesses does not resolve the underlying internal control weakness or mean that the challenges to address them are less than they would have been prior to the establishment of DHS. DHS is in the early stages of acquiring a financial enterprise solution to consolidate and integrate its business functions. Initiated in August 2003, the financial enterprise solution is expected to be fully deployed and operational in 2006 at an estimated cost of $146 million. Other agencies with less diverse operations have failed in attempts to develop financial management systems. Success will depend on a number of variables, including having an effective strategic management framework, sustained management oversight, and user acceptance of the efforts. It is too early to tell whether DHS's planned financial enterprise solution will be able to meet the requirements of relevant financial management improvement laws. As of June 2004, DHS is not subject to the CFO Act and thus FFMIA, which is applicable only to agencies subject to the CFO Act. While DHS is currently not required to report on compliance with FFMIA, its auditors disclosed systems deficiencies that would have likely resulted in noncompliance issues.
The Postal Service, an independent establishment of the executive branch of the U.S. government, is the largest federal civilian agency, consisting of more than 38,000 post offices, branches, and stations and 350 major mail-processing and distribution facilities. As part of its strategy for better managing its procurement of goods and services, the Postal Service has centralized the procurement of commodities that were previously decentralized. For example, all office supply procurements are now managed by the Office Products and Utilities Category Management Center in Windsor, Connecticut, which is responsible for administering the national contract. Previously, office supply procurement was decentralized, with each area managing its own procurements. To demonstrate its commitment to reaching small, minority-, and woman-owned (SMW) businesses, the Postal Service has developed a 5-year supplier diversity plan. The plan focuses on maintaining a strong supplier base that includes SMW businesses. While it does not set specific dollar goals, the plan is intended to ensure that the Postal Service spends an increasing amount of its procurement dollars on goods and services from diverse businesses through fiscal year 2003. To monitor its progress, the Postal Service measures its prime and subcontracting spending achievements with SMW businesses. During fiscal years 1999 through 2001, Postal Service procurement of goods and services (which includes office supplies) decreased from $3.5 billion to $2.6 billion. For the same time period, office supply procurement grew from $107 million to $125 million. Postal Service officials explained that this increase does not necessarily indicate an actual increase in office supply spending, but rather it reflects improvements in the procurement system's ability to track spending. The officials indicated that the data provided, while not perfect, are the best available information. In October 1999, the Postal Service issued a solicitation for a national-level office supply contract. Four vendors submitted proposals. The solicitation provided that the award would be made to the vendor that offered the best overall value to the government, considering nonprice and price factors. The proposals were evaluated based on several factors, including the vendors' demonstrated understanding of the solicitation's (1) technical requirements, including the ability to implement and maintain a Web-based procurement system, and (2) business requirements. As part of their business plans, vendors were required to demonstrate their ability to deliver items within 24 hours of receiving an order, which is considered the industry standard. Other factors on which the proposals were evaluated, in descending order of importance, were the inclusion of a subcontracting plan demonstrating the vendor's commitment to use SMW businesses; the ability to address environmental and energy conservation efforts; an explanation of the price discounts on items offered to the Postal Service; the ability to provide financial and purchasing reports that are integrated with the Postal Service's system; and the ability to provide Postal Service items, other than office supplies, that are used in an office setting. Additional evaluation factors included past performance and Javits-Wagner-O'Day Act (JWOD) compliance. The Postal Service awarded the contract to Boise with a start date of April 3, 2000. The contract is a firm, fixed-price modified requirements contract for a 3-year base period, with up to three 2-year options.
The contract requires, with a few exceptions, that the Postal Service order from Boise all of the approximately 13,000 items in Boise's Postal Service office supply catalog. Exceptions to the mandatory requirement are where (1) the item can be found at a lower price (and it is not a JWOD item) or (2) the requirement is urgent and Boise cannot meet the required delivery date. The Postal Service has since exercised the first 2-year option. The Postal Service is required to comply with the JWOD Act. According to Postal Service and Boise officials, Boise has ensured through its ordering process that this compliance occurs. When Postal Service employees place an order with Boise for an item that is also on the JWOD procurement list, Boise substitutes the ordered item with a JWOD item that is essentially the same. The Postal Service has not been successful in implementing its national-level contract to purchase most office supplies from Boise. As shown in figure 1, during fiscal year 2001 less than 40 percent of the $125 million spent on office supplies was purchased through the contract. The Postal Service has not taken sufficient actions to ensure that the contract would be used as anticipated. While fiscal year 2001 data show an improvement over the 6 months that the contract was used in fiscal year 2000, when about 75 percent of office supplies were purchased outside the contract, the Postal Service is concerned that its employees continue to spend a significant percentage of office supply dollars outside the contract. Anticipated savings were based on the assumption that almost all supplies would be purchased from the national contract. The fact that this has not occurred, together with the absence of a benchmark against which to measure savings, has contributed to the Postal Service's failure to realize estimated savings from its supply chain initiative. Although the Postal Service conducted market research that supported the implementation of a national-level contract for office supplies, it did not take sufficient actions to ensure that the contract would be used as anticipated. Figure 2 shows that Postal Service employees buy office supplies through three mechanisms: contracts (including Boise and non-Boise contracts), purchase cards, and other methods such as cash and money orders. Postal Service officials stated that the increase in contract dollars from fiscal year 1999 to 2001 indicates that the national contract is being used more extensively. However, they have not determined why employees continue to buy their supplies outside the contract. Postal Service officials did not expect immediate compliance with the contract; they anticipated that some purchasing would occur outside the national contract during the implementation period because the cultural environment of the Postal Service has allowed local buyers to make purchases independently. However, they were unaware of the extent to which the contract is not being used because they did not sufficiently plan its implementation and have not adequately tracked and monitored office supply purchases. There are several indications that the Postal Service did not take sufficient action to ensure that the contract was properly implemented. First, the Postal Service continues to maintain a number of non-Boise office supply contracts.
Although the number of vendors on these other contracts declined from 49 to 33 from fiscal years 1999 through 2001, the dollar value of supplies bought from these contracts has grown, as shown in figure 3. The Postal Service did not undertake a systematic review of all office supply contracts when it implemented the national contract. Such an assessment would have provided an indication of which non-Boise contracts should have been continued and which phased out. In fact, some of the items purchased under non-Boise contracts in fiscal year 2001—such as binders, paper, and measuring tape—should have been purchased from Boise, according to the terms of the national contract. According to Postal Service officials, other items—such as printed envelopes and some types of rubber bands—are purchased under separate contracts because the items are not part of the Boise catalog or they are unique and purchased in volume. Postal Service officials told us that the improved oversight they expect as a result of centralized office supply procurement will allow them to phase out some of the existing office supply contracts. Second, Postal Service employees continue to use purchase cards to buy office supplies outside the contract. Because the purchase card cannot be used to order from the Boise contract, none of the $16.8 million spent on office supplies through purchase cards in fiscal year 2001 was spent under the contract. Postal Service officials have not tracked or monitored purchase card procurements to determine why these employees are not using the contract. Postal Service managers indicated that they are able to use quarterly purchase card spending reports to identify errant purchases—office supplies that should have been purchased from the national contract. However, they acknowledge that these reports are not used consistently to monitor employee purchases of office supplies. Finally, Postal Service employees continue to use cash and money orders to buy supplies from local vendors. As with the purchase cards, cash and money orders cannot be used to buy supplies from the Boise contract. Because the Postal Service has limited information about cash and money order purchases, it was unaware that 33 percent of office supply spending in fiscal year 2001 occurred through these methods. Postal Service officials remarked that they are encouraged by the decrease (from about $66 million in fiscal year 1999 to $41 million in fiscal year 2001) in office supply purchasing using cash and money orders. However, until the Postal Service is able to better track and monitor local office spending, it will lack the information it needs to ensure that the national contract is being used as intended. Postal Service officials explained that their ability to track office supply spending—enabling them to better target those employees who are not using the contract—should improve as Boise contract use increases because the contract requires Postal Service employees to use a Web-based purchasing system referred to as e-buy. The Postal Service’s expectation is that information about e-buy purchases will be systematically and consistently collected. However, use of the contract is not being enforced, and employees continue to use other methods—such as contracts outside the national contract, purchase cards, cash, and money orders—to buy office supplies. The Postal Service’s decision to award a national-level contract to a single supplier was based, in part, on an expectation of saving up to $28 million annually. 
These savings would result from (1) purchasing a large quantity of items from a single supplier, thereby reducing item costs, and (2) implementing the e-buy purchasing process, which would reduce overall transaction costs. To realize the maximum benefits and cost savings under the Postal Service's acquisition strategy, almost all office supplies must be purchased from Boise. However, the fact that employees continue to buy supplies outside the contract, combined with the lack of an established benchmark to measure savings, prevents the Postal Service from determining whether it is achieving its savings goals. The Postal Service's reported savings are calculated using a formula established in 1999. The formula is based on market research, Postal Service Annual Report data from 1998, and spending on an office supply contract in existence at that time. This methodology predicted transaction cost savings of up to 70 percent and item price savings of up to 10 percent on a $50 million contract. The Postal Service claimed savings of up to $28 million for fiscal year 2001 using these estimates. However, when we asked for evidence of actual savings to date, the Postal Service could provide documentation for only about $1 million. This amount reflects rebates that Boise agreed to give the Postal Service on all new business and reduced prices negotiated as part of the contract. Boise and the Postal Service have not paid sufficient attention to the subcontracting goals under the national office supply contract. The subcontracting plan was carelessly constructed, and it contains obvious ambiguities. In fact, Postal Service and Boise officials do not agree on the basic subcontracting goals. Notwithstanding this disagreement, for the purposes of this report we have used the Postal Service's position that the goal is to award 30 percent of annual revenues to SMW businesses. Boise has fallen far short of achieving the 30 percent goal. In fiscal year 2001, Boise reported achievements of only 2.6 percent. Boise has also fallen short of its specific goals for minority- and woman-owned businesses. Boise and the Postal Service provided several reasons why Boise is not achieving the subcontracting goals, and they have identified actions that they believe will improve performance. However, these actions will not be sufficient to enable Boise to reach its subcontracting plan goals. When Boise initially submitted its proposal, its subcontracting goal was to provide 12 percent of its Postal Service business to SMW subcontractors. This proposed subcontracting plan included 4 percent goals for minority- and woman-owned businesses. However, after Boise was selected as the intended awardee—but before the contract was awarded—the goal for SMW businesses was increased to 30 percent based on negotiations with the Postal Service. At the same time, Boise increased its goals for minority- and woman-owned businesses from 4 to 6 percent. The subcontracting plan contains obvious ambiguities that should have been addressed prior to contract award. For example, because the plan is not clearly written, Postal Service and Boise officials disagree on the overall SMW subcontracting goal. Postal Service officials maintain that the goal is 30 percent of overall revenue for the contract, a figure confirmed in a pre-award e-mail from Boise. A Boise official, however, asserts that there is both an overall 30 percent goal and a fixed dollar value goal of $3,300,000. The two readings differ substantially, as the illustrative calculation below shows.
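The following is a minimal, illustrative calculation (not an analysis performed by the Postal Service or Boise) showing how far apart the two readings are, using the approximately $47 million in fiscal year 2001 contract sales and the 2.6 percent reported achievement cited in this report; the figures are rounded and intended only to show the scale of the difference.

# Illustrative only: compares the two readings of Boise's SMW subcontracting goal,
# using approximate fiscal year 2001 figures cited in this report.
fy2001_contract_sales = 47_000_000    # approximate total sales under the contract
reported_achievement_rate = 0.026     # 2.6 percent reported by Boise

goal_as_share_of_revenue = 0.30 * fy2001_contract_sales  # Postal Service reading: 30 percent of revenue
goal_as_fixed_dollars = 3_300_000                         # Boise reading: fixed dollar goal
reported_achievement = reported_achievement_rate * fy2001_contract_sales

print(f"Goal as 30 percent of revenue: ${goal_as_share_of_revenue:,.0f}")  # about $14.1 million
print(f"Goal as a fixed amount:        ${goal_as_fixed_dollars:,.0f}")     # $3.3 million
print(f"Reported achievement:          ${reported_achievement:,.0f}")      # about $1.2 million

Under either reading, Boise's reported fiscal year 2001 achievement falls well short of the goal.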
Despite this disagreement, neither Boise nor Postal Service officials have taken steps to revise the plan. Further, the subcontracting plan misstates two of the three reporting categories for which there is a contractual goal. The language in the plan includes goals for "small, disadvantaged businesses" and "small, woman-owned businesses." In practice, however, the Postal Service and Boise report achievements for "minority" and "woman-owned" firms, which may be small or large. There is no clear linkage between the categories of SMW businesses as stated in the plan and the way Boise is reporting its achievements. A Boise official explained that the subcontracting plan reflects the categories the firm typically uses when contracting with federal agencies and that the firm did not revise the reporting categories to reflect the Postal Service's supplier diversity categories. In responding to our questions, Postal Service officials acknowledged that the plan is inconsistent with the way Boise's achievements are measured and that it needs to be revised. Despite its disagreement with the Postal Service about the subcontracting goals, Boise reports the dollars and percentages that went to SMW businesses based on the annual total revenues under the contract. Table 1 reflects reported achievements for fiscal year 2001. Postal Service and Boise officials stated that 30 percent was a stretch goal to demonstrate the Postal Service's commitment to supplier diversity. A Boise representative stated that Boise agreed to the 30 percent goal because Boise understood the goal to be negotiable. Although the Postal Service has no plans to renegotiate the goal before the end of the initial 3-year contract performance period, Boise and the Postal Service have started discussions to renegotiate the subcontracting goal in the event that the Postal Service decides to exercise an option to extend the contract. Postal Service officials noted that they realize, in hindsight, that the 30 percent goal may have been unreasonable. Boise and Postal Service officials provided several reasons why the subcontracting goals have not been achieved. First, a Boise official said that Boise agreed to the 30 percent goal based on its earlier achievements under the General Services Administration's Federal Supply Schedules program. In fiscal years 1999 and 2000, Boise awarded small businesses 24.6 percent of its Schedules program sales. In retrospect, Boise and Postal Service officials explained that the Schedules program was not a reliable source for an estimate because Boise's contract under the Schedules program included 1,800 items, compared to about 13,000 items in the Postal Service contract. Moreover, the total dollar sales in Boise's Schedules contract—$14.3 million in fiscal year 2000—were considerably lower than the total sales on the Postal Service contract—$47 million in fiscal year 2001. Second, while Boise has a corporate supplier diversity strategy, a Boise official stated that the company's ability to achieve the subcontracting plan goals has been hampered by the fact that the Postal Service does not require its employees to target SMW businesses when ordering from the catalog. In fact, officials at one district we visited had the impression that by simply purchasing from the contract they were complying with the Postal Service's SMW business initiatives. At another district we visited, employees were not aware that the Postal Service had SMW subcontracting goals in the contract.
All of the district officials we spoke with stated that they base their purchasing decisions on the lowest available price and do not search the catalog for SMW businesses. Third, one of the primary reasons Boise and Postal Service officials offered for the low subcontracting achievements was that compliance with the JWOD Act is taking away dollars from small businesses. However, Boise records show that of the 47 Boise vendors whose items were replaced with JWOD items in fiscal year 2001, only 7 were small businesses. These 7 vendors supply 26 of the 404 Postal Service office supply items that are subject to automatic JWOD replacement. Moreover, financial data from Boise show that in calendar year 2000, while total sales of JWOD items were just over $3 million, the impact of JWOD compliance on these 7 vendors was relatively small. These vendors potentially lost $167,629 in business due to the automatic substitution of JWOD items for their items. In calendar year 2001 (representing one full year of contract sales), these 7 vendors potentially lost $297,036 in sales, while the total sales of JWOD items for the year doubled to almost $6 million. This trend continued in the first 6 months of 2002. Finally, Postal Service officials also explained that Boise could not reach its goal because it had planned to subcontract with a woman-owned enterprise that provided cash register tapes, a technology that the Postal Service decided to phase out. They stated that although Boise had relied on this business to reach its subcontracting goal, a change in technology resulted in significantly less business with this vendor than was expected. However, neither Postal Service nor Boise officials could provide us with specific estimates of expected sales. In fact, sales from this woman-owned firm increased in 2001 and 2002. Boise records show that sales of cash register tapes from this business grew from approximately $283,000 in 2000 to $455,000 in 2001, and sales for the first half of 2002 were already similar in dollar terms to total sales for 2001. Moreover, Boise was notified of the planned change in technology as far back as 1998; therefore, this was not new information received during the negotiations regarding the subcontracting goals. The Postal Service and Boise recognize that the performance on the subcontracting plan is not satisfactory and have started to take some actions to improve Boise's achievements under the current contract. While Boise is responsible for its contract performance, coordinated actions by the Postal Service and Boise can improve Boise's ability to achieve the subcontracting plan goals. Although the following steps are being taken to improve performance, it is highly unlikely that these actions will enable Boise to reach its 30 percent subcontracting goal. Boise is working with the Postal Service to include additional SMW businesses as subcontractors. For example, Boise continues to work with the Postal Service to identify small business suppliers of recycled toner cartridges, who in many cases provide their products at half the price of new toner cartridges. District officials received a listing of small businesses supplying recycled toner cartridges in October 2001. However, neither the Postal Service nor Boise has determined the extent to which this information will increase Boise's subcontracting achievements.
Boise is working with the Postal Service to reflect indirect services provided to Boise by small businesses in its reporting of subcontracting plan achievements, as it is allowed to do under the Postal Service contract. Indirect services include data entry and information management services, such as invoicing and tracking sales information. However, Boise estimates that including indirect services provided by SMW businesses will have minimal impact on subcontracting plan achievements. Currently, there is no time frame for implementing this change in Boise’s reporting of its subcontracting achievements. In October 2001, the Postal Service and Boise teamed up to design a quarterly report that tracks SMW business purchases at the Postal Service districts. The Postal Service expects to finalize and distribute these reports in January 2003. The Postal Service and Boise are expanding the education of Postal Service employees on the benefits of seeking out SMW suppliers when they order office supplies from the national contract. Since initial office supply contract training was provided in the fall of 2000, Postal Service efforts to educate employees about SMW suppliers have been through informal channels, such as e-mail. Boise’s educational efforts focus on providing more information to the Boise sales representatives that work with the Postal Service. While Boise expects some improvements in its subcontracting achievements as a result of the educational efforts, their impact is unknown. Postal Service data show that office supply purchases made directly from SMW businesses—using contracts and purchase cards—decreased from about 50 to 18 percent from fiscal year 1999 through 2001. However, the extent to which the Postal Service is buying office supplies from SMW businesses is unclear because its purchase card information is unreliable and because the Postal Service has not tracked purchases by employees using mechanisms such as money orders and cash. Our review, as well as a report by the Postal Service Inspector General, found that incomplete and unreliable diversity statistics on suppliers resulted in the Postal Service overstating or incorrectly classifying dollars awarded to SMW businesses. The Inspector General’s report made nine recommendations to correct the reporting of diversity statistics. Table 2 shows the decline in the percentage of SMW purchases from fiscal years 1999 through 2001, based on Postal Service data. During the same 3-year period, SMW business participation has decreased as a percentage of contract spending (excluding spending through purchase cards, cash, and money orders), while the overall dollar value of office supplies purchased through contracts increased from $14.5 million to almost $67 million. In addition, the number of SMW vendors selling office supplies to the Postal Service decreased during this period. Postal Service district officials told us that they are no longer attempting outreach to local SMW businesses—such as participating in small business conferences or trade shows to attract new vendors—because of the emphasis on buying office supplies only through the Boise contract. Table 3 shows the decline in contract activity with small businesses from fiscal years 1999 through 2001. Similarly, the Postal Service reports that office supply procurements from SMW businesses through purchase cards decreased from fiscal years 1999 through 2001. 
Table 4 shows the decline in the percentage of purchases from SMW businesses using purchase cards from fiscal year 1999 through 2001. Despite the Postal Service's reported statistics, we could not determine the extent to which the Postal Service is buying from SMW businesses. First, because the Postal Service does not track or report socioeconomic data when payments are made to vendors using cash or money orders, it is not possible to assess SMW business achievements when those payment methods are used. Second, the Postal Service, like other federal agencies, relies on reports from banks for annual purchase card transaction and vendor information. This information is ambiguous and contains numerous errors because socioeconomic categories are often inaccurate. For example, the Postal Service's purchase card data for fiscal years 1999 through 2001 included over $40 million in office supply purchases from businesses that were identified as both small and large. (A notional sketch below illustrates how such conflicting classifications, as well as purchases made outside the national contract, could be flagged.) The Postal Service is aware of the problems with the purchase card transaction information and has been working with Visa Corporation to improve the data. Because banks and payment card associations, such as Visa, control the transaction databases, the Postal Service must rely on the information provided by these institutions. We recently reported on the issue of unreliable and incomplete socioeconomic data on purchase card merchants. The Postal Service has not achieved its goal of using a single supplier for office supplies and, as a result, has not achieved its anticipated savings. Because the Postal Service has not analyzed how its employees buy office supplies, it does not know why the national contract is not being used as extensively as planned. In fact, the Postal Service has no assurance that the national strategy is effective because it has not adequately tracked its employees' office supply purchases. Implementing a national-level office supply contract through a single supplier makes the realistic development and measurement of Boise's subcontracting goals and achievements critical to the Postal Service's efforts to achieve its supplier diversity objectives. The failure to establish an effective subcontracting plan and the lack of oversight and enforcement have created an environment where participation by SMW businesses is minimal. The fact that the Postal Service and Boise cannot agree on the levels of SMW participation established in the contract is evidence of the lack of attention Boise and the Postal Service have paid to this issue. While Boise and the Postal Service have taken some actions to address SMW achievement, it is highly unlikely that Boise will be able to reach its subcontracting goal. We recommend that the Postmaster General of the United States determine why the national contract is not being used as a mandatory source of office supplies; reassess the cost effectiveness of a national office supply contract and measure actual savings from using the contract rather than applying the outdated estimating formulas initially established; develop mechanisms to track employees' compliance with the mandatory use of the contract, if analysis indicates that the national-level contract is beneficial; and direct that the contract be modified to include a revised subcontracting plan that accurately and clearly reflects realistic goals for small, minority-, and woman-owned businesses, consistent with the Postal Service's supplier diversity program.
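As a notional sketch only, and not a description of any Postal Service system, the following illustrates the kind of automated screening that quarterly purchase card data could support: flagging office supply purchases made outside the national contract and flagging vendors reported with conflicting size classifications. The field names and records are hypothetical assumptions made for illustration.

# Notional sketch: screening hypothetical purchase card records for off-contract
# office supply purchases and for vendors with conflicting size classifications.
transactions = [
    {"vendor": "Vendor A", "category": "office supplies", "on_contract": False, "size": "small"},
    {"vendor": "Vendor A", "category": "office supplies", "on_contract": False, "size": "large"},
    {"vendor": "Vendor B", "category": "office supplies", "on_contract": True, "size": "small"},
]

# Office supply purchases made outside the national contract ("errant" purchases).
errant = [t for t in transactions
          if t["category"] == "office supplies" and not t["on_contract"]]

# Vendors reported with more than one size classification, a data reliability
# problem noted in this report.
sizes_by_vendor = {}
for t in transactions:
    sizes_by_vendor.setdefault(t["vendor"], set()).add(t["size"])
conflicting = [v for v, sizes in sizes_by_vendor.items() if len(sizes) > 1]

print(f"Errant purchases: {len(errant)}")
print(f"Vendors with conflicting size classifications: {conflicting}")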
In written comments on a draft of this report, the Postal Service agreed with our recommendations and indicated that our report will help it develop and enforce policies aimed at improving performance under the national office supply contract. Recognizing that the success of a contract such as this requires continuous management, the Postal Service has established a new supply management organization that will use our findings and recommendations to determine why the contract is not being used as fully as anticipated. The Postal Service indicated that it will continue to seek cost-effective ways to expand its oversight efforts and expects that increased use of the Web-based purchasing system will assist in these efforts. Regarding the savings from the contract, the Postal Service stated that its internal analysis has validated $5.3 million in cost reductions during fiscal year 2002. This analysis was not shared with us during our review. Finally, the Postal Service stated that it has corrected the ambiguities in the subcontracting plan and is working with Boise to establish more realistic subcontracting goals. The Postal Service's letter appears in appendix I. We also received a written statement from Boise expressing its opinion on federal subcontracting involving SMW businesses and offering several comments on our findings. Boise stated that actual sales under the contract (approximately $50 million) far exceeded its expected contract amount of $11 million. Boise uses this information as a rationale for its failure to achieve its subcontracting goals, which it asserts were based on the $11 million expected contract amount. However, the contract did not guarantee a minimum or maximum level of sales to Boise and, as noted in our report, the 30 percent goal was confirmed by Boise in a pre-award e-mail. Further, the Postal Service based its projected savings on an estimated contract amount of $50 million. Boise also noted that sales to SMW businesses with the Postal Service increased from fiscal year 1999 to fiscal year 2001. However, Boise's analysis relies on a comparison of sales data from a previously existing Postal Service office supply contract, covering 200 high-use items, to the sales data from the current contract, which covers almost 13,000 items. Because Boise is comparing sales data from two different contracts, we do not believe that this is a legitimate comparison. Boise indicated that it is working with the Postal Service to correct the inconsistencies we noted in the subcontracting plan. In addition, Boise believes that JWOD items block sales to SMW businesses; however, Boise did not provide sufficient evidence to support this claim. As noted in our report, the potential lost sales to SMW businesses due to JWOD item replacements were relatively small. Boise also commented that because sales of a cash register tape made by a woman-owned business did not increase at the expected rate, its SMW achievements were affected. However, as discussed in our report, neither Boise nor the Postal Service could provide us with documentation on the expected sales of these tapes. Finally, Boise was concerned about our selection of field sites because it was not based on a random sample. We targeted locations that, according to Boise's data, were low users of the contract. The objective of our field visits was not to identify overall awareness of the contract.
Rather, our intent was to gain an understanding of why certain locations were not using the contract as a mandatory source of office supplies. Boise’s letter appears in appendix II. To meet our objectives, we reviewed the Postal Service’s office supply spending and the related SMW achievements during fiscal years 1999 through 2001. To examine the status of the Postal Service’s implementation of its national office supply contract with Boise, we reviewed the acquisition planning, contract formation, and contract administration documentation, including market research results, the solicitation, and the contract. Total office supply spending was identified using information from the Postal Service purchasing and materials data warehouse. We determined office supply spending for fiscal years 1999 through 2001 by using the same account codes that the Postal Service used to conduct its market research to justify the national office supply contract. We reviewed the Postal Service’s total office supply spending details for all contract, purchase card, money order, and cash transactions. We did not independently verify the accuracy of the reported spending. We interviewed and obtained information from the Postal Service’s contracting officer and contract administrator. In addition, we interviewed and obtained information from three area offices and three district offices based on data that indicated these locations were not using the national office supply contract. We interviewed purchasing specialists, administrative services managers, financial system coordinators, and administrative personnel with office supply purchasing responsibility. We also held discussions with and acquired information from Boise’s federal business manager. To determine Boise’s achievement of its SMW subcontracting plan, we reviewed the contract’s subcontracting plan and Boise’s quarterly reports on its SMW achievements. We interviewed and obtained information from the Postal Service’s contracting officer, area finance officials, and district finance and purchasing officials. We also held discussions with and acquired information from Boise’s federal business manager, its minority- and woman-owned business development and supplier diversity manager, and two minority-owned subcontractors. To assess the extent to which the Postal Service is buying office supplies directly from SMW businesses, we reviewed Postal Service supplier diversity policy and guidance. We examined the Postal Service’s reported socioeconomic statistics, including the dollar amount and type of vendor for fiscal years 1999 through 2001. We interviewed and obtained information from Postal Service officials in the offices of supplier development and diversity, purchasing and materials, and the Postal Service Inspector General. We determined that the reported purchase card data were unreliable; however, we did not attempt to correct the errors in the data provided. Additionally, we met with representatives from the National Office Products Association and a small, woman-owned business to gain a better understanding of their views with regard to the national contract. We conducted our review from March 2002 to November 2002 in accordance with generally accepted government auditing standards. We are sending copies of this report to other interested congressional committees; the Postmaster General of the United States; and the Senior Vice President and Federal Business Manager, Boise Office Solutions. We will also make copies available to others upon request. 
In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-4841 or Michele Mackin at (202) 512-4309 if you have any questions regarding this report. Other major contributors to this report were Penny Berrier, Art L. James Jr., Judy T. Lasley, Sylvia Schatz, and Tatiana Winger.
Over the past 2 years, the Postal Service has experienced growing financial difficulties. In an effort to transform the organization to reduce costs and increase productivity, the Postal Service awarded a national-level office supply contract to Boise Corporation. In addition, the Postal Service required Boise to submit a subcontracting plan, which outlines how small, minority-, and woman-owned businesses will be reached through the contract. GAO was asked to assess the status of the Postal Service's implementation of the Boise contract and Boise's achievement of its subcontracting plan. GAO also reviewed the extent to which the Postal Service is buying office supplies directly from small, minority-, and woman-owned businesses. The Postal Service has not been successful in implementing its national-level contract to purchase most office supplies from Boise. Although the national contract was intended to be a mandatory source of office supplies, the Postal Service purchased less than 40 percent of its office supplies from Boise in 2001. GAO found that the Postal Service did not perform as planned under the contract because it did not take sufficient actions to ensure that the contract would be used. As a result, the Postal Service has not been able to realize its estimated annual savings of $28 million. In fact, it was only able to provide documentation for $1 million in savings for 2001. Boise and the Postal Service have not paid sufficient attention to the subcontracting plan. The plan contains obvious ambiguities, and, in fact, Postal Service and Boise officials disagree on its goals. The Postal Service maintains that the goal is 30 percent of Boise's annual revenue from the contract. Boise has fallen far short of this goal, reporting that only 2.6 percent of its annual contract revenues went to small, minority-, and woman-owned businesses in fiscal year 2001. Postal Service and Boise officials recognize that the performance on the subcontracting plan is not satisfactory and are taking a number of steps to achieve the plan's goals. Nevertheless, it is highly unlikely that the current subcontracting goals will be met. The Postal Service reported that its small, minority-, and woman-owned business achievements declined from fiscal years 1999 to 2001. Despite the Postal Service's reported statistics, GAO could not determine the extent to which the Postal Service is buying directly from these businesses because the data are unreliable.
DOD and the military services invest in ground radars and air-to-ground precision guided munitions. Ground radars are ground-based sensor systems used by the Army, Air Force, and Marine Corps to detect and track a variety of targets. These radars perform missions such as air surveillance, air defense, and counterfire target acquisition, among others. Ground radars that can perform multiple missions one at a time are called multi-role radars, and radars that can perform multiple missions simultaneously are called multi-mission radars. We focused on ground radars that perform the following missions, which are notionally depicted in figure 1:
Air surveillance—search for, detect, and track cruise missiles, fixed and rotary wing aircraft, and unmanned aircraft systems.
Air defense—provide radar data that enable other weapon systems, such as air and missile defense systems or aircraft, to take offensive or defensive actions against enemy cruise missiles, fixed and rotary wing aircraft, and unmanned aircraft systems.
Counterfire target acquisition—detect and track enemy rockets, artillery, and mortars to determine enemy firing positions and impact areas for incoming fire.
The military services have several active ground radar acquisition programs performing air surveillance, air defense, and counterfire target acquisition missions. These programs and their missions are presented in table 1. Appendix II provides additional information on the capabilities of these radar programs. The House Armed Services Committee has previously raised questions about potential overlap in the ground radar area. For example, in 2012, House report 112-479 accompanying the National Defense Authorization Act for Fiscal Year 2013 noted overlap between the Army and Marine Corps ground radar programs' missions and encouraged the Army and Marine Corps to collaborate, identify overlapping requirements, and determine if they could procure a single system rather than having each service procure and maintain separate systems. Air-to-ground precision guided munitions are weapons launched from Army, Navy, Air Force, and Marine Corps aircraft that are intended to accurately engage and destroy enemy targets on the ground. These munitions include missiles, guided rockets, and laser guided bombs. Precision guided munitions contain a seeker, warhead, and fuze. The seeker detects electromagnetic energy reflected from a target and provides commands to a control system that guides the weapon to the target. Different seekers provide targeting capabilities for different environments, such as for clear weather only or all weather. Some precision-guided munitions are made up of a guidance kit attached to an unguided or "dumb" munition. Munitions also carry warheads with varying capabilities and weights that optimize them for different types of targets. The military services' active air-to-ground precision guided munitions are presented in table 2. DOD's requirements and acquisition policies contain provisions to help avoid redundancy and consider existing alternatives before starting new acquisition programs. DOD's Joint Capabilities Integration and Development System (JCIDS) guidance states that when validating key requirements documents, the chair of the group responsible for that capability area is also certifying that the proposed requirements and capabilities are not unnecessarily redundant to existing capabilities in the joint force. In some cases, redundancy may be advisable for operational reasons.
The validation authority for a requirements document depends on factors such as the potential dollar value of a program, and determines the level of oversight a requirements document receives. The Joint Requirements Oversight Council (JROC) is the validation authority for documents with a "JROC Interest" designation. A military service can be the validation authority for lower level designations. DOD's Instruction 5000.02, which establishes policies for the management of all acquisition programs, requires the military services to complete an analysis of alternatives (AOA) to assess potential materiel solutions, including existing and planned programs, which could satisfy validated capability requirements. DOD's Office of Cost Assessment and Program Evaluation (CAPE) approves study guidance, which provides direction on what the AOA must include, for acquisition category I programs. Under DOD's Interim Instruction 5000.02, which was effective as of November 2013, CAPE also develops and approves study guidance for programs for which the JROC is the validation authority, regardless of the acquisition category of the program. It also states that the Milestone Decision Authority can designate non-major defense acquisition programs as "special interest." A "special interest" program is a program that meets certain criteria, such as being a potential joint acquisition program, and as a result, receives higher level oversight. The Under Secretary of Defense for Acquisition, Technology, and Logistics serves as Milestone Decision Authority for "special interest" programs. Our analysis of DOD's active ground radar programs found evidence of overlapping performance requirements and potential duplication in certain mission areas. However, the JROC and Joint Staff have determined that any redundancies across the programs they reviewed were necessary. The JROC did not review one of the programs in our analysis. The military services pursued separate ground radar acquisition programs for several reasons: other programs did not fully meet their performance requirements; the timelines for other programs did not align with their needs; and they made different decisions on whether to pursue multi-role or single role radars. DOD has taken steps to encourage collaboration in the ground radar area by asking the services to consider joint acquisition programs, developing joint requirements, and requiring the services to include existing radar programs in their AOAs, with mixed success. Based on our analysis of program requirements documents, we found that the Marine Corps' Ground/Air Task Oriented Radar (G/ATOR) Block I and the Air Force's Three-Dimensional Expeditionary Long-Range Radar (3DELRR) acquisition programs have some key overlapping requirements and provide similar capabilities in their air surveillance and air defense roles. However, the JROC ultimately determined that any redundancy between requirements was necessary. During the JROC validation process, the proposed performance requirements for the 3DELRR program were reduced in several areas, including range. These reductions brought the 3DELRR requirements closer to the G/ATOR Block I requirements, thus increasing the extent of overlap across the programs' requirements and the risk of potential duplication. In other areas, 3DELRR requirements still exceeded those for G/ATOR Block I. The JROC validated the 3DELRR requirements document in 2013. The JROC approved the latest G/ATOR Block I requirements document in 2012.
The Air Force conducted additional studies related to meeting the Air Force's long-range radar requirements. According to Air Force officials, each of these studies confirmed that no other existing radar could meet all of the 3DELRR requirements, and supported the decision to start a new development program. One of these studies was an AOA update that considered reductions in the 3DELRR range requirements, and both studies considered the introduction of a more capable gallium nitride semiconductor technology into the G/ATOR program. Our review of these studies and a related CAPE analysis showed that G/ATOR could be capable of meeting some key 3DELRR performance requirements. In addition, a CAPE official, who reviews radar programs, stated the Air Force could use about 90 percent of the work the Marine Corps has already done to develop G/ATOR for 3DELRR and that additional research and development would primarily be required to develop software for the system. Based on our analysis of program requirements documents, we found that the Army's AN/TPQ-53 Counterfire Radar and the Marine Corps' G/ATOR Block II have some overlapping requirements. Both radar systems detect, track, classify, and locate the origin of enemy projectiles, including mortar, artillery, and rocket systems, and are to replace existing Army and Marine Corps Firefinder radars that perform counterfire target acquisition missions. However, while many of the requirements overlap, the AN/TPQ-53 does not meet the G/ATOR Block II detection range requirements for multiple target types. In addition to some unique requirements, urgent operational needs and different acquisition approaches led the Army and Marine Corps to establish separate acquisition programs for counterfire target acquisition radars. The Army's AN/TPQ-53 started in 2006 as an upgrade program to increase the capabilities of existing radar to meet urgent needs that had been identified in overseas operations. According to Army and Marine Corps officials, the Army's timeframes required it to field its new capability before the G/ATOR development program would transition to production. After the Army met its urgent needs with an initial procurement of upgraded radars, the program continued through the traditional or non-urgent needs acquisition process and held a new production decision review in 2012. However, by this point, the Army and the Marine Corps had adopted different acquisition approaches for meeting their ground radar needs. The Army moved from a strategy of developing one multi-mission radar for air surveillance, air defense, and counterfire target acquisition to a strategy of buying the AN/TPQ-53 and upgrading other radars as needed. The Marine Corps, on the other hand, is developing G/ATOR as a multi-role and potentially a multi-mission radar. Despite these different approaches, the Army and Marine Corps have cooperated in certain areas related to these acquisitions. For example, according to Army and Marine Corps officials, the Army and Marine Corps have discussed using common software for the AN/TPQ-53 and G/ATOR Block II counterfire target acquisition capabilities. The JROC did not review whether the capabilities of the Army's AN/TPQ-53 and the Marine Corps' G/ATOR Block II were unnecessarily redundant or duplicative as part of the requirements validation process. The JROC did not validate the Army's AN/TPQ-53 performance requirements because it was initially an urgent wartime need and did not meet acquisition category I dollar thresholds.
However, at the point the JROC could have reviewed the AN/TPQ-53 requirements, the program had transitioned to the more traditional acquisition process. Instead, the Joint Staff delegated the validation authority for the AN/TPQ-53 requirements to the Army, which validated them in 2010. The JROC had previously validated the G/ATOR Block II requirements documents in 2005, prior to the Army starting the AN/TPQ-53 program. Because the Joint Staff delegated the validation authority for the AN/TPQ-53 to the Army, the JROC may have missed an opportunity to review whether the capabilities of the AN/TPQ-53 and G/ATOR Block II were unnecessarily redundant or duplicative, or to encourage additional areas of cooperation between the Army and the Marine Corps. The Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics and CAPE have taken steps to encourage the military services to collaborate on ground radar programs, with mixed success. None of the efforts resulted in a joint acquisition program, primarily due to service funding decisions, but they have led to the development of joint requirements and broader analyses of acquisition alternatives. For example: In 2009, the Under Secretary of Defense for Acquisition, Technology, and Logistics designated the Air Force's 3DELRR program and the TPS-59 Product Improvement Program for the Marine Corps' long-range air surveillance radar as special interest programs and encouraged both services to collaborate on a single system that addressed their long-range radar requirements. According to Air Force officials, this resulted in a 3DELRR requirements document that was developed jointly, and which still shares many of the Marine Corps' requirements. The Marine Corps decided to discontinue the TPS-59 Product Improvement Program due to budget constraints, but DOD officials said that 3DELRR may still be able to meet the Marine Corps' needs when it eventually decides to replace the TPS-59 ground radar. In 2009, the Under Secretary of Defense for Acquisition, Technology, and Logistics designated G/ATOR as a special interest program, and directed the Marine Corps to collaborate with the Joint Staff and Army to work towards a joint capability to meet the services' multi-mission radar requirements. The Army later decided not to fund the Multi-Mission Radar and has instead pursued the AN/TPQ-53 radar and other radar upgrades to meet its needs. In 2011, CAPE issued AOA study guidance that required the Air Force to update a prior 3DELRR AOA and consider a broader range of alternatives, including other ground radar systems, such as G/ATOR. Because 3DELRR was proposed to address Air Force and Marine Corps long-range radar requirements, the study guidance required Marine Corps participation in the development and review of the AOA. The AOA concluded that a new radar program was the optimum solution to meet 3DELRR requirements. The Army is looking to upgrade the radar system that supports the Patriot missile system. A CAPE official responsible for reviewing radar programs said that CAPE is working with the Army to develop its AOA study guidance for the radar upgrade effort and asked the Army to include systems such as 3DELRR and G/ATOR in the analysis. Our analysis of DOD's active air-to-ground precision guided munitions found some munitions shared some capabilities, but after taking into consideration characteristics such as the aircraft that can launch them, we found the systems were not duplicative.
To the extent overlapping capabilities exist, DOD officials said these capabilities provided needed flexibility for different military operations. However, there is potential for future duplication in the Army’s and Navy’s air-to-ground guided rocket acquisitions. While the Army and Navy have similar needs, the services’ potential procurement strategies could lead to them procuring the same or very similar systems using different programs and contracts. Based on our analysis of the target sets of DOD’s air-to-ground precision guided munitions, the seeker capabilities, the aircraft platforms that can launch them, and cost, we did not find evidence that DOD’s capabilities in this area were duplicative. Additionally, none of the DOD or military service requirements and acquisition organizations we spoke to identified unnecessary redundancy or duplication within air-to-ground precision guided munitions. In general, DOD officials described the air-to-ground precision guided munitions area as efficient in terms of the investments DOD has made. Appendix III provides a comparison of air-to-ground precision guided munition air platforms, seeker capabilities, and target sets. Our analysis of DOD’s active air-to-ground precision guided munitions found evidence of overlapping target sets among the munitions, but unique factors such as what type of aircraft a munition can be launched from, the munition’s seeker capability in varying weather conditions, and the cost of the munition for the desired effect clearly distinguish them from one another. In addition, where some overlap was found, DOD officials explained the overlap was necessary to provide flexibility for military operations. We found three illustrative examples of how platform, seeker capability, and the cost of the munition weigh into how air-to-ground precision guided munitions are used and how their capabilities complement one another: Air-to-ground precision guided munitions are suitable for different types of aircraft platforms, or behave differently when fired from different types of platforms; therefore, the method of delivery, such as from a fixed wing fighter aircraft versus a rotary wing helicopter platform, can be critical to the operation. For example, the Joint Air-to- Ground Missile (JAGM) and the Joint Standoff Weapon (JSOW) are both missiles optimized to hit moving and stationary targets. When JAGM replaces the Hellfire missile, it will, like Hellfire, be capable of launching from rotary helicopters and unmanned aircraft systems, whereas JSOW is a glide weapon with no motor, that must be launched from bomber and fighter aircraft. The JSOW also has a penetrating warhead that allows it to target deeply buried targets. Differing seeker capabilities allow for flexibility in different operating environments. An all weather seeker capability allows a munition to reach its target regardless of weather conditions or other obscurants, such as smoke. For example, the Direct Attack Moving Target Capability was developed to hit moving targets, but it does not have the all weather seeker capability that would allow it to hit all moving targets in all weather conditions. The JAGM and the Small Diameter Bomb II munitions will have the capability to address moving targets in all weather conditions. This all-weather capability requires a more expensive seeker technology. The unit cost of precision guided munitions varies and may be a determining factor in when they are used. 
For example, the Hellfire II Romeo missile and Advanced Precision Kill Weapon System II (APKWS), which is a guided rocket, both have the ability to hit unarmored and unhardened targets. However, the Hellfire II Romeo costs approximately $93,000 per unit and is optimized to hit armored and hardened targets, whereas the APKWS costs approximately $31,000 per unit and is only optimized for unarmored and unhardened targets. According to DOD officials, while the Hellfire II missile is effective against both armored and unarmored targets, it could be preferable, depending on the range of the target, to use the smaller and less expensive APKWS system against unarmored targets. One of the reasons for the lack of duplication in air-to-ground precision guided munitions programs is that the military services cooperated on multiple systems and leveraged each other's investments. Three of the seven munitions programs we reviewed are joint development programs between at least two of the military services. In other cases, the military services procured each other's munitions systems. For example, all of the military services procure Hellfire missiles from the Army. There is potential for future overlap or duplication in the Army's and Navy's procurement of air-to-ground guided rockets, which could result in DOD not fully leveraging its buying power. Specifically, both the Army and Navy have validated requirements for air-to-ground guided rockets. The general requirement is for a guidance kit that attaches to the existing family of unguided Hydra-70 rockets. While the Army and Navy have similar needs, the services' current procurement strategies could lead to them procuring the same or very similar systems using different programs and contracts. APKWS is currently the only guided rocket system that has been integrated and fully qualified for use on a DOD platform. Defense contractors have developed other guided rocket systems, and there is at least one other system that the Army is considering to meet its future guided rocket needs. Both the Army and the Navy plan to buy APKWS through fiscal year 2015 to meet their guided rocket needs, but starting in fiscal year 2016, they may pursue separate, potentially duplicative, efforts to meet their requirements. The Army plans to introduce competition for its Hydra-70 rocket guidance kit and consider other qualified systems besides APKWS. DOD acquisition policy and Better Buying Power initiatives to increase the efficiency of defense spending both emphasize the importance of sustaining a competitive environment at every stage in the acquisition process as a means to control and reduce cost. The Army is exploring various options to introduce competition for guided rockets, including an option that requires the Hydra-70 rocket prime contractor to competitively procure guidance kits and fully integrate them with Hydra-70 before delivering complete systems to the Army, which differs from the Navy's current approach. Alternatively, the Army could still jointly buy APKWS with the Navy. The Navy procures APKWS and Hydra-70 separately and integrates the components itself in order to, among other things, allow for the flexibility to use different combinations of rocket components based on mission needs. The Navy's current contract for APKWS, which was awarded on a sole source basis, expires in 2016.
At that point, the Navy plans to negotiate another sole source contract because it does not believe that introducing competition to APKWS would be worth the investment of integrating and qualifying another Hydra-70 guided rocket on Marine Corps' H-1 helicopters, the platform on which the Navy has already worked to develop, integrate, and qualify APKWS. According to Army program officials, introducing competition for the Hydra guidance kit could reduce its current cost by as much as one-third. There are costs and benefits associated with both the Army's and Navy's acquisition approaches; however, if the Army and Navy fulfill their guided rocket needs separately instead of through a single solution with a cooperative contracting strategy, it could result in the inefficient use of weapon system investment dollars and a loss of buying power. DOD will likely be at some risk for overlap and duplication in its weapon system acquisition programs, given the breadth and magnitude of its investments. While some overlap and duplication may provide necessary redundancy for military operations, in other cases, it is driven by the military services generating unique system requirements that meet similar needs, their authority to independently make resource allocation decisions, and the timing of acquisition programs. The 3DELRR program appears to be a case where the Air Force was focused on what made its requirements unique, instead of looking for ways to leverage the Marine Corps' development program for G/ATOR. We found these programs to be potentially duplicative. DOD currently relies on its requirements and acquisition processes and decision makers to ensure that capabilities and programs are not unnecessarily redundant or duplicative. Its experiences with ground radar programs suggest ways to make these processes more effective in the future. For example, DOD may have missed an opportunity to review whether the capabilities of the Army's AN/TPQ-53 Counterfire Radar and the Marine Corps' G/ATOR Block II were unnecessarily redundant or duplicative because the requirements document for the AN/TPQ-53 was validated by the Army, rather than the JROC, which has a broader perspective on DOD's capability needs. In another case, DOD was better positioned to encourage cooperation for Patriot radar upgrades. Because CAPE had visibility into the program, it was able to shape the Army's AOA to make sure existing radars, such as 3DELRR and G/ATOR, were considered. There may be other opportunities for increased service cooperation to meet future ground radar needs, but, in order for key decision makers such as the JROC, CAPE, and the Under Secretary of Defense for Acquisition, Technology, and Logistics to take advantage of them, it is important for them to have insight into ground radar programs, including upgrade programs and programs that do not meet the dollar thresholds that trigger a "JROC Interest" designation and automatic review. A "JROC Interest" designation provides the JROC with the opportunity to review ground radar performance requirements and capabilities for potential duplication and provides CAPE with the opportunity to develop broad AOA guidance. This type of visibility would put DOD in a better position to take the actions necessary to make the most efficient use of its resources. Unlike the ground radar programs we examined, most of the air-to-ground precision guided munitions programs were already being developed and procured jointly. This cooperation has helped DOD leverage its buying power.
As new areas of potential cooperation emerge, the services should look to leverage those opportunities. Specifically, when the Army revalidated its air-to-ground guided rocket requirement, it opened up the possibility of cooperating with the Navy on jointly buying APKWS or holding a competition to find a system that can meet both services’ needs when the Navy’s current sole source contract expires in 2016. Either option seems preferable to the Army and Navy potentially procuring the same or similar systems to fill the same requirement under different program and contracts, which could lead to duplicative procurement activities in both services and a degradation in buying power. We recommend that DOD take the following two actions: To provide the JROC the opportunity to review all ground radar programs for potential duplication and CAPE with the opportunity to develop broad analysis of alternative guidance, the Vice Chairman of the Joint Chiefs of Staff should direct the Joint Staff to assign all new ground radar capability requirement documents with a Joint Staff designation of “JROC Interest.” To address potential overlap or duplication in the acquisition of Hydra- 70 rocket guidance kits, the Under Secretary of Defense for Acquisition, Technology, and Logistics should require the Army and Navy to assess whether a single solution and cooperative, preferably competitive, contracting strategy offers the most cost effective way to meet both services’ needs. We provided a draft of this report to DOD for review and comment. In its written comments, which are reprinted in full in appendix IV, DOD partially concurred with our first recommendation and concurred with our second recommendation. DOD also provided technical comments that were incorporated as appropriate. DOD partially concurred with our recommendation to assign all new ground radar capability requirement documents with a Joint Staff designation of “JROC Interest.” DOD responded that although it is likely that new ground radar capability would be given the Joint Staffing Designator of "JROC Interest," the “JROC Interest” designation should not be a required designation because it ignores the tiered Joint Staff designation system process. DOD also noted that it would lessen the impact and importance of the Functional Capabilities Boards and their role to ensure minimization of duplication across the portfolio. We acknowledge that the Joint Staff has a process for determining Joint Staff designations and for minimizing duplication across portfolios. However, as we point out in our report, DOD missed an opportunity to review whether the capabilities of the Army’s AN/TPQ-53 Counterfire Radar and the Marine Corps’ G/ATOR Block II were unnecessarily redundant or duplicative because the requirements document for the AN/TPQ-53 was given a lower-level designation. The way to ensure this does not occur in the future is to make the “JROC Interest” designation mandatory for all new ground radar programs. Hence, we still believe without this designation for all new ground radar programs, the JROC and CAPE may not have the opportunity to review programs that do not meet the dollar threshold for an automatic “JROC Interest” designation and may miss additional opportunities to encourage collaboration across the military services. 
DOD concurred with our second recommendation to require the Army and Navy to assess whether a single solution and cooperative, preferably competitive, contracting strategy offers the most cost effective way to meet both services’ needs if both services continue to pursue the acquisition of Hydra-70 rocket guidance kits. DOD noted that it has a process to consider redundancies across the services’ programs, but it was unclear what actions it planned to take to assess if the services could use a single contracting strategy to meet its guided rocket needs. We continue to believe that DOD should assess this option as part of its consideration of potential redundancies. We are sending copies of this report to appropriate congressional committees; the Secretary of Defense; the Under Secretary of Defense for Acquisition, Technology, and Logistics; the Vice Chairman of the Joint Chiefs of Staff; and the Secretaries of the Army, Navy, and Air Force, and the Commandant of the Marine Corps. In addition, this report also is available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or sullivanm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. To determine the extent of potential overlap or duplication across (1) ground radar and (2) air-to-ground precision guided munitions programs, we reviewed acquisition programs currently in development or production, which does not include systems only being developed or produced for foreign military sales. We reviewed and analyzed documentation on system requirements, capabilities, and other distinguishing factors to determine if potential overlap or duplication exists. We interviewed Department of Defense (DOD) officials in the Joint Staff; Offices of the Under Secretary of Defense for Acquisition Technology, and Logistics and Director, Cost Assessment and Program Evaluation; and the Army, Navy, Air Force, and Marine Corps to discuss ground radar and air-to- ground precision guided munition programs as appropriate. We also reviewed DOD analysis and interviewed DOD officials to identify instances in which DOD found potential overlap or duplication during acquisition and requirements reviews and what actions DOD took, if any, in response to any identified potential overlap or duplication. For ground radar programs, we reviewed the mission, acquisition life cycle, and basic system characteristics of the military services’ active ground radar programs, to determine which programs may have overlapping or duplicative requirements and capabilities. We focused on ground radar programs used in land operations with primarily air surveillance, air defense, and counterfire target acquisition missions. Our scope included two air surveillance and air defense ground radar systems—the Air Force’s Three-Dimensional Expeditionary Long-Range Radar (3DELRR) and the Marine Corps’ AN/TPS-80 Ground/Air Task Oriented Radar (G/ATOR) Block I Radar—and two counterfire target acquisition ground radar programs—the Army’s AN/TPQ-53 Counterfire Radar and the Marine Corps’ AN/TPS-80 G/ATOR Block II. We excluded the Army’s AN/TPQ-50, which is in production, from our analysis because unlike the other radar system we reviewed, it is a lightweight, man portable radar. 
We also excluded Sentinel and Patriot from our analysis because these programs are fielded systems undergoing modification. Within our scope, we compared the common Key Performance Parameters (KPP) and Key System Attributes (KSA) found in the program requirements documents across the radars primarily performing air surveillance and air defense and counterfire target acquisition missions. KPPs are the performance attributes of a system considered critical to the development of an effective military capability. KSAs are the attributes or characteristics considered to be essential, but not critical enough to be designated a KPP. The KPPs and KSAs included range, probability of detection, search volume, reliability, availability, maintainability, and transportability/mobility, as appropriate. For air-to-ground precision guided munitions, we reviewed the type, acquisition life cycle, and select system characteristics of the military services' active air-to-ground precision guided munitions programs, to determine which programs may have overlapping or duplicative requirements and capabilities. We did not review munitions in certain specialized categories, such as anti-ship, anti-radiation, ballistic, or cruise missiles. Our scope included Advanced Precision Kill Weapon System II (APKWS), Direct Attack Moving Target Capability, Hellfire II Romeo variant, Joint Air-to-Ground Missile (JAGM), Joint Standoff Weapon (JSOW) C-1 variant, Maverick Laser variant, and Small Diameter Bomb II. Within our scope, we conducted an analysis comparing the precision guided munitions characteristics that we determined, in consultation with DOD subject matter experts, were most critical to assessing the systems' capabilities. Based on information we gathered and corroborated with the military services, we compared munitions' air platforms, unit cost, all weather capability, and target sets: moving and stationary; armored and unarmored; hardened and unhardened. We conducted this performance audit from June 2014 to December 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II describes the capabilities of the radar programs we reviewed. AN/TPQ-53 Counterfire Radar: Highly mobile ground based radar set that automatically detects, classifies, tracks, and locates the point of origin of projectiles fired from rocket, artillery and mortar systems. The radar provides increased range and accuracy throughout a 90 degree search sector (stare mode) as well as 360 degree coverage (rotating) for locating firing positions. Three-Dimensional Expeditionary Long-Range Radar (3DELRR): Long-range, three-dimensional, ground-based radar for detecting, identifying, tracking, and reporting aerial targets. Responds to operational need to detect and report highly maneuverable, small radar cross section targets to enable battlefield awareness. AN/TPS-80 Ground/Air Task Oriented Radar (G/ATOR): Expeditionary, three-dimensional, high-mobility, short/medium range multi-role radar designed to detect cruise missiles, air breathing targets, rockets, mortars, and artillery. Provides expeditionary, day/night, adverse weather radar coverage and tracks aerial objects. Provides the baseline system for the Marine Corps short and medium range radar requirement.
G/ATOR Ground Weapons Locating Radar – Block II: Detects indirect fire from rockets, artillery, and mortar systems at greater range and provides greater accuracy, classification, and deployability to support counterfire and counter battery missions. The Marine Corps' G/ATOR is a multi-role radar. The Marine Corps is using an incremental approach to fielding G/ATOR capabilities. G/ATOR Block I is to develop the basic hardware for the radar in all of its potential roles. G/ATOR Block II is to be a software upgrade to provide counterfire target acquisition capabilities. Regarding the munitions compared in appendix III, JAGM is also expected to be used on unmanned aircraft systems, but this is not a threshold requirement. Michael J. Sullivan, (202) 512-4841 or sullivanm@gao.gov. In addition to the contact named above, the following individuals made key contributions to this report: Ronald E. Schwenn, Assistant Director; Danielle Greene; Laura Holliday; Heather Krause; John Krump; Zina Merritt; Paige Muegenburg; Erin Preston; Sylvia Schatz; Roxanna Sun; Hai Tran; and Oziel Trevino. | Over the past five years, GAO has found potential overlap or duplication in DOD weapon system investments. Overlap occurs when multiple agencies or programs are engaged in similar activities. Duplication occurs when two or more agencies or programs are engaged in the same activities. Senate Report 113-44 accompanying the fiscal year 2014 National Defense Authorization Act mandated that GAO examine the military services' ground radar and air-to-ground precision guided munitions programs for potential duplication. Ground radars are sensors used to detect and track targets, and precision guided munitions are weapons intended to accurately engage and destroy enemy targets. This report examines the extent to which potential overlap or duplication exists across the military services' (1) ground radar and (2) air-to-ground precision guided munitions programs. GAO analyzed program documentation on system performance requirements and capabilities and interviewed DOD officials about potential duplication. Several of the Department of Defense's (DOD) active ground radar programs have overlapping performance requirements and two are potentially duplicative. In these instances, the military services pursued separate acquisition programs because other programs did not fully meet their performance requirements, among other reasons. Specifically, GAO found: The Marine Corps' Ground/Air Task Oriented Radar (G/ATOR) Block I and the Air Force's Three-Dimensional Expeditionary Long-Range Radar (3DELRR) have some overlapping key requirements, such as range, and are potentially duplicative. The Joint Requirements Oversight Council (JROC), which validates requirements for DOD's largest acquisition programs, did not find unnecessary redundancy, and Air Force officials stated that G/ATOR could not meet all of the 3DELRR's requirements. The Army's AN/TPQ-53 Counterfire Radar and the Marine Corps' G/ATOR Block II have some overlapping requirements, but the AN/TPQ-53 does not meet certain key G/ATOR Block II requirements, thereby reducing the risk that the programs are duplicative. In this case, urgent operational needs and different acquisition approaches also led the Army and Marine Corps to establish separate acquisition programs.
As a result of reviews conducted by the JROC and DOD's Office of Cost Assessment and Program Evaluation (CAPE), which develops guidance for analyzing alternative ways to fulfill capability needs, the Air Force made positive changes to the 3DELRR program, such as reducing some requirements to improve program affordability. CAPE also expanded the alternatives considered on acquisition programs to minimize potential duplication. DOD missed an opportunity to assess whether the capabilities of the AN/TPQ-53 and G/ATOR Block II were unnecessarily redundant. The JROC did not review the AN/TPQ-53 requirements because it was initially fielded to meet an urgent need and did not meet the dollar threshold to automatically trigger a review. However, the AN/TPQ-53 transitioned to the traditional, non-urgent needs acquisition process at which point the JROC could have reviewed it. Ensuring that the JROC and CAPE review new ground radar acquisitions could help DOD avoid duplication. DOD's active air-to-ground precision guided munitions programs are not duplicative, but potential for duplication exists in the future. The active programs share some capabilities, but characteristics such as the aircraft that can launch them distinguish them from one another. To the extent that overlapping capabilities exist, DOD officials said these capabilities provided needed flexibility for military operations. Cooperation among the military services contributed to the current lack of duplication. GAO found one example of potential future duplication. Both the Army and the Navy plan to buy the Advanced Precision Kill Weapon System through fiscal year 2015 to meet their guided rocket needs, but starting in fiscal year 2016, they may pursue separate, potentially duplicative, efforts. There are costs and benefits associated with both the Army and Navy's acquisition approaches; however, if the Army and Navy fulfill their guided rocket needs separately instead of cooperatively, it could result in the inefficient use of weapon system investment dollars and a loss of buying power. To address potential duplication, GAO recommends that DOD ensure that new ground radar acquisitions are reviewed by the JROC and CAPE and require the Army and Navy to jointly assess the possibility of using a single solution and a cooperative, preferably competitive, contracting strategy to meet their guided rocket needs. DOD partially agreed with GAO's first recommendation, but stated it should not be mandatory. GAO believes the recommendation remains valid as discussed in its report. DOD agreed with the second recommendation. |
Interior’s ongoing reorganization of bureaus with oil and gas functions will require time and resources, and undertaking such an endeavor while continuing to meet ongoing responsibilities may pose new challenges. Interior has begun implementing its restructuring effort, transferring offshore oversight responsibilities to the newly created BOEMRE and revenue collection to ONRR. Interior plans to continue restructuring BOEMRE to establish two additional separate bureaus—the Bureau of Ocean Energy Management, which will focus on leasing and environmental reviews, and the Bureau of Safety and Environmental Enforcement, which will focus on permitting and inspection functions. While this reorganization may eventually lead to more effective operations, we have reported that organizational transformations are not simple endeavors and require the concentrated efforts of both leaders and employees to realize intended synergies and accomplish new organizational goals. In that report, we stated that for effective organizational transformation, top leaders must balance continued delivery of services with transformational activities. Given that as of December 2010 Interior had not implemented many recommendations we made to address numerous weaknesses and challenges, we are concerned about Interior’s ability to undertake this reorganization while (1) providing reasonable assurance that billions of dollars of revenues owed to the public are being properly assessed and collected and (2) maintaining focus on its oil and gas oversight responsibilities. We have reported that Interior has experienced several challenges in meeting its obligations to make federal oil and gas resources available for leasing and development while simultaneously meeting its responsibilities for managing public lands for other uses, including wildlife habitat, recreation, and wilderness. In January 2010, we reported that while BLM requires oil and gas operators to reclaim the land they disturb and post a bond to help ensure they do so, not all operators perform such reclamation. In general, the goal is to plug the well and reclaim the site so that it matches the surrounding natural environment to the extent possible, allowing the land to be used for purposes other than oil and gas production, such as wildlife habitat. If the bond is not sufficient to cover well plugging and surface reclamation, and there are no responsible or liable parties, the well is considered “orphaned,” and BLM uses federal dollars to fund reclamation. For fiscal years 1988 through 2009, BLM spent about $3.8 million to reclaim 295 orphaned wells, and BLM has identified another 144 wells yet to be reclaimed. In addition, in a July 2010 report on federal oil and gas lease sale decisions in the Mountain West, we found that the extent to which BLM tracked and made available to the public information related to protests filed during the leasing process varied by state and was generally limited in scope. We also found that stakeholders—including environmental and hunting interests, and state and local governments protesting BLM lease offerings—wanted additional time to participate in the leasing process and more information from BLM about its leasing decisions. Moreover, we found that BLM had been unable to manage an increased workload associated with public protests and had missed deadlines for issuing leases. 
In May 2010, the Secretary of the Interior announced several departmentwide leasing reforms that are to take place at BLM that may address these concerns, such as providing additional public review and comment opportunity during the leasing process. Further, in March 2010, we reported that Interior faced challenges in ensuring consistent implementation of environmental requirements, both within and across MMS’s regional offices, leaving it vulnerable with regard to litigation and allegations of scientific misconduct. We recommended that Interior develop comprehensive environmental guidance materials for MMS staff. Interior concurred with this recommendation and is currently developing such guidance. Finally, in September 2009, we reported that BLM’s use of categorical exclusions under Section 390 of the Energy Policy Act of 2005—which authorized BLM, for certain oil and gas activities, to approve projects without preparing new environmental analyses that would normally be required in accordance with the National Environmental Policy Act—was frequently out of compliance with the law and BLM’s internal guidance. As a result, we recommended that BLM take steps to improve the implementation of Section 390 categorical exclusions through clarification of its guidance, standardizing decision documents, and increasing oversight. Since 2009, BLM has taken steps to address our recommendations, but it has not yet completed implementing all of our recommendations. We have reported that BLM and MMS have encountered persistent problems in hiring, training, and retaining sufficient staff to meet Interior’s oversight and management responsibilities for oil and gas operations on federal lands and waters. For example, in March 2010, we reported that BLM and MMS experienced high turnover rates in key oil and gas inspection and engineering positions responsible for production verification activities. As a result, Interior faces challenges meeting its responsibilities to oversee oil and gas development on federal leases, potentially placing both the environment and royalties at risk. We made a number of recommendations to address these issues. While Interior’s reorganization of MMS includes plans to hire additional staff with expertise in oil and gas inspections and engineering, these plans have not been fully implemented, and it remains unclear whether Interior will be fully successful in hiring, training, and retaining these additional staff. Moreover, the human capital issues we identified with BLM’s management of onshore oil and gas continue, and these issues have not yet been addressed in Interior’s reorganization plans. Federal oil and gas resources generate billions of dollars annually in revenues that are shared among federal, state, and tribal governments; however, we found Interior may not be properly assessing and collecting these revenues. In September 2008, we reported that Interior collected lower levels of revenues for oil and gas production in the deep water of the U.S. Gulf of Mexico than all but 11 of 104 oil and gas resource owners whose revenue collection systems were evaluated in a comprehensive industry study—these resource owners included other countries as well as some states. However, despite significant changes in the oil and gas industry over the past several decades, we found that Interior had not systematically re-examined how the U.S. government is compensated for extraction of oil and gas for over 25 years. 
We recommended that Interior conduct a comprehensive review of the federal oil and gas system using an independent panel. After Interior initially disagreed with our recommendations, we recommended that Congress consider directing the Secretary of the Interior to convene an independent panel to perform a comprehensive review of the federal system for collecting oil and gas revenue. More recently, in response to our recommendation, Interior has commissioned a study that will include such a reassessment, which, according to Interior officials, the department expects will be complete in 2011. The results of the study may reveal the potential for greater revenues to the federal government. We also reported in March 2010 that Interior was not taking the steps needed to ensure that oil and gas produced from federal lands was accurately measured. For example, we found that neither BLM nor MMS had consistently met their agency goals for oil and gas production verification inspections. Without such verification, Interior cannot provide reasonable assurance that the public is collecting its share of revenue from oil and gas development on federal lands and waters. As a result of this work, we identified 19 recommendations for specific improvements to oversight of production verification activities. Interior generally agreed with our recommendations and has begun implementing some of them. Additionally, we reported in October 2010 that Interior's data likely underestimated the amount of natural gas produced on federal leases, because some unquantified amount of gas is released directly to the atmosphere (vented) or is burned (flared). This vented and flared gas contributes to greenhouse gases and represents lost royalties. We recommended that Interior improve its data and address limitations in its regulations and guidance to reduce this lost gas. Interior generally agreed with our recommendations and is taking initial steps to implement these recommendations. Furthermore, we reported in July 2009 on numerous problems with Interior's efforts to collect data on oil and gas produced on federal lands, including missing data, errors in company-reported data on oil and gas production, and sales data that did not reflect prevailing market prices for oil and gas. As a result of Interior's lack of consistent and reliable data on the production and sale of oil and gas from federal lands, Interior could not provide reasonable assurance that it was assessing and collecting the appropriate amount of royalties on this production. We made a number of recommendations to Interior to improve controls on the accuracy and reliability of royalty data. Interior generally agreed with our recommendations and is working to implement many of them, but these efforts are not complete, and it is uncertain at this time if the efforts will fully address our concerns. In October 2008, we reported that Interior could do more to encourage the development of existing oil and gas leases and made a recommendation to address this issue. Our review of Interior oil and gas leasing data from 1987 through 2006 found that the number of leases issued had generally increased toward the end of this period but that offshore and onshore leasing had followed different historical patterns. Offshore leases issued peaked in 1988 and in 1997 and generally rose from 1999 through 2006. Onshore leases issued peaked in 1988, then rapidly declined until about 1992, and remained at a consistently low level until about 2003, when they began to increase moderately.
We also analyzed 55,000 offshore and onshore leases issued from 1987 through 1996 to determine how development occurred on leases that had expired or been extended beyond their primary terms. Our analysis identified three key findings. First, a majority of leases expired without being drilled or reaching production. Second, shorter leases were generally developed more quickly than longer leases but not necessarily at comparable rates. Third, a substantial percentage of leases were drilled after the initial primary term following a lease extension or suspension. We also compared Interior’s efforts to encourage development of federal oil and gas leases to states’ and private landowners’ efforts. We found that Interior does less to encourage development of federal leases than some states and private landowners. Federal leases contain one provision–– increasing rental rates over time for offshore 5-year leases and onshore leases—to encourage development. In addition to using increasing rental rates, some states undertake additional efforts to encourage lessees to develop oil and gas leases more quickly, including shorter lease terms and graduated royalty rates—royalty rates that rise over the life of the lease. In addition, compared to limited federal efforts, some states do more to structure leases to reflect the likelihood of oil and gas production, which may also encourage faster development. Based on the limited information available on private leases, private landowners also use tools similar to states to encourage development. Accordingly, we recommended that the Secretary of the Interior develop a strategy to evaluate options to encourage faster development of oil and gas leases on federal lands. Recently, Interior has stated its intent to pursue legislation establishing a per acre fee on non-producing leases to encourage development of federal leases. In conclusion, Interior’s oversight of federal oil and gas resources is in transition. Our past work has found a wide range of material weaknesses in Interior’s oversight of federal oil and gas resources. These findings and related recommendations were the results of years of intensive evaluation of how Interior oversaw the oil and gas development functions. While Interior may shift responsibilities around, many of these weaknesses remain key challenges to address as Interior works through the implementation of its reorganization. For the reorganization to be most effective, it is important that Interior remains focused on efforts to implement our past recommendations and incorporate them into the new oversight bureaus. We remain hopeful that the structural changes made to Interior’s bureaus, coupled with a concerted effort to implement the many recommendations we have made should provide greater assurance of effective oversight of federal oil and gas resources. Chairman Issa, Ranking Member Cummings, and Members of the Committee, this concludes our prepared statement. We would be pleased to answer any questions that you or other Members of the Committee may have at this time. For further information on this statement, please contact Frank Rusco at (202) 512-3841 or ruscof@gao.gov. Contact points for our Congressional Relations and Public Affairs offices may be found on the last page of this statement. Other staff that made key contributions to this testimony include, Glenn C. Fischer, Jon Ludwigson, Kristen Massey, Alison O'Neill, Kiki Theodoropoulos, and Barbara Timmerman. This is a work of the U.S. 
government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | The Department of the Interior oversees oil and gas activities on leased federal lands and waters. Revenue generated from federal oil and gas production is one of the largest nontax sources of federal government funds, accounting for about $9 billion in fiscal year 2009. Since the April 2010 explosion on board the Deepwater Horizon, Interior has been in the midst of restructuring the bureaus that oversee oil and gas development. Specifically, Interior's Bureau of Land Management (BLM) oversees onshore federal oil and gas activities; the Bureau of Ocean Energy Management, Regulation, and Enforcement (BOEMRE)--created in May 2010--oversees offshore oil and gas activities; and the newly established Office of Natural Resources Revenue (ONRR) is responsible for collecting royalties on oil and gas produced from both onshore and offshore federal leases. Prior to BOEMRE, the Minerals Management Service's (MMS) Offshore Energy and Minerals Management Office oversaw offshore oil and gas activities and revenue collection. In 2011, GAO identified Interior's management of oil and gas resources as a high risk issue. GAO's work in this area identified challenges in five areas: (1) reorganization, (2) balancing responsibilities, (3) human capital, (4) revenue collection, and (5) development of existing leases. Reorganization: Interior's reorganization of activities previously overseen by MMS, which Interior expects to be completed in October 2011, will require time and resources and may pose new challenges. While this reorganization may eventually lead to more effective operations, GAO has reported that organizational transformations are not simple endeavors. GAO is concerned with Interior's ability to undertake this reorganization while meeting its revenue collection and oil and gas oversight responsibilities. Balancing Responsibilities: GAO has reported that Interior has experienced several challenges with meeting its responsibilities for providing for the development of oil and gas resources while managing public lands for other uses, including wildlife habitat. For example, in September 2009, GAO reported that BLM's use of categorical exclusions under Section 390 of the Energy Policy Act of 2005 was frequently out of compliance with the law and BLM's internal guidance. As a result, GAO recommended that BLM take steps to improve the implementation of Section 390. BLM has taken steps to address these recommendations, but it has not yet implemented all of them. Human Capital: GAO has reported that BLM and MMS have encountered persistent problems in hiring, training, and retaining sufficient staff to meet their oversight and management responsibilities for oil and gas operations. For example, in March 2010, GAO reported that BLM and MMS experienced high turnover rates in key oil and gas inspection and engineering positions responsible for production verification activities. As a result, Interior faces challenges meeting its responsibilities to oversee oil and gas development on federal leases, potentially placing both the environment and royalties at risk. 
Revenue Collection: While federal oil and gas resources generate billions of dollars in annual revenues, past GAO work has found that Interior may not be properly assessing and collecting these revenues. In September 2008, GAO reported that Interior collected lower levels of revenues for oil and gas production in the deep water of the U.S. Gulf of Mexico than all but 11 of 104 oil and gas resource owners whose revenue collection systems were evaluated in a comprehensive industry study. As GAO recommended, Interior is undertaking a comprehensive assessment of its revenue collection policies and processes--the first in over 25 years. Interior expects to complete this study later this year. Development of Existing Leases: In October 2008, GAO reported that Interior could do more to encourage the development of existing oil and gas leases. Federal leases contain one provision--increasing rental rates over time for offshore 5-year leases and onshore leases--to encourage development. In addition to escalating rental rates, states undertake additional efforts to encourage lessees to develop oil and gas leases more quickly, including shorter lease terms and graduated royalty rates. Recently, Interior has stated its intent to pursue legislation establishing a per acre fee on non-producing leases to encourage development of federal leases. |
The nation's UI system is a joint federal/state partnership originally authorized by the Social Security Act and funded primarily through federal and state taxes on employers. Under this arrangement, states administer their own programs, generally known as regular UI benefits, according to certain federal requirements and under the oversight of DOL's Office of Unemployment Insurance. The primary objectives of this partnership are to provide temporary, partial compensation for lost earnings to individuals who have become unemployed through no fault of their own and to help stabilize the economy during economic downturns. Federal law sets forth broad provisions for the categories of workers who must be covered by the program, some benefit provisions, the federal tax base and rate, and program administration, such as how states will repay any funds they borrow from the federal government to pay benefits when state reserves are depleted. States have considerable flexibility to set benefit amounts and their duration, or the maximum period of time that the state pays benefits, and establish eligibility requirements and other program details. Regarding duration, for example, most states provided up to 16 weeks in 1938, after the program was first established. More recently, states provided up to 26 weeks. In the wake of the recession, according to DOL, 9 states reduced benefit duration, with 20 weeks—representing a reduction of over 20 percent—the most common new maximum (see table 1). Among these states, 4 enacted new maximum durations that vary according to the state's unemployment rate (referred to in this report as "variable duration"), while the other 5 states' new maximum durations do not vary in this way (referred to in this report as "flat duration"). All 9 states established these new durations through changes in state law. States also have flexibility to establish eligibility requirements, which can affect duration. There are two kinds of eligibility requirements. Monetary eligibility typically refers to an earnings threshold and employment history that applicants must meet in order to qualify for benefits. Based on such eligibility criteria, individuals may qualify for less than a state's maximum duration. The other category of eligibility requirements is non-monetary, which, according to DOL, refers to states' criteria to determine if an individual's job loss is through no fault of his or her own, and that the individual is able to work, available for work, and actively seeking work. According to DOL, each state's law sets both monetary and non-monetary requirements for eligibility. In addition to the state UI benefits, federal emergency and extended UI programs may sometimes provide additional weeks of benefits under certain economic conditions, such as rising unemployment or economic downturns. Temporary Federal UI Programs. During economic downturns, Congress has sometimes passed legislation to provide temporary unemployment compensation. Most recently, such a temporary program was created by the Supplemental Appropriations Act, 2008 (Emergency Unemployment Compensation; "emergency benefits" for the purposes of this report). According to CRS, this represented the eighth time a federal temporary unemployment compensation program was created since the inception of the UI program. This program provided up to 53 additional weeks of emergency benefits to qualifying claimants and expired in December 2013.
After establishing the recent emergency benefits program in 2008, Congress amended the program 11 times, extending it and in some cases adding weeks of benefits, according to information provided by DOL. From December 2008 until the program expired in December 2013, emergency benefits were available in all states, including the states that reduced duration, with the exception of North Carolina (see app. V). Extended UI Benefits. In addition, states and the federal government provide "extended" benefits to workers. This program, which has no expiration date, provides up to 13 additional weeks of benefits to workers who have exhausted state unemployment insurance benefits during periods of high unemployment. According to CRS, some states have also utilized an option to pay up to 7 additional weeks of extended benefits when unemployment reaches certain levels. While financing for extended UI benefits is typically shared between the states and the federal government, the American Recovery and Reinvestment Act of 2009, as amended, provided for temporary full federal funding of the extended benefits program through December 2013. According to DOL, from January 2009 to May 2013, 39 states met the criteria for extended benefits at various times, including all of the states that reduced duration, and since May 2013, no state has met the criteria for federal extended benefits (see app. V). During the recovery, the maximum period for all combined benefits—state, emergency, and extended—to qualifying claimants reached 99 weeks in some states (see fig. 1). Under both emergency and extended benefit programs, the duration of an individual's federal UI benefits has depended, in part, on the duration of his or her state UI benefits. If a claimant was entitled to fewer than 26 weeks of state benefits, the duration of any available federal benefits to the claimant would be reduced proportionally. Under the most recent rules for the emergency program, the formula specified that, for the first level or "tier" of benefits, benefits were payable for up to 54 percent of the duration of an individual's total state benefits. The extended benefits program could provide up to 50 or 80 percent of the duration of an individual's total state benefits, depending in part on the rate of unemployment in the claimant's state. For the 80 percent level to be applicable, a state must have a provision in its laws causing extended benefits to become available when the unemployment rate reaches a certain level. Both federal emergency and extended benefits were available in 2011, when duration reductions were first enacted in 6 states (see fig. 2). The UI program was designed to be forward funded and self-financed by states through a trust fund that the federal government maintains on behalf of the states. Ideally, states build reserves in their trust fund accounts through revenue from employer taxes during periods of economic expansion in order to pay UI benefits during economic downturns. Because unemployment can vary substantially during a business cycle, it is important that states build sufficient reserves so trust fund balances remain solvent during recessions. The UI program is financed primarily by state taxes levied on employers, as well as a federal tax—the Federal Unemployment Tax Act (FUTA) tax—also levied on employers. Employers may receive a credit against the FUTA tax depending on the extent to which their state UI programs comply with federal criteria.
Specifically, states set a taxable wage base—the maximum amount of an employee's wages subject to UI employer taxes—and any wages above this amount are not subject to taxation. In addition, states determine the employer tax rate levied on the taxable wage base. In order for employers in a state to qualify for the full FUTA tax credit, the state's taxable wage base must at least be equal to the FUTA wage base—currently $7,000. In addition, the state's tax rate for each employer may vary according to the employer's layoff records—a practice known as experience rating. Experience rating results in lower tax rates for employers with fewer layoffs and higher tax rates for those with more layoffs. States can also levy other taxes on employers, known as surtaxes or surcharges, for various purposes. (This discussion of state tax provisions is based on DOL's Comparison of State Unemployment Insurance Laws, 2014 (Washington, D.C.: 2015) and UWC—Strategic Services on Unemployment & Workers' Compensation, Highlights of State Unemployment Compensation Laws 2014.) Although these taxes are paid by the employer, economists generally have concluded that their cost is likely to be borne by workers. Additionally, 3 states—Alaska, Pennsylvania, and New Jersey—directly levy UI taxes on workers. In addition to their ability to change tax rates, states can make changes to program benefits to help ensure that funds are available to pay future benefits. For example, as previously mentioned, states can change program eligibility provisions to limit or expand the population who qualify for benefits. States can also change benefit amounts directly. Figure 3 shows the various tools states generally use to balance program revenue and benefits. Although states have flexibility to change both revenues and benefits, they can exhaust their UI reserves during periods of exceptional unemployment. In such times, states may borrow from the federal government. If a state satisfies certain conditions, loans taken from January 1 through September 30 and repaid before October 1 are interest free as long as the state does not borrow again during the fourth quarter of the calendar year. In states that do not repay their loans within a specified period, employers lose a portion of the FUTA tax credit. However, states with outstanding loans can still seek relief from these loan provisions in the form of a limit to the reduction of the FUTA tax credit and the opportunity to delay interest payments. During the recent recession, most states opted to borrow from the federal government: 36 states had federal trust fund loans, and the total borrowed reached $48.5 billion in March 2011. As of March 2015, 9 states and the U.S. Virgin Islands had trust fund loan balances, totaling about $14 billion. Measures of UI solvency are expressed as a percentage of wages, typically total annual wages earned by employees who are potentially eligible for receiving UI benefits. Among the measures that DOL reports are reserve ratios and the high cost multiple. A high cost multiple measure of 1.0 corresponds to sufficient reserves to pay benefits at the high cost rate for 1 year without taking in additional revenue, according to DOL. A similar measure is the average high cost multiple (AHCM), which divides a trust fund's reserve ratio by the average high cost rate, which uses a multi-year average.
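To make these solvency measures concrete, the short Python sketch below shows how a reserve ratio and an AHCM could be computed from the definitions given above. It is only an illustration, not DOL's actual methodology, and the trust fund balance, covered wages, and 2 percent average high cost rate are hypothetical figures.

def reserve_ratio(trust_fund_balance, total_covered_wages):
    # Reserve ratio: the trust fund balance expressed as a share of total
    # annual wages of employees potentially eligible for UI benefits.
    return trust_fund_balance / total_covered_wages

def average_high_cost_multiple(trust_fund_balance, total_covered_wages,
                               average_high_cost_rate):
    # AHCM: the reserve ratio divided by the average high cost rate,
    # a multi-year average of high benefit-payout rates as a share of wages.
    return reserve_ratio(trust_fund_balance, total_covered_wages) / average_high_cost_rate

# Hypothetical state: $600 million balance, $40 billion in covered wages,
# and an average high cost rate of 2 percent of wages.
ahcm = average_high_cost_multiple(600e6, 40e9, 0.02)
print(round(ahcm, 2))  # 0.75, below the 1.0 level that corresponds to one year of reserves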
An AHCM of 1.0 is the target for solvency recommended by the Advisory Council on Unemployment Compensation and is specified in DOL regulations providing for interest-free loans. States also monitor their own trust fund balances. We have previously reported that almost all states measure their trust fund balances and make tax rate changes once per year. According to our previous report, the majority of states have trust fund balance targets written into their state laws, with triggers built in to adjust the tax rates according to the balance. Most states impose higher tax rates when their trust fund balances are low and lower rates when their balances are high, according to DOL. In our analysis of the 9 duration reduction states, we found that as a group they exhibited several characteristics that tended to distinguish them from other states. Overall, as compared to the states that did not reduce duration, the states that reduced duration had: weaker trust fund balances before the recession; lower total taxable resources; federal loans to a greater degree; higher unemployment rates; lower union membership rates; and greater political homogeneity. In addition, while state officials cited a range of considerations in reducing benefit durations, we found that most duration reduction states, like most of the states we selected for comparison that did not reduce duration, raised taxes and made other changes to their programs. Overall, our interviews with state officials could not establish the degree to which any characteristics or considerations affected the decisions to reduce durations. Duration reduction states were more likely than other states to enter the recession with trust fund balances that were inadequate to pay historical benefit levels. Specifically, 8 of the 9 duration reduction states (89 percent) had an AHCM below 1.0 in the last quarter of 2007—indicating an inadequate trust fund balance—as compared to 25 of the 42 states (60 percent) that did not reduce duration. Duration reduction states had a median AHCM of .33, which was less than half the median AHCM of .79 among states that did not reduce duration (see fig. 4). Consequently, the trust fund balances of the duration reduction states were particularly vulnerable to recessionary pressures, and these states faced a greater risk of depleting their trust fund balances than states with more adequate trust fund balances. Among the states that reduced duration and among those that did not, there was variation in AHCM values: for example, as shown in figure 4, 1 duration reduction state among the 9 did have an adequate AHCM. In contrast, 17 of the 42 states that did not reduce duration had an AHCM greater than 1. The inadequacy of trust fund balances may have been a factor in the decision to reduce duration for some states. Officials from 5 of the 7 duration reduction states with whom we had interviews cited the condition of their state's trust fund balance as having been a likely consideration in the decision to reduce duration. One state UI director said it had been a driving factor in his state's deliberation. The weak or inadequate trust fund balances may have been partially a result of relatively low employer UI tax rates. Specifically, 5 of the 9 reduction states had average UI tax rates on total wages that were lower than the U.S. average for the 5 years preceding reduction.
Additionally, stakeholders in 4 reduction states told us that there were periods (prior to the recession) of up to several years in which employer UI taxes were held to minimal levels through means such as tax holidays, tax cuts, actions to suppress automatic tax adjustment mechanisms, and actions to distribute some trust fund revenues to employers. We have previously reported that long-standing UI tax policies and practices in many states have eroded trust fund reserves, leaving states in a weak position prior to a previous recession. Additionally, duration reduction states had weaker overall fiscal capacity than other states. The total taxable resources of duration reduction states were generally lower than those of states that did not reduce duration, according to a measure of states' overall fiscal capacity calculated by the Department of the Treasury. In 2010 (the year before any duration reductions occurred), 8 of the 9 duration reduction states had measures of total taxable resources below the median per capita indexed value for the U.S. overall, and 3 of these states had measures that were among the lowest in the country (see app. II for more information). States that reduced their benefit durations were more likely to have received a federal trust fund loan since 2010 (see app. II for more information). Specifically, all 9 duration reduction states took such loans at some point during the recession, whereas 61 percent of nonreduction states had trust fund loans (see app. II for the maximum trust fund loan balances for each state). While the size of loans varied among reduction states, 2 of them (Michigan and North Carolina) ranked among the top 4 states nationwide with the largest debt per covered employee. States that reduced UI benefit duration also tended to have higher unemployment rates. Before they adopted duration reduction, almost all—7 of 9—duration reduction states had total unemployment rates of 9 percent or more. By comparison, one-third of the states that did not reduce duration had unemployment rates of 9 percent or more in 2010. Higher unemployment rates increase the pressure on UI trust funds because they reflect the population of those who could qualify for and receive UI benefits. Duration reduction states also had lower rates of union membership. In 2010, 7 of the 9 duration reduction states had rates of union membership below the median for states that did not reduce duration—13.2 percent—and 3 of them had rates that were among the 5 states with the lowest rates in the country—5.6 percent or lower. Low union membership has been associated with lower benefits and wages in the economics literature. Finally, duration reduction states also exhibited fairly homogeneous political composition of their legislatures and governorships, which may have facilitated development and adoption of the state laws that included duration reduction. We found that 8 of the 9 duration reduction states had a single party in control of both the legislature and the governorship when reductions were enacted. In contrast, 45 percent of nonreduction states were politically homogeneous in 2010, although by 2013, this was the case for 71 percent. Beyond the characteristics that tended to distinguish the states that reduced duration from other states, officials in some duration reduction states suggested other considerations as influential. Specifically, officials cited a federal program requirement and the availability of federal benefits, among other reasons for reducing duration.
In 4 of the 7 states where we interviewed UI officials, officials cited the federal nonreduction requirement as a possible factor. This requirement made states that directly reduced UI benefit amounts ineligible for federal UI emergency funds, thereby limiting the range of options available to states to reduce benefit costs. As one state official said, this rule suggested that "no other effort to reduce benefits [beyond reducing duration] would be acceptable." Also, although the duration reductions will continue regardless of the availability of federal benefits, in 4 of the 7 states where we interviewed UI officials, officials said the availability of federal benefits may have played a role in the decision to reduce the maximum duration of state-funded benefits. The availability of federal benefits meant that claimants would generally continue to receive benefits, albeit federal benefits, beyond the new maximum state duration. One state UI official told us the state's reduction was "not significant," in part because federal benefits were then available, and another told us that the legislature likely anticipated minimal impact on claimants. In 7 of the 9 duration reduction states, the total maximum duration of all available benefits, including federal benefits, was at least 93 weeks at the time of duration reduction (see app. V). During our interviews, officials in 2 states cited the state's economic health and the need to encourage claimants' reemployment efforts as reasons for reducing duration. Specifically, officials in Kansas said that duration in that state is tied to the health of the state's economy so that longer durations are available when the unemployment rate is high. Kansas is 1 of 4 states, along with Florida, Georgia, and North Carolina, that adopted variable maximum durations. They provide more weeks of benefits when the total state unemployment rate is high, and fewer weeks when it is low (see app. III). States reduced maximum benefit duration in the context of other changes to their UI programs, according to information provided by state officials and our review of selected state laws. Some of these changes, such as tax increases, played a role in repaying federal loans and improving their trust fund balances. States also made other changes to their programs that can reduce benefit payments, such as changes related to eligibility and program integrity. Increasing Tax Revenues. Of the 9 duration reduction states, 7 adopted increases in employer taxes, some of them temporary, according to information provided by state officials. These states changed the taxable wage base, employer tax rates, supplemental taxes, or some combination of these (see app. III). (Of the 9 duration reduction states, 2 did not respond to our written questions.) Some state officials described the combination of tax increases and duration reductions as providing a balance between program revenues and benefit costs. To understand the relative role of duration reductions and tax increases, we asked the states whether they had estimated any cost savings associated with duration reductions. Of the 7 duration reduction states we contacted, 4 provided estimates of cost savings equivalent to 3.4 percent, 4.6 percent, 9.8 percent, and 44.9 percent of their maximum loan balances (see app. III). In addition, officials in the 3 other duration reduction states we contacted identified tax changes as among the actions that contributed most to repayment of the trust fund loan, and one of these states identified duration reduction as a major contributor to repaying the loan.
One official estimated that state tax rate changes accounted for about 65 percent of the funds needed to restore the trust fund balance. Other Changes. Two duration reduction states issued bonds to repay their loans. On the benefits side, one state lowered its benefit amount, according to CRS. In addition, 8 of the 9 duration reduction states made changes to program eligibility. These included changes to requirements for individuals regarding earnings or employment history, as well as changes to rules addressing conduct for which an individual could be disqualified. In one state, eligibility changes were enacted to reverse previous expansions of eligibility under federal law, such as eligibility for those who claimed UI on the basis of part-time employment. Actions taken to address program integrity were reported by all 7 of the 9 duration reduction states that provided information, and included activities to address overpayments, detect fraud, and impose penalties for noncompliant employers. State officials reported that these actions were taken in response to both federal and state initiatives. The 4 states we examined that did not reduce duration also reported making similar program changes in raising employer taxes, tightening eligibility, and strengthening program integrity. Specifically, 3 of the 4 comparison states reported increasing employer taxes. Of the 3 comparison states that had loans, two—Indiana and Tennessee—reported increasing employer taxes (see app. III). The third state with a loan, Ohio, was considering changes to its UI program, including to employer taxes. The fourth state, Washington, did not require a loan, and officials told us that the state's taxable wage base—the maximum amount of an employee's wages subject to tax—was raised through a provision in state law that took effect automatically, while employer taxes were reduced. Regarding eligibility, 3 of the 4 comparison states reported recent actions to tighten eligibility, such as by strengthening work requirements. The comparison states also reported taking actions to strengthen program integrity, such as efforts to address overpayments. The 4 comparison states varied in the extent to which they considered reducing maximum duration. Indiana, Ohio, and Tennessee were similar to the duration reduction states, for example, in terms of having had weak trust fund balances before the recession. However, UI officials in Indiana told us that benefit amounts for some claimants had already been reduced in 2011, and no further actions have been taken on the benefit side. In Ohio, which has a large outstanding loan balance, state UI officials told us that a bipartisan group of legislators was considering changes to the program, including a potential reduction in maximum duration. Officials in Tennessee and Washington told us that duration reduction had not been considered in their states. According to information provided by an official in one comparison state, eligibility changes were enacted to reverse previous expansions of eligibility under federal law, such as eligibility based on part-time employment. In the duration reduction states, those UI claimants who would have been eligible to receive benefits beyond the new maximum would receive less in total benefits in the absence of federal UI programs. The foregone benefit for those individuals who would have exhausted benefits under the previous duration can be estimated as the product of the number of weeks of the reduction and the average weekly benefit amount.
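The arithmetic behind this estimate is straightforward; a minimal Python sketch is below, using the Michigan figures discussed next. It is an illustration of the calculation described in this report, not a GAO tool.

def foregone_benefit(previous_max_weeks, new_max_weeks, avg_weekly_benefit):
    # Foregone benefit for a claimant who would have exhausted benefits under
    # the previous maximum: weeks lost times the average weekly benefit amount.
    weeks_lost = previous_max_weeks - new_max_weeks
    return weeks_lost * avg_weekly_benefit

# Michigan example: maximum duration reduced from 26 to 20 weeks, with an
# average weekly benefit of $273.
print(foregone_benefit(26, 20, 273))  # 1638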
For example, Michigan had an average weekly benefit of $273 in the third quarter of 2014 and the maximum benefit duration was 20 weeks (a reduction of 6 weeks from the previous maximum). A claimant in Michigan who would have been eligible for 26 weeks of benefits absent the reduction could receive 20 weeks of benefits. The claimant's foregone benefit can be estimated as $1,638 (or 6 times the average weekly benefit amount). Benefits foregone by individuals in the states whose durations do not fluctuate with unemployment rates (flat maximums) ranged from $289 to $1,638 (see table 2). As expected, foregone benefits would vary for individuals who would have exhausted benefits under the previous duration in the 4 states where maximum durations are tied to the unemployment rates—Florida, Georgia, Kansas, and North Carolina. The benefit amount foregone by individuals in these states ranged from $1,370 to $2,926 (see table 3). One potential rationale for tying maximum durations to the unemployment rate is that a lower unemployment rate signals that more jobs are available and, consequently, a shorter UI duration may be sufficient to find employment. In such a scenario, UI claimants may find jobs sooner and may not be affected by the decreased maximum duration. On the other hand, improvement in the unemployment rate is not the only factor that affects unemployment levels—lower unemployment rates can be caused in part by individuals giving up the job search altogether and dropping out of the labor force. Some state UI directors told us their states examined the average duration on UI when legislators were considering where to set the new maximum durations. For example, Georgia officials told us that average duration had been below 14 weeks for years, and was recently closer to between 11 and 13 weeks. On the other hand, average duration on UI does not always reflect the average length of unemployment for several reasons—for example, an unemployment spell can exceed the maximum weeks of UI benefits. When we examined the average length of unemployment for persons in these states, we found that it is generally longer than the state's new maximum benefit duration. In the states that reduced duration, the average length of unemployment for all unemployed persons—not just those receiving UI—for 2014 ranged from almost 24 weeks to nearly 44 weeks. The maximum durations in these states ranged from 14 to 25 weeks, as of October 2014 (see table 4). When federal UI benefits were in effect (most recently generally from 2009 until the end of 2013), those individuals who were eligible to receive UI benefits for the maximum total state and federal duration would have received substantially less in benefits following reduction, since duration in each federal benefit program depends, in part, on the duration of state benefits. Specifically, in a state that reduced benefits from 26 to 20 weeks, those claimants who would have received state and federal benefits for up to 93 weeks before the reduction would receive benefits for up to 72.4 weeks after the reduction, as illustrated in figure 5. However, UI benefit durations would not change for those claimants who would have received benefits for less than 72.4 weeks before the reduction. For example, prior to any reductions in maximum benefit durations, an individual who found employment after 36 weeks of receiving UI benefits would have received 26 weeks of state benefits and 10 weeks of federal emergency unemployment benefits.
All else equal, such an individual would have still received a total of 36 weeks of benefits in a scenario in which the state reduced maximum duration, for example, to 20 weeks, although the source of the benefit would change: 20 weeks of state benefits and 16 weeks of federal emergency unemployment benefits (see fig. 6). To illustrate how reductions in state benefit durations could have affected UI claimants who were eligible to receive benefits for the maximum state and federal durations, we calculated a hypothetical foregone benefit amount showing how the reduction in state maximum duration could affect benefits paid through the federal programs. For this hypothetical scenario (for example in May 2013), we assume that a claimant was eligible for the maximum benefit duration available, and that 20 weeks of federal extended benefits and all four tiers of emergency benefits were in effect, although this was not the case in all states. In such a scenario, hypothetical foregone benefits could range from around $700 to over $20,000 (see table 5). In light of our findings that some individuals may have received less in total UI benefits due to duration reduction, we also examined research on the impact of decreases in UI benefits on individuals. The literature we reviewed considered the theoretical basis for and empirical research on the implications for individuals of changes in UI benefits (amounts and duration) in terms of labor market behavior, poverty, and enrollment in social safety net programs. Theories of economic efficiency generally propose that the ideal UI benefit amount and duration would prevent sharp reductions in the claimant's household spending without creating incentives for an unnecessarily lengthy job search. Benefits that are too high prolong job searches and can elevate unemployment, while benefits that are too low reduce spending during jobless spells and cause workers to accept suboptimal employment. According to the research, some claimants facing shorter UI benefit durations may find employment, while others may leave the labor force. Some models show that a longer benefit period—and thus a longer job search—can result in better job offers that enable workers to be more productive, although empirical support for this possibility is limited. Nevertheless, with low or unavailable benefits, research has found that some people give up on seeking employment and leave the labor force altogether. Unemployment benefits promote labor force participation, both on the "front end" by reducing the layoff risk in work covered by UI, and on the "back end" in the event of layoff, by reducing labor force exit. In our own work, some of the state UI directors and stakeholders we interviewed told us that they hoped reduced benefit duration would increase reemployment. However, the available research on UI duration and claimants' reemployment offers little support to the premise that reducing duration increases reemployment. In an economy with few jobs relative to the number of job seekers—as was the case during the recession and the slow recovery when the ratio of job seekers to job openings rose to as much as six to one—a shorter benefit period is not likely to return individuals to the labor force sooner. One study estimated that as many as one-third of claimants who exhaust benefits are unable to find work (David Grubb, Assessing the Impact of Recent Unemployment Insurance Extensions in the United States, Working Paper, Paris: Organization for Economic Cooperation and Development, May 2011). However, it is difficult to determine how many people do or do not find jobs once they stop receiving benefits because of limited data on individuals who have left the program.
Empirical research indicates longer durations prolong spells in the program. However, the implication of longer spells for the overall unemployment rate is ambiguous. Shortened durations could lead to reemployment, but the reemployed UI claimant might have taken a job otherwise obtained by someone outside the program. Alternatively, shorter durations could also lead to earlier exit from the labor force. Lengthened durations could lead to longer spells in the program, but also, with the benefit of more search time, to the program participant finding a job for which they are better suited. Studies that specifically consider net employment for the entire labor force indicate little effect from shorter benefit durations. For the same reason, research finds that shorter durations have a negative effect on consumption in families of unemployed workers, since shorter benefit durations do not necessarily result in rapid reemployment. Some research finds that reduced benefits—including those resulting from reduced durations—lead to a greater incidence of poverty among those eligible for UI benefits. UI has been shown to reduce poverty; the Census Bureau estimated that UI benefits kept 1.2 million people out of poverty in 2013 (Carmen DeNavas-Walt and Bernadette D. Proctor, Income and Poverty in the United States: 2013, U.S. Census Bureau Current Population Reports, September 2014), and one study estimated that UI benefits reduced the poverty rate among unemployed workers from 22.5 percent to 13.6 percent. Research has also considered whether UI benefits reduce enrollment in other safety net programs, such as the Supplemental Nutrition Assistance Program (formerly known as food stamps). However, longer benefit durations may delay applications that would eventually be made to other programs, such as SSDI. For example, a 2010 study found that UI extensions reduce SSDI claims, while a 2013 study found no relation between benefit exhaustion and disability claims. We examined UI program and economic indicators, and found that, on average, individuals in reduction states were less likely to participate in UI and more likely to leave the labor force than individuals in nonreduction states. Additionally, while the rates at which claimants exhausted their UI benefits increased during the recession, and declined since 2010, these rates were consistently higher in duration reduction states on average as compared to states that did not reduce duration. Although duration reductions may have affected some of these indicators, it is difficult to attribute causation, given the many other program changes made by the states and federal government, as well as the changes occurring in the economy. For more information on our analysis of individual states, see appendix V. In the presence of federal UI benefit programs, reducing maximum state UI benefit durations affects federal program costs in two opposing ways. First, at the front end, claimants use federal benefits earlier than they would have absent a state reduction, so the federal government pays some costs that states otherwise would have paid. For example, in states that reduced the maximum duration of state UI benefits from 26 weeks to 20 weeks, those claimants who were eligible for benefits for more than 26 weeks transitioned to federal benefits 6 weeks earlier. In addition, federal benefits were paid to some claimants who would not have received any federal benefits absent the reduction.
Specifically, in states that reduced duration to 20 weeks, federal benefits were paid to any eligible claimants who exited the program during weeks 21 through 26. Second, at the back end, federal benefits were not available for as long as they would have been absent a reduction in state durations, which potentially led to some federal savings. For example, claimants could receive up to 67 weeks of federal benefits when the maximum state benefit duration was 26 weeks. After a reduction to 20 weeks, those claimants could receive a maximum of 52.4 weeks of federal benefits. As a result, some of the upfront costs that were shifted to the federal government in weeks 21 through 26 are offset by shorter federal benefit durations at the back end, as shown in table 6. The net cost to the federal government due to the reduction in state benefit durations is difficult to measure because the amount and duration of federal program benefits depend on both a claimant's state benefits and how long he or she is eligible. For example, we found that before duration reduction, a claimant who received benefits for 75 weeks—26 weeks of state benefits and 49 weeks of federal benefits—would receive fewer total weeks of benefits after reduction in state duration to 20 weeks, but more weeks of federal benefits. Specifically, the claimant would receive 20 weeks from the state, and then 52.4 weeks of federal benefits, providing a total of 72.4 weeks of UI benefits. (See fig. 7.) Over time, some claimants find jobs or exit the program for other reasons. As a result, there are likely to be fewer claimants receiving benefits at the back end of the program than during the front end. Therefore, even though state duration reductions may have resulted in fewer weeks of federal benefits, costs to the federal government may have increased. In other words, the front end cost caused by an earlier transition to federal programs by more claimants may exceed any savings generated by paying fewer weeks of federal benefits to those claimants still receiving benefits at the back end of the federal programs. It is difficult to measure the magnitude of the front end costs and back end savings because, while DOL collects aggregate data on benefits paid, it does not collect data on weekly benefits at the individual level. Nevertheless, using data provided by two states, Missouri and Georgia, we found that the earlier transition to federal benefits shifted some costs from these states to the federal government. In order to illustrate the front end cost shift to the federal government for one calendar year quarter for the weeks from the end of state benefits through week 26, for example, we analyzed data from Missouri and Georgia on the number of claimants who received weekly benefits, by benefit week. For each benefit week beyond the new state maximum duration, we multiplied the number of claimants by the average weekly benefit amount in each state. Missouri reduced its maximum duration from 26 weeks to 20 weeks effective April 2011. Using the average weekly benefit amount for claimants in Missouri, we calculated the federal government would have paid about $23.7 million in benefits for one calendar year quarter for the claimants who received benefits for weeks 21 through 26. See figure 8 for estimated costs to federal programs for the first 6 weeks after reduction.
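To show the mechanics of this front end calculation, the Python sketch below multiplies the number of claimants receiving benefits in each week beyond a new 20-week state maximum by an average weekly benefit amount and sums the result over weeks 21 through 26. The weekly claimant counts and the benefit amount are hypothetical placeholders, not Missouri's or Georgia's actual data.

# Hypothetical claimant counts for each benefit week beyond a new 20-week maximum.
claimants_by_week = {21: 30000, 22: 28000, 23: 26000, 24: 25000, 25: 23000, 26: 22000}
avg_weekly_benefit = 250  # hypothetical average weekly benefit amount, in dollars

# Benefits shifted to federal programs: claimants in each week times the
# average weekly benefit, summed over the weeks the state no longer pays.
cost_shifted = sum(count * avg_weekly_benefit for count in claimants_by_week.values())
print(f"Estimated quarterly benefits shifted to federal programs: ${cost_shifted:,}")
# With these placeholder figures, about $38.5 million for the quarter.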
Whether front end costs to the federal government were offset by back end savings generated by paying fewer weeks of benefits depends on the amount the federal government would have paid in the weeks eliminated by the reduction in state benefit durations. Specifically, if the federal cost of UI benefits for weeks 72.4 through 93 exceeds $23.7 million, then the federal government would have realized cost savings by paying benefits for weeks 21 to 26 while eliminating benefits in weeks 72.4 through 93. Conversely, if the federal cost of benefits for weeks 72.4 through 93 is less than $23.7 million, then the federal government would have realized cost increases as a consequence of the state duration reduction. Similarly, Georgia reduced its benefit duration from 26 weeks to a variable duration of 14 to 20 weeks, effective July 1, 2012. Using the average weekly benefit amount for claimants in Georgia, and assuming a maximum duration of 18 weeks, we calculated that the federal government would have paid about $27.2 million in benefits that the state would previously have paid for the claimants who received benefits for weeks 19 through 26, as shown in figure 9. Our analysis of these 2 states shows that there could be a net cost shift to the federal government, perhaps unintended, as a result of state duration reductions. As we have previously reported on state policy changes in past recessions, knowledge of the unintended consequences of such changes—including estimates of the impact on federal costs—can inform federal assistance to states in future recessions. Additionally, in our report on best practices in estimating costs, we have noted that cost estimates should identify and reflect budgetary uncertainties. However, DOL has not assessed the extent to which state duration reductions, adopted by states in the wake of the recent recession, affected costs to the federal government. To do so would require an analysis of weekly benefit data for individuals, which are collected by the states, and not by DOL. Without an analysis of the cost implications of duration reductions, DOL and Congress lack information needed to plan for future economic downturns and the equitable role of the federal government in the federal/state UI partnership. The relevant economic literature on UI that we reviewed, including analysis by the Congressional Budget Office (CBO), considers the benefits to be a source of economic stabilization, by increasing aggregate demand through a "multiplier effect" during downturns. The multiplier effect is derived from claimants' tendency to spend a high proportion of their benefits. The maximum duration of state benefits has not varied substantially since the 1960s, according to CRS. We reviewed the research that focuses on maximum durations of benefits (federal and state benefits combined). To the extent benefits are reduced, such as by a shortened benefit period, the effects on gross domestic product (GDP) and employment are likely to be negative, although the precise magnitude would be difficult to determine. The multiplier effect captures the effects of the initial spending, as well as the subsequent stream of spending by other parties. The UI claimant spends benefits, and those who benefit from this spending, in turn, increase their own spending, and so on. The sum total of all such ripple effects is embodied in estimates of a multiplier associated with the initial spending increase. Changes in individual states' durations could have less pronounced effects than national changes.
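The ripple effect just described can be illustrated with a stylized, textbook-style calculation: each round of recipients spends a fraction of what it receives, and the rounds sum toward a multiple of the initial spending. The Python sketch below is only that illustration; it is not the model used by CBO or the other estimates discussed in this report, and the marginal propensity to consume shown is a hypothetical figure.

def total_spending_effect(initial_spending, mpc, rounds=100):
    # Each successive round of recipients spends a fraction (mpc) of what it
    # receives; summing the rounds approximates initial_spending / (1 - mpc).
    total, spending = 0.0, float(initial_spending)
    for _ in range(rounds):
        total += spending
        spending *= mpc
    return total

# $1 million of benefits with a hypothetical marginal propensity to consume
# of 0.6 ripples into roughly $2.5 million of total spending.
print(round(total_spending_effect(1_000_000, 0.6)))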
Experts provide varying estimates of the extent to which an increase in government spending causes those whose incomes are directly benefited, such as the unemployed, to increase their own spending, and by extension increase aggregate demand and GDP. However, estimates of the multipliers in the short term are almost always positive. Responding in 2011 to a proposed extension of UI benefits, CBO estimated multipliers for UI of between .4 and 1.9 using a model that draws from multiple schools of thought, including leading models used by other institutions. A private sector estimate also found a positive effect: in 2012, Moody's Analytics estimated the multiplier for UI to be 1.55, also in relation to proposed extensions of benefits. Both estimates found that the multiplier for UI is generally higher than those for other types of spending, such as reductions in payroll taxes for workers and employers. Moreover, traditional models used by CBO and others have found that multipliers are estimated to be greater during downturns than at other times, because there is greater potential for stimulus at such times. However, there is professional disagreement regarding these models. (See fig. 10.) Similarly, some recent studies have estimated the effect on GDP and employment of UI benefit terminations, as decreases in benefits can also have ripple effects. In 2013, for example, the Council of Economic Advisers and DOL used private-sector and CBO estimates to determine that discontinuation of federal emergency benefits could reduce GDP by .2 to .4 percentage points. This report also estimated that terminating these federal benefits could result in a potential loss of 240,000 jobs. Additionally, in December 2011, using a CBO multiplier, the Joint Economic Committee estimated that continuing federal benefits could generate up to 400,000 jobs overall. Identifying any effects from 9 states' duration reductions on the economy as a whole would be complicated by potentially offsetting factors. For example, the effect of a spending decrease in one program, in this case, UI, could be mitigated by a spending increase in another program, because multipliers work in both directions. Furthermore, experts disagree on the extent to which an increase in government spending may provoke offsetting behavior by other entities, based on our review of relevant literature. For example, UI claimants may spend their benefits, but taxpayers may reduce their spending in anticipation of higher taxes to service government debt, canceling the intended stimulus effect. With regard to the role UI plays in stabilizing the economy during economic downturns, the program stands out as an important component of the federal government's automatic stabilizers. It avoids the shortcomings of other types of fiscal stimulus because it is highly targeted to individuals with low income and a high likelihood of spending the benefits, and it is timely because it promptly increases in periods of rising unemployment and falls as the economy recovers.
In the recent recession and slow recovery, 9 states chose to reduce the maximum duration of benefits paid to individuals and, as previously mentioned, none of the states that we interviewed reported any restoration of the previous maximum duration. As we have shown, these states’ actions will lead to reductions in total benefits for some claimants. If total benefits were reduced, the UI program’s objectives to provide relief to unemployed individuals and help stabilize the economy during downturns would be adversely affected. The states’ decisions to reduce benefit durations reflect the flexibility afforded states that can help them make adjustments appropriate to their particular circumstances during challenging economic times. In addition, a larger federal role during downturns is consistent with the part that UI plays as an economic stabilizer. Yet the state duration reductions also had the unintended consequence of recasting the federal role—causing the federal programs to fund weeks of benefits that were formerly the responsibility of states. Further, these costs were shifted to the federal government without necessarily providing more weeks of total benefits for the individual. DOL does not have information about the costs shifted to the federal government and about the changes in total durations resulting from the states’ actions. As we have previously reported on state policy changes in past recessions, knowledge of the unintended consequences of such changes—including estimates of the impact on federal costs— can inform federal assistance to states in future recessions. Without an analysis of the extent to which costs were shifted to the federal government as a result of state duration reductions, the agency and Congress lack information needed to plan for future economic downturns and the equitable role of the federal government in the federal/state unemployment insurance partnership. To inform the design of any future federal UI programs, the Secretary of Labor should examine the implications of state reductions in maximum UI benefit duration on federal UI costs, for example, by modeling the net effect of paying federal benefits earlier to more beneficiaries, albeit for a possibly shorter period of time, and develop recommendations for the program, if appropriate. We provided a draft of this report to DOL for review and comment. In its written comments, reproduced in appendix VI, DOL agreed with our recommendation. Specifically, DOL noted that additional study would be useful, and indicated it will begin to assess an approach for determining the implications of reductions in maximum duration on federal costs. DOL noted that measuring the net cost to the federal government is difficult, and we acknowledge this difficulty. However, understanding the federal cost associated with state duration reductions will inform any proposed modifications to the UI federal-state partnership and the balance of costs. DOL noted that it has proposed incentives to states to maintain maximum durations of 26 weeks, included in the President’s fiscal year 2016 budget. For example, one proposal would make several changes to the extended benefits program, including providing 100 percent federal funding for states with 26-week maximum durations. DOL also provided technical comments, which we incorporated as appropriate. Additionally, we provided selected state UI agencies with a draft of pertinent sections and incorporated their technical comments as appropriate. 
We also asked an external expert to review the report, and made technical changes based on this review as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Labor, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. Please contact me on (202) 512-7215 or at sherrilla@gao.gov if you or your staff have any questions about this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VII. To address the objectives of this request, we used a variety of methods. Specifically, we reviewed relevant federal laws and regulations and state laws, and confirmed information regarding state laws with relevant state officials; interviewed federal unemployment insurance (UI) program officials and state UI officials in 7 states that reduced duration and 4 states that did not reduce duration, and in 4 of the 7 states that reduced duration, we also interviewed other stakeholders with an interest in UI duration, such as employer groups and advocates; conducted a cluster analysis using data from the Department of Labor’s (DOL) Office of Unemployment Insurance, the Bureau of Labor Statistics (BLS), and other sources; analyzed data on a range of variables using data from DOL and BLS; calculated survival rates (the probability that a claimant will continue receiving benefits after a given week) based on data provided to us; conducted an economic literature review on key implications of UI benefits for individuals; and conducted an economic literature review that focused primarily on the stimulative effects of UI, and identified reasonable conclusions about the likely economic effects of duration reduction. To identify the circumstances in which states reduced the maximum duration of state benefits, we interviewed federal UI program officials and state UI officials in 7 of the 9 states that reduced duration and 4 states that did not reduce duration (Indiana, Ohio, Tennessee, and Washington). We selected Indiana, Ohio, and Tennessee based on their similarity to the duration reduction states on certain criteria, including presence of a trust fund loan and the size of the loan; geographic location; average high cost multiple; total taxable resources; and total unemployment rate. In addition, we selected Washington on the basis of expert recommendation and mention in selected studies. Washington was among a minority of states that did not require a federal UI trust fund loan at any point during the recession and recovery. The UI officials in Florida and North Carolina did not respond to our questions. We also conducted site visits to two duration reduction states—Georgia and Michigan—where we interviewed a wide range of stakeholders, including employer groups (such as the state Chambers of Commerce and affiliates of the National Federation of Independent Business), legislators who supported and opposed duration reduction, academic experts, governor’s workforce policy staff, and advocates (such as the National Employment Law Project and similar state-level organizations). 
We selected these states based on the magnitude of the duration reduction and the structure of the duration reductions (i.e., a mix of flat and variable maximum durations), timing of duration reduction, and geographic diversity. In the 2 duration reduction states where we were unable to interview UI officials—Florida and North Carolina—we interviewed advocates and employer groups. See table 7 for the range of stakeholders interviewed for each state. In our interviews we asked questions about the circumstances that led to duration reduction, trust fund solvency, other recent changes to the UI program, estimates of cost savings or individuals affected, broader economic effects, and reemployment programs. We also interviewed academic experts regarding these topics, including Dr. Jeffrey Wenger, University of Georgia; Dr. Christopher J. O'Leary, W.E. Upjohn Institute for Employment Research; Dr. Patrick Conway, University of North Carolina; Dr. Michael Leachman and Dr. Chad Stone, Center on Budget and Policy Priorities; Dr. H. Luke Shaefer, University of Michigan; and Dr. Wayne Vroman, The Urban Institute. In addition, Dr. Vroman reviewed a draft of the report. Additionally, we conducted a cluster analysis using data from DOL's UI program, the Bureau of Labor Statistics (BLS), and other sources. Our analysis included numerous variables, including industry composition, population over 55, unemployment rates, and trust fund loans. Cluster analysis methods assessed the degree to which these variables simultaneously were similar within various possible groups of states but were different across the groups. We used these methods to identify characteristics that were shared among states that reduced duration. Cluster analysis allowed us to identify broad, shared patterns among states across multiple variables at once, which yielded insights that can be more difficult to discern by comparing states on individual characteristics one at a time. In this way, cluster analysis can discover patterns in data, but it cannot explain why they exist or confirm cause-and-effect relationships. We used a particular form of cluster analysis, known as hierarchical agglomerative methods, to identify potential clusters of states and their decisions to reduce UI benefits. We selected variables related to program benefits and financing, as well as variables exogenous to the program, such as states' capacity to tax, selected state demographic information, and state industry composition. After collecting the variables above for all 50 states and the District of Columbia, we standardized the scales of all variables such that each variable's mean was equal to 0 and its variance was equal to 1. Because the natural scales of the variables were generally percentages, the specific method of standardization should not strongly influence our results. After standardizing the scales, we calculated a multivariate Euclidean distance matrix for all variables and states. We then applied a hierarchical agglomerative clustering algorithm to this distance matrix, which used average linkage methods to form various clusters at increasing distances. We examined the results of the clustering algorithm to identify the individual variables that appeared to influence the results strongly. We further assessed, in concert with the political homogeneity of the state legislature and governorship, the degree to which states in various possible clusters reduced UI benefit duration.
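For readers interested in how such a procedure can be carried out, the Python sketch below follows the same general steps described above (standardizing the variables, computing a Euclidean distance matrix, and applying hierarchical agglomerative clustering with average linkage) using SciPy. The input data are random placeholders, not the state-level variables GAO analyzed, and the number of clusters chosen for inspection is arbitrary.

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
X = rng.normal(size=(51, 8))  # 51 jurisdictions by 8 hypothetical variables

# Standardize each variable to mean 0 and variance 1.
X_std = (X - X.mean(axis=0)) / X.std(axis=0)

# Condensed Euclidean distance matrix across all pairs of jurisdictions.
distances = pdist(X_std, metric="euclidean")

# Hierarchical agglomerative clustering with average linkage.
Z = linkage(distances, method="average")

# Cut the tree into an arbitrary number of groups for inspection.
labels = fcluster(Z, t=5, criterion="maxclust")
print(labels)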
This allowed us to identify a group of benefit reduction states (Arkansas, Florida, Georgia, Missouri, North Carolina, and South Carolina) that were similar on the characteristics we analyzed, as well as benefit reduction states (Illinois, Kansas, and Michigan) that were not similar to this group. In addition, we identified comparison states (Indiana, Ohio, and Tennessee) that the algorithm clustered with the benefit reduction states but which had not reduced benefit duration. The comparison states helped us to examine why states in similar circumstances chose not to reduce benefits. For more information on the variables we examined, see table 8. In addition, we used data from the National Conference of State Legislatures to identify the partisan composition of state legislatures and governorships when duration reductions were adopted. These data were analyzed separately from the cluster analysis, because cluster analysis requires continuous, rather than categorical, variables. To identify the individual implications of duration reduction, we analyzed UI program data on a range of variables using data from DOL’s Office of Unemployment Insurance and BLS, including the value of the foregone state and federal benefits for individuals who reach the maximum duration. We used data from DOL’s Employment and Training Administration 5159 (Claims and Payment Activities) and 218 (Benefit Rights and Experience) reports, which states submit to DOL on a monthly basis. These reports include information on average weekly benefit amounts, initial claims, recipiency rates, and exhaustions. For selected variables, we analyzed data from 2006, before the recession began, to 2014, focusing on the 9 duration reduction states and the 4 states we selected that did not reduce duration. We also analyzed data from the Bureau of Labor Statistic’s Current Population Survey—such as length of unemployment—and Local Area Unemployment Statistics—such as seasonally adjusted employment rates. The Current Population Survey is the nation’s source of official government statistics on employment and unemployment, and it is conducted on a monthly basis with about 60,000 households. The Local Area Unemployment Statistics program provides monthly estimates of employment and unemployment for approximately 7,300 areas. We calculated survival rates (the probability that a claimant will continue receiving benefits after a given week) based on data provided to us by Georgia and Missouri. We obtained aggregate data for each quarter on the number of claimants receiving state benefits, by benefit week, for approximately a year prior to duration reduction through two quarters following reduction. We then analyzed data from the quarter closest to each state’s policy change that did not appear seasonally inflated, in order to estimate a baseline survival function prior to the policy change. Michigan provided data from a sample of claims drawn from the time period we requested. We used the estimated survival functions to estimate the possible impact of reducing maximum benefit duration in three ways. First, we calculated the probability that claimants would be affected by a maximum duration reduction, equal to the estimated survival probability at the new maximum duration. Second, we calculated the total number of claimants affected by the policy change, equal to the survival probability at the new maximum duration multiplied by the population of claimants receiving benefits at the beginning of the quarter. 
(We did not calculate this quantity for Michigan, due to uncertain population sizes from which our sample was drawn.) Finally, we calculated the benefit that a claimant receiving the average benefit amount in the period shortly before the policy change could have expected to lose due to a shorter maximum duration, equal to the survival probability at the new maximum duration multiplied by the total benefits received by the average recipient over the weeks exceeding the new maximum duration. We also asked Washington’s state Employment Security Department to project the implications of reducing benefits from 26 weeks to 20 weeks, using the state’s Benefit Financing Model. According to Washington officials, the model was originally developed by Wayne Vroman of the Urban Institute as part of an earlier analysis of program solvency conducted for Washington in the mid-1990’s. The Washington model has continued to be used and supported by the Employment Security Department since 2000 with a review of the model completed by Dr. Vroman in June 2007, and the department also conducts quarterly benchmarking on the results. The model was developed to model current law projections and legislative changes impacting Washington’s trust fund account. According to Dr. Vroman, it is an actuarial model, and actuarial projections always have an element of uncertainty with the degree of uncertainty increasing as the projection extends further into the future. Macro factors such as the unemployment rate, the inflation rate and the level of statewide employment present major uncertainties. We also reviewed selected economic literature on key implications of UI benefits for individuals, such as labor force attachment, job search behavior, poverty reduction, and participation in other federal programs such as Social Security Disability Insurance, specifically on the effects of changing benefit levels or changing duration of eligibility for benefits. We obtained recommendations for studies from internal GAO and external UI researchers and policy experts, including DOL officials; searched various databases for peer-reviewed journal articles and other publications; and reviewed policy and research organization websites for relevant studies. Based on this research, we identified reasonable conclusions about the likely implications of duration reductions for individuals. As noted in this report, research on the questions discussed reaches different conclusions. To identify what is known about the economic effects of reductions in benefit duration, we conducted an economic literature review that focused primarily on the stimulative effects of UI, and based on that research, we identified reasonable conclusions about the likely economic effects of duration reduction. In addition, we interviewed researchers from the Congressional Budget Office to understand its economic multiplier model. As noted in this report, research on the questions discussed reaches different conclusions. We have not done an exhaustive review of the voluminous literature on this topic. Because external data were significant to each of our research objectives, we assessed the reliability of the publicly and privately held data obtained from federal agencies and an association. To assess the reliability of DOL data sets, we administered a survey form that was specifically tailored to the system in question and addressed data uses, internal controls, and data entry practices. 
Once each survey was completed, we reviewed responses to assess the adequacy of the internal controls and processes in place. We determined that each data set was sufficiently reliable for the analytical purposes of this report. For data on partisan composition, we used data from the National Conference of State Legislatures. For data on total taxable resources, we obtained data from the Department of the Treasury. For these data sources beyond DOL, we obtained information on how the relevant data were generated and maintained and determined they were sufficiently reliable for the cluster analysis. We conducted our work between November 2013 and April 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Financial Characteristics of Selected States

[Table: financial characteristics of selected states, including the maximum trust fund loan (in millions), the average loan per covered employee, and the total taxable resources per capita index.] This figure represents the average loan per covered employee for the non-duration reduction states that incurred loans. This is the median total taxable resources per capita index for all states that did not reduce duration. When DOL data showed the same maximum trust fund loan balance in 2 or more successive quarters, the most recent quarter is shown. According to DOL, South Carolina qualified for avoidance of its FUTA tax credit reduction. See 26 U.S.C. § 3302(g).

[Table: maximum duration available (weeks), maximum duration following duration reduction (weeks), and maximum duration as of October 2014 (weeks), by state; in several states the maximum duration varies on a sliding scale with the state unemployment rate, with increases in the unemployment rate above specified thresholds triggering additional weeks of benefits.] Florida and North Carolina did not provide information about the maximum durations that were in effect following duration reduction or as of October 2014. We imputed these durations based on the applicable unemployment rates, as calculated by BLS. Illinois' duration reduction was only applicable to claims filed in 2012. State officials described these changes as temporary. In some cases, automatic readjustments may be tied to the condition of the trust fund balance. For example, according to state officials, in Michigan, if the trust fund reaches $2.5 billion for 2 consecutive quarters, the taxable wage base will revert from $9,500 to its previous level of $9,000. Michigan issued bonds in the amount of $3.3 billion to repay its loan. Although employers must pay a special assessment to repay the bond, the interest rate was more favorable to the state than the federal loan would have been, according to state officials. Additionally, officials state that, by repaying the federal loan in this way, the state avoided FUTA tax credit reductions for its employers. These states reported changes to the definition of misconduct among the non-monetary eligibility changes. In February 2013, North Carolina enacted legislation that reduced benefit amounts as of July 2013, thereby losing eligibility for the emergency benefits program, according to CRS.
However, states that made changes to their benefit amounts before March 1, 2012 are not subject to the nonreduction requirement. Pub. L. No. 112-96, § 2144, 126 Stat. 156, 171 (2012). We asked Washington State's UI agency, the Employment Security Department, to project the implications of reducing benefits from 26 weeks to 20 weeks, using the state's Benefit Financing Model. The model was originally developed by Wayne Vroman of the Urban Institute as part of an earlier analysis of program solvency conducted for Washington in the mid-1990s. According to state officials, the Washington model has continued to be used and supported by the state since 2000, with a review of the model completed by Dr. Vroman in June 2007, and the state also conducts quarterly benchmarking on the results. The model was developed to produce current law projections and to assess legislative changes affecting the UI trust fund account. Table 16 below shows the impact of a reduced maximum duration on key UI benefit variables in 2013 under two scenarios. The first scenario reflects current law assumptions, while the proposal shows the same benefit variables assuming a 20-week maximum duration. Reducing the maximum duration from 26 weeks to 20 weeks would have led to a 9.5 percent reduction in total benefit payments in 2013. According to state officials, Washington has a system with two taxes. The first tax is experience-based and indexed to the benefit ratio. The other is a "social tax" that is assessed yearly and is mandated by statute to collect the amount required to make sure the UI trust fund remains solvent. The state UI agency applied the 9.5 percent reduction in weeks compensated that was calculated for 2013 to the UI model starting in 2015. As a result, UI tax collections would be reduced by $381 million from 2016 through 2020, and tax rates in all years from 2016 forward would also be decreased. The Employment Security Department estimated that by 2020, duration reduction would increase the UI trust fund balance by $257 million. The economic recovery following the recession affected trends such as the average duration on UI and the average length of unemployment. The national unemployment rate decreased from a high of 10 percent in October 2009 to 5.6 percent in September 2014. In addition, the magnitude of the recession in terms of the substantial numbers of private-sector jobs lost, and more modest gains in jobs during the recovery, had an impact on the trends we analyzed (see fig. 11). Furthermore, despite the decline in the unemployment rate, the participation rate for the civilian labor force continues to decline to historically low levels, reflecting departures from the labor force altogether. The effects of the changes in state UI benefit durations were mitigated by the duration of federal benefits, which were largely available when state duration reductions went into effect. For example, the total number of weeks available, including federal benefits, reached 46, 53, and 99 weeks at different points in time in various states, according to the Congressional Research Service. The availability of these federal benefits would affect any impact of duration reductions. UI program changes, such as restrictions on eligibility, could also have affected participation in the UI program.
For example, according to state officials, South Carolina’s new definition of “gross misconduct” now results in immediate, full disqualification for any claimant engaging in certain behaviors, such as illegal drug use during either work or non-work hours. Other states, including Georgia, implemented stronger work search requirements, according to their UI directors. Florida mandated that applications only be done online and that applicants complete a skill assessment intended to help develop a reemployment plan, which, according to advocates with whom we spoke, could deter some individuals from completing their applications. In addition to the contact named above, Nagla’a El-Hodiri (Assistant Director), Chris Morehouse (Analyst-in-Charge), Susan Aschoff, James Bennett, Susan Bernstein, Jesse Elrod, Alex Galuten, Susan Offutt, Kirsten Lauber, Kathy Leslie, Max Sawicky, Linda Siegel, Amy Sweet, Jeff Tessin, and Frank Todisco made significant contributions to this report. Unemployed Older Workers: Many Experience Challenges Regaining Employment and Face Reduced Retirement Security. GAO-12-445. Washington, D.C.: April 25, 2012. Unemployment Insurance: Economic Circumstances of Individuals Who Exhausted Benefits. GAO-12-408. Washington, D.C.: February 17, 2012. State and Local Governments: Knowledge of Past Recessions Can Inform Future Federal Fiscal Assistance. GAO-11-401. Washington, D.C.: March 31, 2011. Unemployment Insurance Trust Funds: Long-standing State Financing Policies Have Increased Risk of Insolvency. GAO-10-440. Washington, D.C.: April 14, 2010. Unemployment Insurance: Low-Wage and Part-Time Workers Continue to Experience Low Rates of Receipt. GAO-07-1147. Washington, D.C.: September 7, 2007. Unemployment Insurance: More Guidance and Evaluation of Worker- Profiling Initiative Could Help Improve State Efforts. GAO-07-680. Washington, D.C.: June 14, 2007. Unemployment Insurance: States’ Tax Financing Systems Allow Costs to Be Shared among Industries. GAO-06-769. Washington, D.C.: July 26, 2006. Unemployment Insurance: Enhancing Program Performance by Focusing on Improper Payments and Reemployment Services. GAO-06-696T. Washington, D.C.: May 4, 2006. Unemployment Insurance: Factors Associated with Benefit Receipt. GAO-06-341. Washington, D.C.: March 7, 2006. Unemployment Insurance: Better Data Needed to Assess Reemployment Services for Claimants. GAO-05-413. Washington, D.C.: June 24, 2005. Unemployment Insurance: Increased Focus on Program Integrity Could Reduce Billions in Overpayments. GAO-02-697. Washington, D.C.: July 12, 2002. Unemployment Insurance: Role as Safety Net for Low-Wage Workers Is Limited. GAO-01-181. Washington, D.C.: December 29, 2000. Unemployment Insurance: Program’s Ability to Meet Objectives Jeopardized. GAO/HRD-93-107. Washington, D.C.: September 28, 1993. Unemployment Insurance: Trust Fund Reserves Inadequate to Meet Recession Needs. GAO/HRD-90-124. Washington, D.C.: May 31, 1990. | As part of the nation's UI system, overseen by DOL, states provide benefits to eligible unemployed workers, with additional weeks of benefits sometimes provided by the federal government in times of economic stress. Since the 1960s, states have had maximum UI benefit durations of 26 weeks or longer. However, since 2011, nine states have reduced their maximum benefit durations: Arkansas, Florida, Georgia, Illinois, Kansas, Michigan, Missouri, North Carolina, and South Carolina. GAO was asked to review the states' reductions. 
GAO examined (1) the circumstances in which states reduced the maximum duration of UI benefits, (2) the implications of these reductions for individuals, (3) the effects on federal UI costs, and (4) their broader economic effects. GAO reviewed relevant federal and state laws; visited Georgia and Michigan, which had different approaches to reducing durations; analyzed UI program data from 2006 (before the recession) to 2014; and reviewed relevant economic research. The unemployment insurance (UI) system, a federal and state partnership that provides benefits to eligible workers who have lost their jobs, was under financial pressures during the recent recession and recovery. Since 2011, nine states reduced the maximum length of time (duration) individuals could receive state benefits. These states reduced duration from 26 weeks to as few as 12 weeks, with 20 weeks being the most common new maximum. Compared to states that did not reduce duration, those that did generally had higher unemployment rates and weaker UI trust fund balances and were more likely to have federal loans as their UI reserves became depleted. Officials in five of the nine states said that replenishing their trust fund balance was a key rationale for reducing benefit duration. GAO found that most of the nine states, like other states, also increased employer taxes for their UI program and made other benefit reductions such as by changing UI eligibility rules. Reductions in state benefit durations resulted in some individuals receiving substantially less in total UI benefits. During the period from 2009 through 2013, individuals who exhausted their state benefits could receive additional weeks of benefits from the federal government. The duration of federal benefits was based on the duration of state benefits; shorter maximum state benefit periods resulted in shorter maximum federal benefit periods. As a result, some individuals received substantially less in total UI benefits because the durations of both their state and federal benefits were reduced. For example, in 2013, an individual in a state that had shortened its maximum benefit duration to 20 weeks could have received up to 52.4 additional weeks of federal benefits, for a total of 72.4 weeks. However, had the state maximum duration remained at 26 weeks, that individual could have received up to 67 weeks of federal benefits, for a total of 93 weeks. In contrast, individuals eligible for UI benefits for relatively short periods of time were unaffected by the reduced durations. The effects of these reductions on federal UI program costs are unclear. Although GAO's prior work on past recessions found it can be useful for federal agencies to assess the unintended consequences of state policy responses, the Department of Labor (DOL) has not assessed the extent of any cost shift to the federal government. The net impact on federal UI costs would depend on how reductions in the duration of state benefits affect the number of people receiving federal benefits and for how long. On the one hand, federal costs are increased to the extent that state duration reductions shift individuals to federal benefits earlier. On the other hand, federal costs are decreased to the extent that fewer weeks of federal UI benefits are available. However, because DOL has not analyzed state data on individuals' weekly benefits, it remains unclear whether the federal government incurred a net cost due to the states' duration reductions. 
Relevant research suggests that reductions in benefit duration may reduce the positive effects of UI on the economy. The economic literature that GAO reviewed, including analysis by the Congressional Budget Office, generally indicates positive macroeconomic effects from the UI program, based on the likelihood that benefits are spent, thus providing a stimulus to the economy. GAO recommends that the Secretary of Labor examine the implications of state duration reductions for federal UI program costs and develop recommendations, if warranted. DOL agreed with GAO's recommendation and indicated it will begin to assess an approach for studying the implications of reductions in maximum duration on federal costs. |
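The stimulative mechanism described above, in which benefit dollars that are largely spent ripple through the economy, can be illustrated with a short, purely hypothetical sketch. The multiplier value, benefit totals, and percentage reduction below are illustrative assumptions, not estimates from CBO or from the studies reviewed for this report.

```python
"""Minimal illustration of how a benefit-duration reduction could shrink
the stimulative effect of UI spending, using purely hypothetical numbers.
"""

hypothetical_multiplier = 1.5              # output generated per dollar of UI benefits paid (assumed)
benefits_paid_26_weeks = 1_000_000_000.0   # hypothetical annual state benefit payments ($)
reduction_share = 0.095                    # e.g., a 9.5 percent drop in benefits paid

# Benefit payments after the duration reduction.
benefits_paid_20_weeks = benefits_paid_26_weeks * (1 - reduction_share)

# Apply the assumed multiplier to each level of benefit payments.
output_effect_before = hypothetical_multiplier * benefits_paid_26_weeks
output_effect_after = hypothetical_multiplier * benefits_paid_20_weeks

print(f"Illustrative output effect at 26 weeks: ${output_effect_before:,.0f}")
print(f"Illustrative output effect at 20 weeks: ${output_effect_after:,.0f}")
print(f"Reduction in illustrative stimulus:     ${output_effect_before - output_effect_after:,.0f}")
```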
Chief Acquisition Officers provide a focal point for acquisition in agency operations. The SARA legislation requires that CAOs: be noncareer employees; have acquisition management as their primary duty; and have the agency's Senior Procurement Executive (SPE) report directly to them without intervening authority, or serve as both CAO and SPE. The SARA legislation outlined seven acquisition management functions CAOs are expected to perform within their agencies. Subsequent to the enactment of SARA, governmentwide directives and guidance have assigned CAOs responsibility for additional functions, such as internal control reviews of the acquisition function under OMB Circular A-123 and ensuring the quality of federal procurement data. The key functions of the CAO we reviewed are listed below; additional information on these functions is also available in appendix II: monitoring and evaluating agency acquisition activities; increasing the use of full and open competition; increasing performance-based contracting; making acquisition decisions; managing agency acquisition policy; acquisition career management; acquisition resources planning; and conducting acquisition assessments under OMB Circular A-123. The SARA legislation also established a Chief Acquisition Officers Council that is chaired by OMB's Deputy Director for Management, and whose activities are led by the OFPP Administrator. The Council serves as the principal interagency forum for monitoring and improving the federal acquisition system. Its activities include developing recommendations for the Director of OMB on acquisition policies and requirements; sharing best practices; and helping to address the hiring, training, and professional development needs of the acquisition workforce. Our prior work has identified strong, effective leadership and the appropriate placement of the acquisition function within agencies as among the key factors needed to facilitate efficient, effective, and accountable acquisition processes. Clear, strong, and ethical executive leadership, including a CAO, is key to obtaining and maintaining organizational support for executing the acquisition function. Most of the agencies required to appoint a CAO spend a substantial amount of funding each year through contracts to acquire goods and services in support of their missions, as shown below in table 1. Yet, acquisition management challenges persist among many of these agencies. Among the 16 agencies, 11 had acquisition-related issues identified as a major management challenge by their respective Inspector General (IG) in its most recent report on agency management challenges. Additionally, our high-risk list includes a number of areas related to acquisition management. GAO, Framework for Assessing the Acquisition Function at Federal Agencies, GAO-05-218G (Washington, D.C.: Sept. 2005). The agencies within the scope of our review generally have established CAOs in a way that satisfies two of three key aspects of the legislation. The CAOs in place at these agencies are generally political appointees situated at top levels in their organization, and at most agencies, the Senior Procurement Executive reports directly to the CAO. However, very few agency CAOs have acquisition management as their primary duty, the third key requirement of the SARA legislation. Most of these CAOs have other significant management responsibilities within their agencies, such as serving as the Chief Financial Officer (CFO).
Additionally, some CAOs and acquisition officials said it was a challenge in determining how to fill the position within their agency, because the SARA legislation did not provide an additional leadership slot specifically for the CAO position. Tenure in the CAO position also has been relatively short, as the average CAO tenure was about 2 years, and several agencies have had frequent turnover in CAOs. As shown below in figure 1, most agency CAOs are political appointees and have the Senior Procurement Executives report directly to them, but few have acquisition management as their primary duty. Twelve of the 16 agencies had a permanent CAO in place at the time we administered our questionnaire. Three agencies (Education, Department of Veterans Affairs (VA) and Department of Housing and Urban Development (HUD)) had an acting CAO, and the position was vacant at Energy, which is currently relying on the Senior Procurement Executive as its lead acquisition official. All 12 permanent CAOs were political appointees, and 1 of the 3 acting CAOs was a political appointee. At 13 agencies, the Senior Procurement Executive reports directly to the Chief Acquisition Officer without intervening authority. The Senior Procurement Executive does not report directly to the CAO at 2 agencies—HHS and NASA. Officials at these agencies told us there is an informal reporting relationship between the two positions. HHS also noted that despite the indirect organizational relationship between the two positions, the CAO and Senior Procurement Executive communicate frequently on the department’s acquisition policies, priorities, and programs. Only 3 of the CAOs in place during our review (DHS, GSA, and VA) reported that acquisition management was their primary duty, another requirement of the SARA legislation. When asked to estimate the amount of time spent on their CAO duties relative to their other responsibilities, the average among the 14 agencies that provided a response was about 27 percent. Furthermore, only 3 of the 12 permanent CAOs in place during our review had prior experience in acquisition or procurement prior to serving as CAO. Although SARA does not require the CAO to have a background in acquisition, this is one of many factors that could affect the CAO’s success in the position. As shown below in table 2, almost all of the CAOs in our review had additional management responsibilities and few had an official title of Chief Acquisition Officer. For example, at the Departments of State, Agriculture, and Commerce, the Assistant Secretary for Administration serves as the CAO. These officials’ additional areas of responsibility, among other things, include financial management, information management, equal employment opportunity, and emergency preparedness. Although acquisition management is supposed to be a CAO’s primary duty, several CAOs we met with told us that having responsibility for additional management functions was not a detriment and often helped them positively influence acquisition management across their agency: At half of the 16 agencies, the Chief Acquisition Officer also serves in at least one additional “Chief” officer position. Similar to the SARA legislation, the legislation that created the Chief Human Capital Officer (CHCO) and Chief Information Officer (CIO) positions required that those respective functions be the primary duty of each position. 
We have raised concerns in prior work about those positions having additional significant responsibilities and whether an individual serving in these positions can deal effectively with an agency's management challenges. Although this could be a concern with respect to CAOs who do not have acquisition management as their primary duty, the Office of Federal Procurement Policy noted that an agency's Senior Procurement Executive provides high-level attention to the management of the acquisition function. Some CAOs and acquisition officials also pointed out that the SARA legislation did not provide agencies an additional position specifically for the CAO, which created a challenge for agencies to determine how to fill the CAO position. For example, the NASA CAO noted in her questionnaire response that the agency has a low allocation of politically appointed positions. As a result, NASA gave the CAO duties to the CFO. NASA's CAO stated that because the agency spends such a large amount of its budget through obligations on contracts, her role as the CFO is closely connected with her additional role as the CAO to effectively conduct acquisition management at NASA. Furthermore, the NASA CAO thought that having these two functions integrated was a positive aspect of her current position and helped her be an effective CAO, as opposed to having acquisition operate in a separate stovepipe. The CAO at Commerce emphasized the positive aspects of the agency's organizational structure and approach to implementation of the CAO position. At Commerce, one individual serves in a number of roles that includes the CFO, CHCO, and Chief Performance Officer as well as the CAO. The CAO noted that this structure gave him the ability to integrate planning, budgeting, risk management, human resources, as well as acquisition to achieve the agency's mission. As the individual who ties these functional areas together, he indicated he has the authority to get other groups within Commerce to work together. The Commerce CAO also stated that while he oversees the department's budget as the CFO, he uses his CAO role to look at whether components have demonstrated a sound acquisition management approach in evaluating their budget requests. He also stated that if he were only the agency CAO he would not have as much authority in other functional areas to effectively manage the agency's acquisition function. Likewise, the CAO at DHS said that he has oversight of many different management functions such as finance, budgeting, human resources, as well as acquisition. While this arrangement may appear to be in conflict with the statutory requirement that acquisition management be the CAO's primary duty, he stated that having a larger area of responsibility gives him a fuller view of the entire acquisition cycle from requirements development and contract funding to service delivery. As a result, he reports that he spends a majority of his time on acquisition management issues because integrating the different management functions has a positive impact on the CAO's ability to effectively manage acquisitions across DHS. While the SARA legislation does not specify where CAOs should be located within their agency's organization, as shown below in figure 2, we found that almost all of the 16 CAOs were positioned at their agency's top management levels, reporting either to the agency head or to an official one level removed from the agency head.
The CAO at Energy reports to the Director of the Office of Management, who is more than one level removed from the agency head. The location of CAOs at high levels within their agencies may be by virtue of their official titles described above in table 2 rather than being specifically related to the CAO position. Nevertheless, several CAOs and acquisition officials we met with stressed the value of the CAO position in having access to agency leadership and other peers in ensuring that acquisition issues are being considered at top levels within the agency. Fourteen CAOs reported that they had at least sufficient access to their agency head, and that the CAO position was appropriately located for ensuring proper authority over their agency’s acquisition activities. Acquisition officials at the Department of Energy, where the CAO position has been vacant for several years, and whose questionnaire response noted that the CAO had neither sufficient nor insufficient access to the agency head, said that it would have been helpful to have a political appointee in the CAO role who could have high level interactions with agency leadership, better communicate acquisition related issues, and build effective working relationships with the CFO, CIO, and other senior agency officials. Additionally, acquisition officials with the Department of the Interior noted that as a political appointee, the CAO can work closely with other assistant secretaries in the department as well as with peers at other agencies and OMB. They added that with the CAO placed at the assistant secretary level, the position can be more focused on strategic decisions, and can make final decisions on how resources will be deployed to achieve goals. Similarly, the HHS CAO said that by virtue of her position, she is able to interact as a peer with the leaders of the agency’s operating divisions and communicate the acquisition priorities of the agency and administration. She added that being CAO affords her a “seat at the table” to discuss acquisition issues when the agency is making mission decisions. Twelve of the agencies have had a CAO serving in a permanent capacity more than two-thirds of the time since enactment of SARA, as shown below in figure 3. Education and VA have had a CAO serving in a permanent capacity less than 50 percent of the time. The remaining time the CAO position has been vacant or held by an official in an acting capacity. Despite most agencies’ ability to fill the position with a permanent CAO, turnover in the CAO position varied among agencies, as evidenced by the number of acting and permanent CAOs in place since SARA’s enactment. Half of the agencies have had four or fewer CAOs in place, while other agencies have had higher turnover in the CAO position. For example, GSA and Treasury have each had nine CAOs in place since creation of the CAO requirement. The high turnover at GSA and Treasury equate to an average tenure for each CAO of about 10 months at GSA and about 11 months at Treasury since late 2003. In contrast, Commerce and HHS have had only two CAOs over the same timeframe, with an average CAO tenure at each agency of more than 3.5 years. Since enactment of SARA, the average tenure of permanent CAOs has been 2.1 years. This is fairly consistent with a recent GAO review that found an average tenure of about 2.6 years for CIOs at 30 federal departments and agencies. 
While short tenures in the CAO position may be expected given the political nature of the position, this may work against an individual CAO’s ability to effectively implement needed changes in the acquisition function or new acquisition initiatives: Our prior work has noted that it can take 5 to 7 years to fully implement major change initiatives in large public and private sector organizations and to transform cultures in a sustainable manner, yet frequent turnover of political leadership in the federal government can make it difficult to obtain sustained attention to make needed changes. Among the 76 permanent and acting CAOs that have been in place since the enactment of SARA, only 3 served in the position for 5 years or more. CAOs reported they have differing levels of involvement in the management of their agency’s acquisition activities. For example, most CAOs indicated they were extremely or very involved in managing acquisition policy, but only somewhat or not at all involved in making acquisition decisions or conducting acquisition assessments. Generally, CAOs saw their role as providing high-level oversight of the acquisition function as opposed to day-to-day management, for which they typically relied on the Senior Procurement Executive and other senior procurement officials. Many CAOs told us that the amount of their involvement is related to several factors, such as the nature of goods and services that the agency buys and the extent the agency has a centralized or decentralized acquisition function. For example, in some agencies, CAOs are less involved because agency units and bureaus operate more autonomously with respect to acquisition management. Our review of acquisition regulations and policies found that the roles and responsibilities of the CAO position are not described in detail across all the 16 agencies within the scope of our review. Without clearly defined roles and responsibilities within each federal agency, it will be challenging for these agencies to more permanently institutionalize the CAO position within their organizational structure and realize the benefits from the added attention it brings to acquisition management. The SARA legislation broadly outlined acquisition management functions for CAOs and left it up to each agency how to implement them. Overall, CAOs reported varying levels of involvement in the various acquisition management functions we reviewed, as shown below in figure 4: CAOs reported being most involved in managing the direction of acquisition policy and least involved in two activities—making acquisition decisions and conducting assessments of the acquisition function under OMB Circular A-123. Only three CAOs (Agriculture, Labor, and DHS) reported being extremely or very involved in all eight acquisition management functions. In contrast, officials at four agencies (Education, Energy, HUD, and State) who were either serving as the acting CAO, recently appointed as the new permanent CAO, or serving as the senior procurement official while the CAO position was vacant, reported being somewhat or not at all involved in seven or more of the acquisition management functions. Many CAOs see their role as providing high-level acquisition oversight rather than the day-to-day acquisition management that is more typically provided by other career procurement officials such as the Senior Procurement Executive and heads of contracting activities. 
As shown below in figure 5, a majority of CAOs reported that they delegate day-to- day responsibility for all eight CAO acquisition management functions to the Senior Procurement Executive and/or other senior procurement officials such as heads of contracting activities and competition advocates. The SARA legislation does not preclude CAOs from delegating these functions, and it is not surprising that there is a high degree of delegation given that CAOs have other significant management responsibilities and few had extensive prior experience in acquisition management. Several CAOs we met with stated that they delegated acquisition management functions to others to ensure that these duties are performed by highly experienced procurement officials. Additionally, they could focus on other acquisition issues such as program management and rely on the agencies’ acquisition professionals to manage the agency’s contract award process and acquisition workforce. For example, the DHS CAO reported delegating seven of the eight CAO acquisition management functions to the Senior Procurement Executive and others, and said that he must take a larger view of the acquisition function that includes program management while the Senior Procurement Executive is more focused on the contract award process and management of contracting officers and contracting specialists. CAOs’ delegation of their responsibilities may also be expected given the roles of other agency officials in acquisition management. The Senior Procurement Executive position had been in place at federal agencies for many years before the CAO position was established. This position is typically filled by a career employee who is responsible for the management direction of the agency’s procurement system, including implementation of agency unique procurement policies, regulations, and standards. In addition, while increasing the use of full and open competition is one of the CAO responsibilities outlined in SARA, each executive agency is also required to designate a competition advocate who is responsible for promoting full and open competition, among other things. Similarly, CAOs are responsible for acquisition career management, but the Office of Federal Procurement Policy also requires civilian executive agencies to designate an acquisition career manager who is responsible for, among other things, managing the development and identification of the acquisition workforce and providing input regarding short term and long term human capital strategic planning for the acquisition workforce. CAOs we spoke with stated there is no “one-size fits all” solution for how best to structure the CAO position and integrate the acquisition management responsibilities outlined by SARA. Many CAOs emphasized that the level of acquisition management oversight they provide is based upon several factors, which include the nature of the goods and services that the agency buys and the amount of decentralization in the agency’s acquisition function. For example, the CAO at HHS said that she is very involved in acquisition policy issues but the oversight of day-to-day acquisition management issues is handled by other officials because much of what HHS buys through contracts is done to support their operating divisions rather than acquisitions of major systems. 
The CAOs at both HHS and Interior reported that their agencies have a decentralized acquisition management structure where heads of operating divisions and bureaus execute most acquisition authority within their two agencies. HHS also stated that although the CAO does not approve acquisition decisions, acquisition management is achieved through the CAO’s roles in financial management, performance measurement, and acquisition and grants policy and accountability. In comparison, several CAOs at other agencies play a greater role in the acquisition process. These agencies also tended to have major acquisition programs and projects. The CAO at DHS reported having approval authority for individual acquisitions and since assuming the position in 2010 has revised the acquisition oversight structure. The CAO stated that these changes in the oversight structure at DHS are intended to decrease acquisition program risk and provide better insight into budget, schedule and performance information for approximately 135 major acquisition programs for which the CAO serves as the Acquisition Decision Authority. CAOs at other agencies who said they are more involved in acquisition management also reported having some form of decision authority over certain acquisitions. For example, the CAO at Commerce serves as co-chair of the agency’s Investment Reviews, which provide oversight, review, and advice to the Secretary and Deputy Secretary on both information technology (IT) and non-IT investments that meet certain criteria. This advice includes recommendations for approval or disapproval of funding for new systems and investments, or major modifications to existing systems or investments. Similarly, at the Department of Labor, a Procurement Review Board recommends to the CAO approval or disapproval of various acquisition decisions that meet certain thresholds or conditions and serves as a senior-level clearinghouse to review proposed noncompetitive acquisitions. At many agencies, the CAO position was not clearly defined in documents that would form the basis for more permanently institutionalizing the CAO within their organizational leadership structure. Clearly defined roles and responsibilities for each stakeholder in the acquisition process is a key element of an effective acquisition function, as outlined in GAO’s framework for assessing the acquisition function within federal agencies. We found that the amount of detail on a CAO’s agency-specific authorities and responsibilities varies greatly based on the agency’s Federal Acquisition Regulation (FAR) supplement and other policy documentation we collected. As shown in table 3, at some agencies, the CAO position is described in detail while for others the only information about the CAO’s authorities and acquisition management responsibilities under SARA is a passing reference to the legislation that established the position. For example, the CAO position is defined or designated in FAR supplements or acquisition manuals by just 6 of the agencies. Detail on the CAO’s specific acquisition management responsibilities was listed in other policy documentation for only 6 of the agencies. At 7 agencies, the CAO position is not defined in their FAR Supplement or acquisition manual, nor are the acquisition management responsibilities listed in other policy documentation. Additionally, we found that agencies varied in how their acquisition policy guidance delegates authority for procurement matters with respect to the CAO. 
At half of the agencies, authority for procurement matters is delegated from the agency head through the CAO position to other agency officials. In contrast, at the other 8 agencies, this authority is delegated from the agency head directly to other agency officials such as the Senior Procurement Executive and/or bureau heads, bypassing the CAO. This may be due to agencies neglecting to update their acquisition policies and regulations since creation of the CAO position or to reflect a more recent organizational change. For example, the GSA Organizational Manual still refers to an Office of the CAO that reports to the Administrator, which, according to the CAO in place during our review, did not reflect the organizational reporting structure in the agency. This lack of fully defined CAO roles and responsibilities, and at some agencies, outdated policies, may be an obstacle to ensuring that the CAO position is more permanently institutionalized within the agencies’ acquisition management and senior leadership structures. CAOs at the 16 agencies generally did not report facing significant challenges related to the CAO position, such as the level of influence they have in their agency’s acquisition process, amount of control over acquisition budget resources, and access to agency leadership. However, most CAOs reported that not having enough staff to manage acquisitions was moderately to extremely challenging. As GAO and others have reported in recent years, the capacity and capability of the federal government’s acquisition workforce to oversee and manage contracts has been a challenge. Most CAOs did not believe any changes were needed to improve their effectiveness and also felt that they had the appropriate degree of authority to effectively fulfill their acquisition management functions. We asked agency CAOs to indicate how much six management and resource issues that we identified challenged them in carrying out their responsibilities. As shown below in figure 6, CAOs generally answered that most areas we identified were not challenges for them. No CAOs reported being very or extremely challenged by their employment status (career official versus political appointee) in fulfilling their acquisition management functions or in having sufficient access to agency leadership. The CAOs at DHS, HHS, and State reported five of these areas as being not at all challenging. In contrast, the CAO at GSA and the career acquisition official at Energy reported being moderately to extremely challenged in most of the areas. Despite the lack of challenges reported by CAOs related to most areas, 11 CAOs reported the sufficiency of staff to manage acquisitions as a moderate to extreme challenge. These responses echo concerns from our prior work that the capacity and the capability of the federal government’s acquisition workforce to oversee and manage contracts have not kept pace with increased spending for increasingly complex purchases. Additionally, 6 of the 16 agencies’ IGs have identified the acquisition workforce as a source of serious management challenge in their most recent management challenge reports issued during 2011. However, none of the CAOs at these 6 agencies reported the sufficiency of acquisition staff as extremely or very challenging. When asked if any other changes were needed to improve their effectiveness, 10 out of 16 CAOs reported that no changes were needed. Six CAOs did provide some suggestions. 
For example, Energy’s response to our questionnaire stated that the CAO position needs improved resource support and full engagement with the agency’s senior leadership team. At EPA, the CAO responded that it would be helpful if there were a better understanding of the contracting process by agency management. The GSA CAO, who left the position during our review, believed that returning the CAO position to a direct report to the GSA Administrator would improve the position’s effectiveness at her agency. Following the completion of our CAO questionnaire, GSA appointed an Acting CAO who reports to the Acting GSA Administrator. CAOs at Transportation, Interior, and HUD reported that more budgetary resources and acquisition workforce staff are needed to improve their effectiveness. Despite these responses and other issues raised in our report, almost all the CAOs believed that they had the appropriate authority to fulfill their acquisition management responsibilities. More than 8 years after the enactment of the SARA legislation, there is wide variation in how agencies have implemented the CAO position. On one hand, agencies have generally filled the CAO position with political appointees who sit at relatively high levels within their agencies in a position to ensure that acquisition is receiving attention from agency leadership. Many CAOs and acquisition officials we met with cited this as a key benefit of the CAO position. On the other hand, there are inconsistencies in the implementation of the law across agencies, with very few CAOs having acquisition management as their primary responsibility, although many CAOs cited the benefits to integrating acquisition management with their additional responsibilities. The CAO position is only one factor of many in an efficient, effective, and accountable acquisition function in agencies. Having an experienced Senior Procurement Executive is another. There is no one-size-fits-all approach to how to organize an effective acquisition function, and a CAO’s role should be suited to the nature and volume of an agency’s acquisition activities. Yet, agencies should ensure that they are maximizing their chances for success by having CAOs that are in a position to influence agency leadership and serve as a strong advocate for acquisition management, which includes having clearly defined roles and responsibilities for the CAO. Not all agencies have these, however, and may be missing an opportunity to ensure that the CAO position is fully institutionalized within agencies’ acquisition management and senior leadership structures. Given CAOs’ short tenures, a lack of defined roles and responsibilities could hinder a CAO’s ability to maximize time in the position and serve as an effective advocate for acquisition management. To strengthen the functions of CAOs in acquisition management, we recommend that the Administrator of the Office of Federal Procurement Policy, working with the CAO Council, issue guidance to agencies directing them to ensure that CAO roles and responsibilities are more clearly defined in accordance with law and regulations, tailored to suit the agency’s acquisition activities, and documented as appropriate. We sent copies of a draft of this report to OMB and the 16 agencies within the scope of our review. OMB’s Office of Federal Procurement Policy provided comments via e-mail, in which it concurred with our recommendation. 
The office also suggested that the report further highlight the role of the Senior Procurement Executive in providing day-to- day leadership of an agency’s acquisition function. We considered this suggestion and made changes to the report as appropriate. We received communications from each of the 16 agencies, with 15 providing no substantive comments. HHS provided additional information on the roles and responsibilities of the CAO, which we incorporated into the draft. HHS’s written comments are reproduced in appendix III. We are sending copies of this report to other interested congressional committees, the Director of the Office of Management and Budget, and the Secretaries of Agriculture, Commerce, Education, Energy, Health and Human Services, Homeland Security, Housing and Urban Development, the Interior, Labor, State, Transportation, the Treasury, and Veterans Affairs; the administrators of the Environmental Protection Agency and the National Aeronautics and Space Administration, and the Acting Administrator of General Services. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841 or by e-mail at woodsw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs are on the last page of this report. Key contributors to this report are listed in appendix IV. Our objectives were to assess: (1) how agencies have filled the Chief Acquisition Officer (CAO) position; (2) the extent to which CAOs are involved in performing the acquisition management functions set forth in the Services Acquisition Reform Act of 2003 (SARA) legislation and Office of Management and Budget (OMB) guidance, and (3) what challenges, if any, agency CAOs report in fulfilling their responsibilities for acquisition management. Our review did not assess the effectiveness of individual CAOs or individual agencies’ acquisition functions. To address our objectives, we reviewed the SARA legislation and directives from OMB’s Office of Federal Procurement Policy to identify the key roles and responsibilities of the CAO position. We also reviewed previous GAO work on assessing the acquisition function and the implementation of other chief officer positions in the federal government. To learn more about CAOs’ characteristics, as well as CAOs’ involvement in acquisition management functions and challenges faced in fulfilling their responsibilities, we developed and administered a questionnaire by e-mail in an attached Microsoft Word form to the 16 civilian agencies within the scope of our review. We pretested the questionnaire to ensure that the questions were relevant, clearly stated, and easy to understand. We also solicited comments on the draft questionnaire from members of the Chief Acquisition Officers Council. The questionnaire requested information on, among other things, the CAOs’ reporting relationships, involvement in acquisition management functions within the agency, the extent to which the CAO had delegated their acquisition management responsibilities to other officials, and challenges identified by GAO that CAOs may have experienced in fulfilling their responsibilities. We sent the questionnaire to agencies in November 2011. All questionnaires were returned by March 2012. We received responses from all 16 agencies, though not all agencies provided responses to each question. 
To provide additional information on CAOs’ characteristics, involvement in acquisition management functions and challenges faced, as well as to corroborate information provided in the questionnaire responses, we collected and reviewed agencies’ organizational charts that showed the CAO’s position relative to the head of the agency and other senior officials; letters of delegation or other documents that formally designate the appointment of the CAO, the CAO’s resume or curriculum vitae describing their qualifications and experience related to the CAO position; applicable policies, guidance, position descriptions or functional statements for both the CAO and Senior Procurement Executive positions; applicable policies or orders that delegate the CAO’s responsibilities to other acquisition officials; agency acquisition function assessments performed under OMB Circular A-123; Acquisition Human Capital Plans or similar documents; agency strategic plans and performance reports; agency-specific acquisition regulations and acquisition manuals; and descriptions of acquisition metrics or performance measures the agency tracks. We also asked each agency to supply the name, time in office, and circumstances (whether they were in an acting or permanent position and whether they were a career employee or political appointee) of each of the individuals who had served as agency CAO and Senior Procurement Executive since enactment of the SARA legislation in November 2003. To complement information gathered through the questionnaire and agency documentation, we conducted follow-up interviews to discuss the CAO’s roles and responsibilities with CAOs and acquisition officials at seven agencies: Commerce, Department of Homeland Security (DHS), Department of Health and Human Services (HHS), Interior, Energy, GSA, and the National Aeronautics and Space Administration (NASA). We used a nongeneralizable sample of agencies based upon the following criteria: review of the questionnaire responses, the amount of procurement spending as a portion of the agency’s fiscal year 2010 budget, and whether the agency’s Inspector General had identified acquisition-related issues as a major management challenge. We also met with officials from OMB’s Office of Federal Procurement Policy to discuss the roles and responsibilities of agency CAOs. We conducted this performance audit from October 2011 to July 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
Appendix II describes the acquisition management functions of the CAO that we reviewed. The descriptions of these functions are as follows:

Monitoring the performance of acquisition activities and acquisition programs of the executive agency, evaluating the performance of those programs on the basis of applicable performance measurements, and advising the head of the executive agency regarding the appropriate business strategy to achieve the mission of the executive agency.

Increasing the use of full and open competition in the acquisition of property and services by the executive agency by establishing policies, procedures, and practices that ensure that the executive agency receives a sufficient number of sealed bids or competitive proposals from responsible sources to fulfill the Government's requirements at the lowest cost or best value considering the nature of the property or service procured.

Increasing appropriate use of performance-based contracting and performance specifications.

Making acquisition decisions consistent with all applicable laws and establishing clear lines of authority, accountability, and responsibility for acquisition decision-making within the executive agency.

Managing the direction of acquisition policy for the executive agency, including implementation of the unique acquisition policies, regulations, and standards of the executive agency.

Developing and maintaining an acquisition career management program in the executive agency to ensure that there is an adequate professional workforce.

As part of the strategic planning and performance evaluation process, assessing the requirements established for agency personnel regarding knowledge and skill in acquisition resources management and the adequacy of such requirements for facilitating the achievement of the performance goals established for acquisition management; developing strategies and specific plans for hiring, training, and professional development to rectify any deficiency in meeting such requirements; and reporting to the head of the executive agency on the progress made in improving acquisition management capability.

In addition to the contact named above, John Oppenheim (Assistant Director); Matthew Drerup; Kristine Hassinger; Lauren Heft; Jean McSween; Roxanna Sun; and Robert Swierczek made key contributions to this report.
Most agencies have appointed Chief Acquisition Officers (CAO) in accordance with two of the three key requirements in the Services Acquisition Reform Act of 2003 (SARA): that the CAOs be political appointees and have agency Senior Procurement Executives report directly to them. However, few CAOs have acquisition management as their primary duty; other areas of responsibility included financial, information, and human capital management. Several CAOs noted that their additional responsibilities were not a detriment. Rather, they believe that performing multiple roles helps them positively influence acquisition management across their agencies. For example, the CAO at the Department of Commerce stated that his additional responsibilities gave him the ability to integrate planning, budgeting, risk management, human resources, and acquisition to achieve the agency's mission. CAOs reported varying levels of involvement in the acquisition management functions for which they are responsible. Generally, CAOs see their role as providing high-level oversight of the acquisition function as opposed to day-to-day management, which they typically delegated to the Senior Procurement Executive or other officials as permitted by the legislation. Many CAOs said that the amount of their involvement is related to several factors, such as the nature of goods and services that the agency buys and whether the agency has a centralized or decentralized acquisition function. Having clearly defined roles and responsibilities of stakeholders in the acquisition process is a key element of an effective acquisition function. Yet at many agencies, the statutory roles and responsibilities of the CAO position are not described in detail in acquisition regulations, policies, or other documentation. These agencies may be missing an opportunity to fully institutionalize the CAO position within their senior leadership structures. CAOs at the 16 agencies generally did not report facing significant challenges related to the CAO position, such as the level of influence they have in their agency's acquisition process, amount of control over acquisition budget resources, and access to agency leadership. Consistent with our prior work on the acquisition workforce, however, most CAOs reported that not having enough staff to manage acquisitions was moderately to extremely challenging. GAO recommends that the Administrator of OMB's Office of Federal Procurement Policy work with the CAO Council to issue guidance directing agencies to more clearly define CAOs' roles and responsibilities. The Administrator agreed with the recommendation.
Disability Insurance (DI) and Supplemental Security Income (SSI) are the two largest federal programs providing cash assistance to people with disabilities. Established in 1956, DI provides monthly payments to workers with disabilities (and their dependents or survivors) under the age of 65 who have enough work experience to be qualified for disability benefits. Created in 1972, SSI is a means-tested income assistance program that provides monthly payments to adults or children who are blind or who have other disabilities and whose income and assets fall below a certain level. To be considered eligible for either program as an adult, a person must be unable to perform any substantial gainful activity by reason of a medically determinable physical or mental impairment that is expected to result in death or that has lasted or can be expected to last for a continuous period of at least 12 months. Work activity is generally considered substantial and gainful if the person’s earnings exceed a particular level established by statute and regulations. In calendar year 2001, about 6.1 million working-age individuals (ages 18-64) received about $59.6 billion in DI benefits, and about 3.8 million working-age individuals received about $19 billion in SSI federal benefits. To obtain disability benefits, a claimant must file an application at any of SSA’s offices or other designated places. If the claimant meets the nonmedical eligibility criteria, the field office staff forwards the claim to the appropriate state Disability Determination Services (DDS) office. DDS staff—generally a team composed of disability examiners and medical consultants—review medical and other evidence provided by the claimant, obtaining additional evidence as needed to assess whether the claimant satisfies the program requirements, and make the initial disability determination. If the claimant is not satisfied with the DDS determination, the claimant may request a reconsideration within the same DDS. Another DDS team will review the documentation in the case file, as well as any new evidence the claimant may submit, and determine whether the claimant meets SSA’s definition of disability. In 2001, the DDSs made 2.1 million initial disability determinations and over 514,000 reconsiderations. If the claimant is not satisfied with the reconsideration, the claimant may request a hearing by an administrative law judge (ALJ). Within SSA’s Office of Hearings and Appeals (OHA), there are approximately 1,100 ALJs who are located in 140 hearing offices across the country. The ALJ conducts a new review of the claimant’s file, including any additional evidence the claimant submitted since the DDS decision. The ALJ may also hear testimony from medical or vocational experts and the claimant regarding the claimant’s medical condition and ability to work. The hearings are recorded, and claimants are usually represented at these hearings. In fiscal year 2001, ALJs made over 347,000 disability decisions. SSA is required to administer its disability programs in a fair and unbiased manner. However, in our 1992 report, we found that, among ALJ decisions at the hearings level, the racial difference in allowance rates was larger than at the DDS level and did not appear to be related to severity or type of impairment, age or other demographic characteristics, appeal rate, or attorney representation. We recommended, and SSA agreed, to further investigate the reasons for the racial differences at the hearings level and act to correct or prevent any unwarranted disparities.
Following our report, SSA undertook an extensive effort to study racial disparities in ALJ decisions at the hearings level, but weaknesses in available documentation preclude conclusions from being drawn. The study involved 4 years of data collection, outside consultants, and many staff who collected and analyzed data from over 15,000 case files. Although the results were not published, SSA officials told us that their statistical analyses of these data revealed no evidence of racial disparities. On the basis of our review of SSA’s internal working papers and other available information, we identified several weaknesses in sampling and statistical methods. Presently, SSA has no further plans to study racial disparities but, if it did, its ability to do so would likely be hampered by data limitations. In response to our 1992 report, SSA initiated a study of racial disparities at the ALJ level that involved several components of the agency. SSA obtained help in designing and conducting the study from staff in its Office of Quality Assurance and Performance Assessment; the Office of Research, Evaluation and Statistics; and the Office of Hearings and Appeals. SSA also created a new division within the Office of Quality Assurance—the Division of Disability Hearings Quality—to spearhead the collection of data needed to study racial disparities and to oversee ongoing quality assurance reviews of ALJ decisions. Data collection for this study was a large and lengthy effort. In order to construct a representative sample of cases to determine whether race significantly influenced disability decisions, SSA selected a random sample each month from the universe of ALJ decisions, stratifying by race, region, and decisional outcome (allowance or denial). This sample of over 65,000 cases was drawn over a 4-year period—from 1992 to 1996. Then, for each ALJ decision that was selected to be in the sample, SSA requested the case file and a recording of the hearing proceedings from hearing offices and storage facilities across the country. Obtaining this documentation was complicated by the fact that files were stored in different locations, depending on whether the case involved an SSI or DI claim, and whether the ALJ decision was an allowance or denial. In addition to obtaining files and tapes, the data collection effort included a systematic review of each case—the results of which SSA used, in part, for its analysis of racial disparities. Specifically, each case used in the analysis received three reviews: a peer review by an ALJ, a medical evaluation performed by one or more medical consultants (depending on the number and type of impairments alleged by the claimant), and a general review of the documentation and decisions by a disability examiner. In total, a panel of 10 to 12 ALJs, whose composition changed every 4 months, worked full-time to review cases. In addition, over a 4-year period, 37 to 55 staff, including disability examiners, worked full- time reviewing case files that were used for this study. Ultimately, about 15,000 cases received all three reviews necessary for inclusion in this study. During and after the 4-year data collection effort, SSA worked with consultants to analyze the data in order to determine the effect of race on ALJ decisions. SSA used descriptive statistics to show that overall application and allowance rates of African Americans differed from whites. 
In addition, SSA used multivariate analyses to examine the effect of race on ALJ decisions while controlling for other factors that influence decisions. One of SSA’s consultants—a law professor and recognized expert in disability issues—reviewed SSA’s analytical approach and evaluated initial results. In his report to SSA, this consultant expressed overall approval of SSA’s data collection methods, but made several recommendations on how the analysis could be improved—some of which SSA incorporated into later versions of its analysis. SSA subsequently hired two consulting statisticians to review later versions of the analysis. These statisticians expressed concerns about SSA’s methods and offered several suggestions. According to SSA officials, these suggestions were not incorporated into the analysis because they were perceived to be labor intensive and SSA was not sure the effort would result in more definitive conclusions. According to SSA officials, the agency’s final analysis of the data revealed no evidence of racial disparities, but the results were considered to be not definitive enough to warrant publication. Specifically, SSA officials told us that, by 1998, they found no evidence that race significantly affected ALJ decisions for any of the regions. However, these officials also told us that, due to general limitations of statistical analysis, especially as applied to such complex processes as ALJ decision making, they believed that they could not definitively conclude that no racial bias existed. Given the complexity of the results and the topic’s sensitive nature, SSA officials told us the agency decided not to publish the conclusions of this study. From our review of SSA’s internal working papers pertaining to the study, and information provided verbally by SSA officials, we identified several weaknesses in SSA’s study of racial disparities. These weaknesses include: using a potentially nonrepresentative final sample of cases in their multivariate analyses, performing only limited analyses to test the representativeness of the final sample, and using certain statistical techniques that could lead to inaccurate or misleading results. Although SSA started with an appropriate sampling design, its final sample included only a small percentage of the case files in its initial sample in part because staff were unable to obtain many of the associated case files or hearing tapes. SSA was not able to obtain many files and tapes because they were missing (i.e., lost or misplaced) or they were in use and were not made available for the study. For example, according to SSA officials, files for cases involving appeals of ALJ decisions to SSA’s Appeals Council—about half of ALJ denials—were in use and, therefore, excluded from the study. In addition, SSA officials told us that, due to resource constraints, not all of the obtained files underwent all three reviews, which were necessary for inclusion in SSA’s analysis of racial disparities. In the end, less than one-fourth of the cases that were selected to be in the initial sample were actually included in SSA’s final sample. With less than one-fourth of the sampled cases included in the final sample, SSA took steps to determine whether the final sample of cases was still representative of all ALJ decisions. 
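One straightforward form of such a representativeness check is to compare a key outcome—such as the allowance rate—between the cases retained in the final sample and the sampled cases that were dropped, and then test whether any difference is statistically significant. The following Python sketch illustrates a basic two-proportion z-test of this kind; the counts are invented for illustration and do not reflect SSA data or the specific analyses SSA performed.

```python
import math

def two_proportion_ztest(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test for the difference between two independent proportions."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability under the standard normal
    return p_a, p_b, z, p_value

# Invented counts: allowances among cases retained in the final sample versus
# sampled cases that were dropped (missing files, incomplete reviews, and so on).
retained_allowed, retained_total = 8_100, 15_000
dropped_allowed, dropped_total = 28_300, 50_000

p_in, p_out, z, p = two_proportion_ztest(retained_allowed, retained_total,
                                          dropped_allowed, dropped_total)
print(f"allowance rate, retained cases: {p_in:.3f}; dropped cases: {p_out:.3f}")
print(f"z = {z:.2f}, two-sided p-value = {p:.4f}")
```

With samples of this size, even a difference of a few percentage points will generally be statistically significant, so a test like this serves only as a first screen; the substantive size of any difference would also need to be assessed.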
While the investigation SSA undertook revealed no clear differences between cases that were and were not included in the final sample, we found no evidence that SSA performed certain analyses that could have provided more assurance of the sample’s representativeness. For example, SSA made some basic comparisons between claimants who were included in the final sample and those that were in the initial sample but not the final sample. SSA’s results indicate that these two groups were fairly similar in key characteristics such as racial composition, years of education, and years of work experience. However, we found no indication in the documentation provided to us that SSA tested whether slight differences between the two groups were or were not statistically significant. Further, we found no indication that SSA compared the allowance rates of these two groups. This is an important test because, in order to be statistically representative, claimants in the final sample should not have had significantly different allowance rates from claimants who were not included in the final sample. In addition, although children were not included in SSA’s analysis of racial disparities, SSA’s tests to determine the representativeness of the final sample included children in one group and excluded children from the other. By including children in one of the comparison groups, SSA could not assess whether characteristics of the adults in the two groups were similar. Another weakness, as documented in internal working papers available for our review, was the inclusion of certain variables in the multivariate analyses of ALJ decisions, which could lead to biased results. SSA guidelines clearly define the information that should be considered in the ALJ decision, and SSA appropriately included many variables that capture this information in its multivariate analysis. However, SSA also included several variables developed during the review process that reflected the reviewer’s evaluation of the hearing proceedings. For example, SSA included a variable that assessed whether the ALJ, in the hearing decision, appropriately documented the basis for his or her decision in the case file. This variable did not influence the ALJ’s decision, but evaluated the ALJ’s compliance with SSA procedures and should not have been included in the multivariate analysis. This and other variables that reflected a posthearing evaluation of ALJ decisions were included in SSA’s multivariate analysis. If these variables are associated with race or somehow reflect racial bias in ALJ decision making, including such variables in multivariate analysis will reduce the explanatory power of race as a variable in that analysis. For example, if a model includes a variable that may reflect racial bias—such as one that indicates the reviewer believed that the original ALJ decision was unfair or not supported—then that variable, rather than the race of the claimant, could show up as a significant factor in the model. The statisticians hired by SSA as outside consultants also expressed concern about the inclusion of these variables in SSA’s analyses. Finally, in its internal working papers, SSA used a statistical technique— stepwise regression—that was not appropriate given the characteristics of its analysis. Specifically, SSA researchers first identified a set of variables for potential use in their multivariate analysis—variables drawn mostly from data developed during the case file review process. 
Then, to select the final set of variables, SSA used stepwise regression. Stepwise regression is an iterative computational technique that determines which variables should be included in an analysis by systematically eliminating variables from the starting variable set that are not statistically significant. Using the results from this analysis, SSA constructed a different model for each of SSA’s 10 regions, which were used in SSA’s multivariate analysis to test whether African Americans were treated differently than whites in each region. Stepwise regression may be appropriate to use when there is no existing theory on which to build a model. However, social science standards hold that when there is existing theory, stepwise regression is not an appropriate way to choose variables. In the case of SSA’s study, statutes, regulations, rulings and SSA guidance establish the factors that ALJs should consider in determining eligibility, and thus indicate which variables should be included in a model. By using the results of stepwise regression, SSA’s regional models included variables that were statistically significant but reflected the reviewer’s evaluation of the hearing proceedings—which an ALJ would not consider in a hearing—and therefore were not appropriate. As mentioned earlier, including these variables may have reduced the explanatory power of other variables— such as race; this, in conjunction with the use of stepwise regression, may explain why race did not show up as statistically significant in the regional models. Had SSA chosen the variables for its model on the basis of theory and its own guidelines, race may have been statistically significant. The statisticians hired by SSA as consultants also noted this as a concern. According to an SSA official, the analysts directly responsible for or involved in the study conducted other analyses that were not reflected in the documentation currently available and provided to us. For example, this SSA official told us that the analysts involved in the study would have tested the statistical significance of slight differences between the cases included and not included in the final sample. This official also said that the analysts used multiple techniques in addition to stepwise regression— and ran the models with and without variables that reflected posthearing evaluations—and still found no evidence of racial bias. However, due to the lack of available documentation, we were unable to review these analyses or corroborate that they were performed. Since the conclusion of its study of racial disparities, SSA no longer analyzes race as part of its ongoing quality review of ALJ decisions, and SSA officials told us they have no plans to do so in the future. SSA still samples and reviews ALJ decisions for quality assurance purposes. However, since 1997, SSA no longer stratifies ALJ decisions by race before identifying a random sample of cases—a practice that had helped to ensure that SSA had a sufficient number of cases in each region to analyze decisions by race. Although the dataset used for SSA’s ongoing quality assurance review of ALJ decisions still includes information on race, SSA no longer analyzes these data to identify patterns of racial disparities. Even if SSA decided to resume its analysis of racial disparities in ALJ decisions, it would encounter two difficulties. 
First, SSA collects files for only about 50 percent of sampled cases in its ongoing review of ALJ decisions for quality assurance purposes, such that its final samples may be nonrepresentative of the universe of ALJ decisions. SSA uses this review data to produce annual and biennial reports on ALJ decision making. Data in these reports are also used to calculate the accuracy of ALJ decisions—a key performance indicator used in SSA’s 2000–03 performance plans pursuant to the Government Performance and Results Act. The reasons for obtaining only half of the files are the same potentially biasing ones as in the racial disparities study—files are either missing or not made available because the cases are in use for appeals or pending decisions. However, SSA’s annual and biennial reports do not cite the number or percentage of case files not obtained for specific reasons. In addition, SSA officials told us that they do not conduct ongoing analyses to test the representativeness of samples used for quality assurance purposes, and SSA’s annual and biennial reports do not address whether the final sample used for quality assurance purposes and for calculating the performance indicator for ALJ accuracy is representative of the universe of ALJ decisions. In addition to not obtaining about 50 percent of the case files, SSA officials told us that, due to limited resources, medical consultants and disability examiners review only a portion of the cases for which a file was obtained. Second, future analyses of racial disparities at either the DDS or hearings level are becoming increasingly problematic because, since 1990, SSA no longer systematically collects race data as part of its process of assigning Social Security Numbers (SSN). For many years, SSA has requested information on race and ethnicity from individuals who complete a form to request a Social Security card. Although this process is still in place, since 1990 SSA has been assigning SSNs to newborns through its Enumeration at Birth (EAB) program, and SSA does not collect race data through the EAB program. Under current procedures, SSA is unlikely to subsequently obtain information on race or ethnicity for individuals assigned SSNs at birth unless those individuals apply for a new or replacement SSN (due to change in name or lost card). As of 1998, SSA did not have data on race or ethnicity for 42 percent of SSI beneficiaries under the age of 9. As future generations obtain their SSNs through the EAB program, this number is likely to increase. Concurrent with SSA’s study of racial disparities, SSA’s Office of Hearings and Appeals took several steps to address possible racial bias in disability decision making at the hearings level. These steps included providing diversity training, increasing recruitment efforts for minority ALJs, and administering a new complaint process for the hearings level to help ensure fair and impartial hearings. The complaint process was intended, in part, to help identify patterns of possible racial and ethnic bias and other misconduct; however, this process lacks mechanisms to help OHA easily identify patterns of possible racial or ethnic bias for further investigation or corrective action. SSA’s OHA adopted a mandatory diversity sensitivity program in 1992. All of SSA’s incumbent ALJs were required to attend a 2- or 1-1/2-day course immediately after its development. In addition, the course (now 1 day in length) is included in a 3-week orientation for newly hired ALJs.
The course was designed and is conducted by an outside contractor. The course addresses topics such as cultural diversity, geographic diversity, unconscious bias, and gender dynamics through a series of exercises designed to help the ALJs understand how their thought processes, beliefs, and past experiences with people influence their decision-making process. OHA also increased its efforts to recruit minorities for ALJ and other legal positions, although the impact of these efforts on the racial/ethnic mix of SSA’s ALJ workforce has been limited. According to OHA officials, OHA has attended conferences held by several minority bar organizations, to raise awareness about the opportunities available at SSA to become an ALJ. In addition to having information booths and distributing information on legal careers at OHA, OHA presented a workshop called “How to Become an Administrative Law Judge at OHA,” at each conference. Despite these efforts, there have not been significant changes in the racial/ethnic profile of SSA ALJs. In addition to these efforts, in 1993 SSA instituted a complaint process under the direction of OHA that provides claimants and their representatives with a new mechanism for voicing complaints specifically about bias or misconduct by ALJs. The ALJ complaint process supplements and is coordinated with the normal appeals process. All SSA claimants have the right to appeal the ALJ decision to the Appeals Council and, in doing so, may allege unfair treatment or misconduct. According to OHA officials, the vast majority of allegations of unfair hearings are submitted by claimants or their representatives in connection with a request for Appeals Council review. Under the 1993 process, claimants or their representatives may also file a complaint at any SSA office, send it by mail, or call it into SSA’s 800 number service. Any complaints where there is a request for Appeals Council review are referred to the Appeals Council for its consideration as part of its normal review. For complaints where the complainant did not request an Appeals Council review, the complaint is reviewed by the appropriate Regional Chief ALJ, and the findings are reported to the Chief ALJ. Regardless of how the complaint was filed or which office reviewed the complaint, OHA’s Special Counsel Staff is notified of all claims and any findings from either the Appeals Council or the Regional Chief ALJ. On the basis of these findings, OHA may decide to take remedial actions against the ALJ, such as a counseling letter, additional training, mentoring or monitoring, an official reprimand, or some other adverse action. OHA’s Special Counsel Staff may also decide to conduct a further investigation. Regardless of which office handles the complaint, OHA acknowledges the receipt of each complaint in writing, notifies the complainant that there will be a review or investigation (unless to do so would disrupt a pending hearing or decision), and notifies the complainant concerning the results of the investigation. Officials from the Special Counsel Staff told us OHA receives about 700 to 1,000 complaints (out of 400,000 to 500,000 hearings) per year. About 90 percent of these are notifications from the Appeals Council that involve an allegation of bias or misconduct. Officials from the Special Counsel Staff also said that few complaints are related to race. 
For example, officials noted that, in response to a special request, Special Counsel Staff reviewed all 372 complaints filed during the first 6 months of 2001, and found that only 19 (5.1 percent) were in some way related to race. While the ALJ complaint process provides a mechanism for claimants to allege discrimination, it lacks useful mechanisms for detecting patterns of possible racial discrimination. In SSA’s public notice on the creation of this process, it was stated that SSA’s Special Counsel would "collect and analyze data concerning the complaints, which will assist in the detection of recurring incidences of bias or misconduct and patterns of improper behavior which may require further review and action." However, OHA’s methods of collecting, documenting, and filing complaints make this difficult to do. For example, in its instructions to the public, SSA directs complainants to describe, in their own words, how they believe they were treated unfairly. This flexible format for filing complaints may make it difficult for OHA to readily identify a claim alleging racial bias. In contrast, SSA’s Office of General Counsel’s complaint form specifically asks complainants to categorize their claim as being related to such factors as race or sex. Similarly, OHA does not use a standardized internal cover sheet to summarize key aspects of the review, such as whether the complaint involved racial or some other type of bias or misconduct, whether the complaint had merit, and what action, if any, was taken. The lack of a cover sheet makes it difficult to quickly identify patterns of allegations involving race that have merit. In order to determine whether patterns exist, OHA staff would have to reread each complaint. Additionally, OHA staff do not record key information about complaints—such as the nature of the complaint—in an electronic database that would allow patterns of bias to be identified easily. OHA’s Special Counsel Staff files complaints and related documents manually, and in chronological order by hearing office. According to OHA officials, this filing system was developed in 1993 when the process was created and complaint workloads were much lower. Today, SSA receives and reviews 700 to 1,000 complaints a year. In order to identify patterns of bias, Special Counsel Staff must not only reread each file but also tabulate patterns by hand—a time-consuming process that it does not perform on a routine basis (a brief sketch of the kind of electronic tabulation that would simplify this task appears at the end of this discussion). Finally, OHA does not currently obtain demographic information (such as race, ethnicity, and sex) on complainants—information that is important in identifying patterns of bias. These data are important for identifying patterns of possible racial bias because complainants—aware only of their own circumstances and lacking a basis for comparison—may not specifically allege racial bias when they file a complaint about unfair treatment. Without demographic data, it is impossible to discern whether certain types of allegations are disproportionately reported by one race (or sex) and whether further investigative or corrective action is warranted. Although SSA is currently obtaining less race data in its process of assigning SSNs, OHA staff could still obtain data on race and sex for most complainants from the agency’s administrative data. The steps SSA has taken over the last decade have not appreciably improved the agency’s understanding of whether or not, or to what extent, racial bias exists in its disability decision-making process.
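The following minimal sketch illustrates the kind of electronic tabulation described above: complaints stored as structured records and tallied by allegation type and complainant demographics. The records, field names, and categories are invented for illustration and do not represent OHA’s files, forms, or systems.

```python
from collections import Counter

# Invented complaint records; every field name and value here is hypothetical.
complaints = [
    {"hearing_office": "Region 4", "allegation": "racial bias",    "race": "African American", "merit": True},
    {"hearing_office": "Region 4", "allegation": "misconduct",     "race": "White",            "merit": False},
    {"hearing_office": "Region 7", "allegation": "racial bias",    "race": "African American", "merit": False},
    {"hearing_office": "Region 7", "allegation": "unfair hearing", "race": "Hispanic",         "merit": True},
]

# Tally complaints by allegation type, by allegation type and complainant race,
# and by type for complaints found to have merit, to surface possible patterns
# that would otherwise require rereading each paper file.
by_type = Counter(c["allegation"] for c in complaints)
by_type_and_race = Counter((c["allegation"], c["race"]) for c in complaints)
meritorious_by_type = Counter(c["allegation"] for c in complaints if c["merit"])

print(by_type.most_common())
print(by_type_and_race.most_common())
print(meritorious_by_type.most_common())
```

Even a simple structure such as this would allow staff to count allegations by type and demographic group on a routine basis rather than tabulating patterns by hand.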
SSA’s attempt to study racial disparities was a step in the right direction, but methodological weaknesses evident in SSA’s remaining working papers prevent our concluding, as SSA did, that there is no evidence of racial bias in ALJ decision making. SSA does not have an ongoing effort to demonstrate the race neutrality of its disability programs. Moreover, the continuing methodological weaknesses in SSA’s ongoing quality assurance reviews of ALJ decisions hamper not only its ability to ensure the accuracy of those reviews but also its ability to conduct future studies to help ensure the race neutrality of its programs. Furthermore, in the longer term, SSA’s ability to analyze racial differences in its decision making will diminish due to a lack of data on race and ethnicity. Finally, SSA’s complaint process for ALJs lacks mechanisms—such as summaries of key information on each complaint, an electronic filing system, and information on the race and ethnicity of complainants—that could help identify patterns of possible bias. SSA is not legally required to collect and monitor data to identify patterns of racial disparities, although doing so would help SSA to demonstrate the race neutrality of its programs and, if a pattern of racial bias is detected, develop a plan of action.

To address shortcomings in SSA’s ongoing quality assurance process for ALJs—which would improve SSA’s assessment of ALJ decision-making accuracy—we recommend the agency take the following steps:
conduct ongoing analyses to assess the representativeness of the sample used in its quality assurance review of ALJ decisions, including testing the statistical significance of differences in key characteristics between the cases included in the final sample and those that were not obtained;
include the results of this analysis in SSA’s annual and biennial reports on ALJ decision making; and
use the results to make appropriate changes, if needed, to its data collection or sampling design to ensure a representative sample.

To more readily identify patterns of misconduct, including racial bias, in complaints against ALJs, we also recommend that SSA’s Office of Hearings and Appeals:
adopt a form or some other method for summarizing key information on each ALJ complaint, including type of allegation;
use internal, administrative data, where available, to identify and document the race and/or ethnicity of complainants; and
place the complaint information in an electronic format, periodically analyze this information and report the results to the Commissioner, and develop action plans, if needed.

We provided a draft of this report to SSA for comment. SSA concurred fully with our recommendations and agreed to take steps to implement them. In its general comments, SSA expressed concern that the title of the report might foster the perception that its disability decision making, particularly at the OHA level, is suspect. Although we believe the draft report’s title accurately reflected the report’s content and recommendations, we have modified the title to ensure clarity. SSA also cited a number of reasons for the low percentage of cases included in its final sample, as well as steps it took to ensure the representativeness of its final sample. Nevertheless, we continue to believe that SSA could have performed additional analyses to provide more assurance of the sample’s representativeness. SSA also provided technical comments and clarifications, which we incorporated in the report, as appropriate.
SSA’s general comments and our response are printed in appendix I. We are sending copies of this report to the Social Security Administration, appropriate congressional committees, and other interested parties. We will also make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please call me or Carol Dawn Petersen, Assistant Director, at (202) 512-7215. Staff acknowledgments are listed in appendix II.

1. Although we believe the draft report’s title accurately reflected the content of our report and recommendations, we have modified the title to ensure clarity. This report and its recommendations are not restricted to a discussion of only two races. Although we referred to race and ethnicity in the second objective and the conclusion section of the draft reviewed by the Social Security Administration (SSA), we added the word "ethnicity" to the recommendations and the body of the report to further clarify this issue.

2. We added language to a note in the report regarding the litigation SSA mentions and noting that SSA has increased the number of Regional Chief Administrative Law Judges (ALJs) who are members of a racial minority group from 1 to 3 since 1992.

3. We agree with SSA that the low proportion of cases included in the final sample is due to several factors. In our report, we cited several reasons for cases not being included in the final sample that are significant in terms of the number of affected cases and that we believe have the potential for being nonrandom in nature. On the basis of a subsequent discussion with SSA officials, we added a note in our report that a small proportion of cases were excluded because they were later identified as being cases that were not intended to be included in the sample.

4. Although SSA noted that it used "holdout samples and cross modeling" to ensure that the group of cases sampled for this study was essentially free of sampling bias, SSA officials explained to us that these techniques were not used to test for the representativeness of the final sample.

5. We agree with SSA that, with large sample sizes, even small differences generally are statistically significant, and that such statistically significant differences are not always substantively significant. We do not believe, however, that a large sample is sufficient reason to forgo significance tests. Moreover, our report cited additional analyses that SSA could have performed to provide more assurance of the sample’s representativeness. Another approach that we do not cite in the report—but which SSA may wish to consider—is multivariate analysis of nonresponse. SSA performed bivariate comparisons of samples to determine whether they contained different proportions of cases with various characteristics. However, two samples can have very similar percentages of, for example, women and African Americans, but be very different with respect to the percentage of African American women. In contrast, multivariate analysis would allow SSA to look systematically and rigorously at different characteristics simultaneously.

In addition to those named above, the following individuals made significant contributions to this report: Erin Godtland, Michele Grgich, Stephen Langley, and Ann T. Walker, Education, Workforce and Income Security Issues; Doug Sloane and Shana Wallace, Applied Research and Methods; and Jessica Botsford and Richard Burkard, General Counsel.
The Social Security Administration (SSA) is responsible for administering the Social Security Disability Insurance and the Supplemental Security Income programs—the nation's two largest disability programs. SSA is required to administer its disability programs in a fair and unbiased manner. Nevertheless, the proportion of African American applicants allowed benefits has been historically lower than the proportion of white applicants. These allowance rate differences have occurred with respect to disability determinations made by state Disability Determination Service offices and in decisions made at the hearings level by Administrative Law Judges (ALJs). In response to GAO's 1992 report, SSA initiated an extensive study of racial disparities in ALJ decisions, but methodological weaknesses preclude conclusions being drawn from it. The study—the results of which were not published—set out to analyze a representative sample of cases to determine whether race significantly influenced disability decisions, while simultaneously controlling for other factors. SSA officials told GAO that, by 1998, they found no evidence that race significantly influenced ALJ decisions. However, GAO was unable to draw these same conclusions due to weaknesses in sampling and statistical methods evident in the limited documentation still available for GAO's review. Concurrent with SSA's study of racial disparities, SSA's Office of Hearings and Appeals (OHA) took some limited steps at the hearings level to address possible racial bias in ALJ decision-making. OHA instituted a mandatory diversity sensitivity training course for ALJs. Additionally, OHA increased its efforts to recruit minorities for ALJ and other legal positions by attending conferences for minority bar associations, where SSA distributed information and gave seminars on how to become an ALJ. Finally, in keeping with its commitment to provide fair and impartial hearings, SSA established a new process under the direction of OHA for the review, investigation, and resolution of claimant complaints about alleged bias or misconduct by ALJs.
Congress created USPS’s letter delivery monopoly as a revenue protection measure so that USPS can meet its universal mail service obligation, which includes service to all communities and uniform rates for some mail. As a practical matter, mail covered by this monopoly primarily consists of First-Class Mail and USPS Marketing Mail. Since USPS’s original establishment, its letter delivery monopoly has been both broadened and reduced at various times through statutory and regulatory changes that have redefined which types of correspondence and other materials are covered. For example, the enactment of the 2006 Postal Accountability and Enhancement Act (PAEA) resulted in several changes to USPS’s letter delivery monopoly, including establishing price and weight limits on mail covered by the monopoly. With regard to USPS’s mailbox monopoly, legislation enacted in 1934 prohibited the delivery of unstamped mail into mailboxes, essentially granting USPS exclusive access to mailboxes (the "mailbox monopoly"), a restriction that remains in place to this day. The U.S. Supreme Court upheld the constitutionality of the mailbox monopoly in 1981, stating that mailboxes are an essential part of national mail delivery and that postal customers agree to abide by laws and regulations that apply to their mailboxes in exchange for USPS agreeing to deliver and pick up mail. In addition, USPS regulations restrict which types of items may be placed upon, supported by, attached to, hung from, or inserted into a mailbox. The Postal Inspection Service, a part of USPS, is responsible for enforcing postal laws, including the restriction on placing mail without postage in mailboxes and laws that prohibit mail theft, obstruction of mail, and mail fraud. PAEA also set forth reporting requirements on USPS’s letter delivery and mailbox monopolies, its universal service obligation (USO), and laws that apply differently to USPS and its competitors. Pursuant to these requirements, the Postal Regulatory Commission (PRC) issued a 2008 report that, among other things, estimated the value of USPS’s letter delivery and mailbox monopolies and the cost of its USO. The Federal Trade Commission (FTC) issued a 2007 report that, among other things, estimated the financial impact of laws that apply differently to USPS and its competitors. FTC’s report concluded that, "from USPS’s perspective, its unique legal status likely provides it with a net competitive disadvantage versus private carriers." Since 2008, PRC has conducted an annual analysis to estimate the lost net revenues that USPS would incur if the monopolies were eliminated. To conduct this analysis, PRC assesses the value of the monopolies based on the volume of mail—and the associated net revenues—that USPS would be expected to lose if its monopolies were eliminated and new entrants were allowed to provide mail delivery. PRC also uses the same methodology to examine the impact of eliminating the mailbox monopoly alone. Most of the postal experts we interviewed said they consider PRC’s method of estimating the loss in mail volume and revenues that USPS would experience if the monopolies were lifted to be a reasonable approach to measuring their value. See appendix II for a discussion of the methodology PRC employs to develop its estimates of the value of USPS’s monopolies. PRC’s annual estimate of the value of USPS’s monopolies has increased substantially in recent years—from $3.28 billion in fiscal year 2012 to $5.45 billion in fiscal year 2015.
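Although the details of PRC’s models are beyond the scope of this discussion (see appendix II), the basic arithmetic implied by a lost-net-revenue estimate can be sketched simply. In the Python sketch below, every figure—the contestable volume, the share of that volume entrants are assumed to capture, and the per-piece revenue and attributable cost—is invented for illustration and is not a PRC input or estimate.

```python
# Invented figures for illustration only; not PRC inputs or results.
contestable_volume = 12.0e9             # pieces per year assumed open to competition
capture_rate = 0.60                     # share of that volume entrants are assumed to win
avg_revenue_per_piece = 0.49            # dollars of revenue per piece
avg_attributable_cost_per_piece = 0.22  # dollars of attributable cost per piece

lost_pieces = contestable_volume * capture_rate
lost_net_revenue = lost_pieces * (avg_revenue_per_piece - avg_attributable_cost_per_piece)
print(f"Illustrative lost net revenue: ${lost_net_revenue / 1e9:.2f} billion per year")
```

The point of the sketch is only that the estimate is driven by two quantities: how much volume entrants could capture and the net revenue (revenue less attributable cost) USPS earns on that volume.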
The value of the mailbox monopoly alone rose similarly over the same period, from $700 million in fiscal year 2012 to $1.03 billion in fiscal year 2015. Moving forward, although USPS’s overall mail volumes are declining, PRC staff told us that they believe the volume of mail that entrants would be likely to compete for successfully if the monopolies were eliminated will increase, and therefore the value of the monopolies is expected to rise. Table 1 provides PRC’s annual estimates of the value of USPS’s monopolies since 2007. In addition to PRC’s efforts to estimate the value of USPS’s monopolies, another study recently examined other financial implications of USPS’s exclusive access to mailboxes. This analysis focused on the benefit that USPS’s exclusive access to mailboxes provides, relative to competitors that are not allowed to place items in mailboxes. In particular, the study examined the extra costs that are involved in door delivery—the main option available to USPS’s competitors—compared to delivery to mailboxes, especially when mailboxes are located at the curbside, in cluster boxes, or in centralized mailrooms in large residential and commercial buildings. Using the average costs of door delivery compared to average delivery costs to these other mailbox locations that are less costly to access, the study estimated that USPS’s costs would have been $14.9 billion greater in fiscal year 2013 if it had to deliver all mail and packages to the door, as its competitors generally do. It is important to note that this work did not measure the value of USPS’s monopolies as the reduction in USPS’s net revenues if they were to be eliminated; as such, this work is not comparable to PRC’s analysis or estimates. Narrowing or eliminating ("relaxing") USPS’s letter delivery and mailbox monopolies would likely have a number of varied effects. While postal stakeholders, experts, and USPS told us that relaxing these monopolies could decrease USPS’s revenues and threaten its ability to continue providing universal service as currently implemented, experts stated doing so could also lead to greater efficiencies and innovation. The experiences of selected foreign postal operators and regulators we contacted illustrate that, while some postal operators saw decreases in revenue and losses in market share, some also reported increases in competition and efficiency. Stakeholders, experts, and the experiences of selected foreign posts all suggest that USPS’s monopolies and other postal policies are interdependent—particularly the specifics around universal service—and therefore should be considered in tandem. We have previously reported that, given the changing use of mail, many of the statutory and regulatory elements that shape USPS’s structure and service—as well as the broader delivery market—might be relevant to reconsider. Stakeholders, experts, and USPS told us that narrowing or eliminating the existing letter delivery and mailbox monopolies could reduce USPS’s revenues and threaten its ability to provide the current level of universal service. Stakeholders: Eight of the 16 stakeholders who responded to our questionnaire stated that the letter delivery monopoly is needed to protect USPS’s ability to continue universal service at affordable rates.
In particular, some of these stakeholders said that, if the letter monopoly were narrowed or eliminated, new entrants would be more likely to serve profitable areas, such as cities, and leave less profitable rural areas to USPS—assuming that it were to remain the nation’s universal service provider. One postal management organization told us that removing the letter delivery monopoly would result in a devastating shortfall between USPS revenues and its costs related to meeting its universal service obligation. Experts: Likewise, eight of the nine experts we interviewed agreed that relaxing USPS’s monopolies would likely result in a greater financial burden on USPS, and six of them said that doing so could lead to the need for reduced provision of universal service. Despite this, seven of the nine experts support pursuing such changes to USPS’s letter delivery monopoly. Two of the experts favor narrowing—but not fully eliminating—the letter delivery monopoly, and five favor completely eliminating both monopolies. USPS and PRC: USPS stated that narrowing or eliminating the letter delivery monopoly would place significant mail volume and related revenues at risk, compromising its ability to provide high-quality, affordable universal service in a financially self-sufficient manner. Further, USPS stated that narrowing or eliminating its letter delivery monopoly would result in competitors diverting the most profitable mail volume, which would significantly accelerate trends that are already very challenging to its financial sustainability. According to USPS, additional decline in mail volumes would pose a fundamental challenge because economies of scale are crucial to its ability to provide a high level of universal service at affordable rates. USPS also told us that there is no basis to conclude that eliminating the letter monopoly is necessary for it to promote efficiency, quality service, and innovative postal products. As in the past, USPS strongly opposes any changes to its letter delivery and mailbox monopolies. While PRC has not taken a position on whether USPS’s letter delivery monopoly should be narrowed or eliminated, it reported in 2008 that "…under the current system, the (letter delivery) monopoly is maintained to offset the costs placed on [it] by the USO." Stakeholders: Nine of the 16 stakeholders believe that narrowing or eliminating USPS’s mailbox monopoly could decrease mail security, and they also oppose changes to this monopoly. One mailer association said that the mailbox monopoly provides USPS customers with assurance that their mail is secure. This stakeholder also said that USPS’s exclusive access to mailboxes helps facilitate investigations of mail theft and other mail crimes. One of the 16 stakeholders who responded to our questionnaire favors relaxing USPS’s mailbox monopoly and stated that it is unnecessary to protect the security of the mailbox, as criminal and civil law punish theft and trespass. Experts: Although seven of the nine experts we interviewed support relaxing this monopoly, four experts cited increased concerns about mail security if the mailbox monopoly no longer restricted access by USPS’s competitors. Two experts stated that having multiple entities with access to the mailbox could cause problems such as cluttering.
One expert opposed to relaxing USPS’s mailbox monopoly noted that much of the increased delivery to mailboxes—were access to be broadened—would be advertising mail and suggested that it would be questionable to relax this monopoly to primarily facilitate advertising distribution. USPS and PRC: USPS stated that relaxing its mailbox monopoly could negatively affect mail safety and security and that doing so would result in both decreased efficiencies and service performance, and threaten the financing of its universal service obligation. First, USPS stated that removing this monopoly could increase opportunities for criminal activity involving mail and complicate the enforcement of postal laws. USPS added that, in addition to direct harm to consumers and employees who could be harmed by dangerous substances in mailboxes, USPS and the mailing industry could suffer from the resulting lack of public trust in the mail as a secure communication medium. Second, according to USPS, its efficiency and service performance could also be adversely impacted, particularly regarding the delivery of market-dominant mail volumes. For example, increased mailbox clutter due to the presence of items left there by third-parties would mean that USPS letter carriers would have to spend time determining which items to take with them (collection mail) and which to leave behind (items delivered by alternative delivery providers)—or may not be able to fit mail into a mailbox at all. Third, USPS advised that allowing third-party mailbox deliveries could allow competing providers to skim relatively profitable mail volume away from USPS, leaving it with less revenue to finance the costs of its universal service obligation. In regard to the mailbox monopoly, PRC cited its 2008 report, which stated that past public proceedings indicate a broad spectrum of support for its continuation and cited disadvantages to USPS if this monopoly were to be eliminated, including difficulty investigating mail fraud, maintaining mail security, and efficiently collecting mail from cluttered mailboxes. Seven of the nine experts we interviewed said that relaxing USPS’s monopolies could create competition in the postal market. Additionally, seven of these experts also said relaxing the monopolies could induce USPS to become more efficient and increase innovation across the postal market. Although experts offered mixed views regarding how much actual entry into the postal delivery market would occur if the letter delivery monopoly were withdrawn, they generally stated that the prospect of competitive pressure would stimulate USPS to be more efficient through both cost-cutting and general restructuring of the organization. One expert told us that such changes would also benefit the economy as a whole. Similarly, some of the experts we interviewed also said that relaxing USPS’s mailbox monopoly would have beneficial effects. With regard to USPS’s mailbox monopoly, four of the nine experts told us that its elimination would stimulate USPS to be more efficient and six experts said it would stimulate more innovation. For example, one expert noted that the characteristics of mailboxes have changed little in decades and suggested that opening them to competitive providers might bring innovation to the mailbox itself. A number of countries have narrowed or eliminated their postal monopolies over the past two decades as part of overall postal reform that expanded the commercial freedom of their postal services. 
In responding to our questionnaire requesting information on the effects experienced as a result of modifications to their respective postal monopolies, a number of the selected foreign posts and regulators told us that their respective changes resulted in losses of market share, mail volume, and revenues for the incumbent carrier, as well as varying changes in postal rates. Officials from all six of the countries we contacted noted increases in competition; some also cited increases in efficiency as well as improved customer service and performance. Effect on Market Share, Mail Volumes, and Revenue: Officials from Italy, Germany, and the United Kingdom reported that the liberalization of their postal markets resulted in the incumbent carrier experiencing a loss of market share, volumes, and revenues generated by items previously covered by monopoly. In Germany, Deutsche Post reported that it has lost about 12 percent of market share compared with when it had a full monopoly. In the United Kingdom, Royal Mail and its regulator both reported that, although its market share for letters was minimally impacted, it has lost substantial business from large senders of bulk mail. Officials from Japan, Sweden, and France were unable to confirm whether the liberalization of their countries’ respective postal markets resulted in such effects. Effect on Postal Rates: Officials from Italy and the United Kingdom reported that liberalization of their respective postal markets resulted in changes to some postal rates. Italian officials reported that some of Poste Italiane’s rates increased, but its business rates either remained stable or were slightly reduced. In the United Kingdom, Ofcom officials told us that a new regulatory framework was put in place which allowed Royal Mail to rebalance its prices; Royal Mail then increased the price of single piece stamped items substantially in many cases, while bulk mail prices increased to a lesser extent. Officials from Germany, Sweden, and Japan stated that the liberalization of their postal markets did not result in increased prices of services provided by their posts. Many foreign posts also reported that changes to their postal monopolies resulted in increases in competition and efficiency, as well as improvements in customer service and performance. Specifically: Increased Competition: The post and/or regulator from all six of the countries that we contacted—Sweden, Italy, Japan, Germany, France, and the United Kingdom—said the liberalization of their postal markets resulted in increased competition. According to officials from Italy, competition in the Italian mailing industry increased significantly after liberalization, especially for bulk mail services. French officials said that the number of providers has increased; however, they added that the increase in competition has not been as great as some stakeholders expected and La Poste maintains a dominant market position. Effects on Efficiency of the Collection and Delivery of Mail: Officials from three countries—Germany, Sweden, and the United Kingdom— reported that the liberalization of their postal markets, which was done in tandem with other postal reforms, resulted in increased efficiency of the collection and delivery of mail. For example, in the United Kingdom, officials from Royal Mail stated that it had to modernize its operations to successfully compete in the current postal market. 
However, officials from Italy stated that liberalization did not lead to greater efficiencies, as increased competition resulted in a large volume decline for Poste Italiane, which was forced to also maintain a heavily fixed cost structure to meet its universal service obligation. Effects on Customer Service and Performance: Officials from three countries—Germany, Sweden, and the United Kingdom—stated that the liberalization of their postal market resulted in improved customer service and performance of their posts. According to German officials, the speed and reliability of their nation’s postal services has increased since liberalization. Officials from Royal Mail stated that while it is meeting service targets, the changes to the way in which services are delivered in the United Kingdom (e.g., post delivered later in day or delivery offices relocating) have not always been popular with customers; both residential and business consumers have reported slightly higher levels of satisfaction with other postal operators in recent years. Effect on Universal Service Obligation: Officials from three countries— Germany, Sweden, and the United Kingdom—stated that liberalization of their postal markets did not negatively impact their universal service obligation for postal retail and delivery services. According to German officials, no changes to its provision of universal service or financial support were required. However, officials from Poste Italiane stated that its competitors are free to enter whichever markets they like (i.e., potentially implement cream-skimming policies), a situation that has put the sustainability of their ability to provide universal service at risk. The experiences of these foreign posts illustrate the variety of effects of making changes to existing postal monopolies. However, it is also important to remember the context in which the posts operate differs greatly from one country to the next—each country is in a unique situation and uses specific measures to address its challenges and opportunities. When compared with other countries, the United States lies in the higher range of universal service obligation scope requirements, especially for quality standards including frequency of delivery and coverage. USPS officials said that some of the effects of liberalization among foreign posts have no bearing in the United States, stating that some of the efficiency advances that other countries have seen in recent years were made long ago in this country. Although we did not directly evaluate the implications of each liberalization experience or how it may apply to the United States, these experiences nonetheless provide important context for the consideration of any changes to USPS’s monopolies. Postal stakeholders and experts—as well as both USPS and PRC— suggested that any consideration of changes to USPS’s letter delivery and mailbox monopolies should take place within the context of broader U.S. postal policy. Stakeholders: Although eleven of 16 postal stakeholders oppose the idea of narrowing or eliminating USPS’s monopolies, some noted that additional postal policy issues should be included in discussion of potential changes to the monopolies. For example, one stakeholder who opposes modifying USPS’s monopolies said that, if they were to be changed, policies to ensure universal delivery service would need to be adopted. 
Another stakeholder stated that no change in law or regulation could offset the adverse consequences that would come from narrowing or eliminating USPS's letter delivery monopoly, but suggested that the financial burden of requiring USPS to prefund retiree health benefits be eliminated. Another postal stakeholder explained that improved security and more thorough background checks would need to be required for businesses that would have access to the mailbox, in addition to laws to which non-USPS firms must adhere when delivering to currently prohibited mailboxes.

Experts: All nine of the experts we interviewed said that analysis of possible changes to USPS's monopolies should be conducted in tandem with other postal policy considerations. Experts cited issues such as USPS's universal service obligation, postal pricing flexibility, and the fair application of policies and rules across providers as important for policymakers to consider.

USPS and PRC: USPS and PRC also stressed the importance of considering any potential changes to USPS's monopolies within the context of the nation's broader postal policy. USPS officials told us that any modification of its monopolies would require Congress to make significant policy decisions regarding how to ensure that existing postal services could still be provided in their absence. USPS officials also stated that many foreign posts that have liberalized have not done so in isolation, but rather along with other reforms including government aid and measures to afford the incumbent carriers greater commercial freedom to manage their businesses. PRC staff told us that the scope of the monopoly and the cost of providing universal service are interdependent, such that changes in one alter the value or cost of the other, adding that any changes to the monopoly should carefully consider the relationship of the monopoly and the universal service obligation.

Foreign Posts and Regulators: Officials from all six of the countries we contacted told us that, as their respective postal markets were opened to competition from new entrants, the concurrent implementation of other postal policies helped to manage their transition away from their incumbent providers' monopolies. For example, officials from the United Kingdom stated that the Postal Services Act 2011 privatized Royal Mail and gave it complete commercial freedom to raise funds for modernization; Royal Mail was also relieved of its pension deficit of 10 billion pounds. In Italy, policymakers decreased the scope of Poste Italiane's universal service obligation to enable it to maintain its obligations as competition increased in the market. According to officials from the postal regulator in Japan, Japan Post was granted greater freedom to establish prices for first- and second-class mail items as part of the process of the liberalization of the Japanese postal market.

We have previously reported that action by Congress and USPS is urgently needed on a number of difficult issues to facilitate progress toward USPS's financial viability. The significant deterioration in USPS's financial condition, its increasing debt, and the grim forecast for declining overall mail volumes over the next decade led GAO to add USPS's financial condition to its High-Risk List in 2009. Moreover, the financial condition of USPS is but one outcome of the changing landscape of the postal sector.
We reported in 2010 that, given the changing use of mail, many of the statutory and regulatory elements that shape USPS's structure and service—as well as the broader delivery market—might warrant reconsideration. These include (1) the appropriate universal service obligation, in light of fundamental changes in the use of mail; (2) whether USPS requires a monopoly over delivery of certain types of letter mail and access to mailboxes to finance—in part or wholly—its universal postal service obligation; and (3) whether USPS should be solely responsible for providing universal postal service, or whether that responsibility should be shared with the private sector. Such considerations may assist Congress, USPS, and other postal stakeholders as they work not only on issues related to the letter delivery and mailbox monopolies, but also on addressing USPS's financial difficulties and defining its future role in an evolving postal marketplace.

We have reported that, as long as it remains a federal entity protected by the postal monopoly, USPS's ability to compete with the private sector should be balanced with appropriate oversight and adequate legal standards to ensure fair competition. In this regard, some postal stakeholders have maintained that USPS has competitive advantages because it is exempt from some laws governing the private sector. On the other hand, USPS has reported that it is subject to statutory requirements to which its private competitors are not subject. Given these differences, it is important to understand the constraints and limitations that would complicate the process of arriving at estimates that would be useful for policymakers. The key considerations that would need to be addressed to estimate the value of USPS's financial advantages and burdens resulting from laws that apply differently to USPS and its private competitors can be organized into four broad categories: (1) objectives to study, (2) scope to be covered, (3) methodology to be used, and (4) reporting.

Consistent with government auditing standards, it would be important for anyone examining the laws that apply differently to USPS and its private competitors to carefully define the objectives of the study. As explained by those standards, the objectives for any future study could be framed as questions that the organization conducting the research would seek to answer. Developing these questions is a critical step, as their answers will guide decisions on what specific information will be needed for reporting, which in turn will identify the parameters of the study's scope and methodology and lay the framework for the context in which the findings are presented. Objectives for such a study could be defined as questions related to the following:

What are the financial effects of laws that apply differently to USPS and its private competitors on USPS's net income?

What are the financial effects of laws that apply differently to USPS and its private competitors on USPS's competitive mail products?

What are the financial effects of laws that apply differently to USPS and its private competitors on both USPS and its private competitors?

As these questions demonstrate, a study's objectives can become increasingly complex, requiring more and varied sources of information.
Defining the scope of any study is critical because, according to the government auditing standards, the scope defines aspects of the subject matter to be studied and other key data collection considerations—such as the period of time reviewed and the type of data to be collected, among other things. Scoping considerations for this type of study would include, for example: (1) which laws to include, (2) whether to include data already studied or collect new data, (3) how to address the difficulty in quantifying some effects, and (4) how to handle differing stakeholder views on scope.

Many Laws Apply Differently to USPS Than to Its Competitors

Many laws apply differently to USPS and its competitors (see table 2). FTC found that some laws have positive financial effects on USPS, while others have negative financial effects. These laws also affect private competitors; for example, USPS's letter and mailbox monopolies limit the types of items competitors can deliver and where they can leave items. The number and type of laws to be included in such a study would affect its approach and eventual results. For example, significant time and resources would be required for a study to estimate the financial effects for all laws that apply differently to USPS and its competitors—as well as the net effect of these legal differences. If results were needed more immediately—or if financial resources were limited—decisions would need to be made to narrow the scope. Deciding which laws to include and analyzing their impact on USPS's operations is further complicated because, while some laws appear to provide USPS with financial advantages, whether they actually do so may depend on how they are interpreted and applied in practice. For example, FTC's 2007 report stated that, although some jurisdictions refrain from ticketing its vehicles, USPS has agreed to pay parking fines in other jurisdictions; it is unclear whether USPS vehicles are exempt from being ticketed. In another area, the FTC report found that USPS benefits from "disparate customs treatment," but did not explain the reasoning behind this finding. Further, pension and retiree health benefit assets for postal retirees are held in funds that by law are required to be invested solely in Treasury securities, which are backed by the full faith and credit of the federal government. In contrast, private companies can and do invest retirement funds in more diversified portfolios.

Use of Findings from Other Studies

Another consideration is what information to include from previous studies. One would have to decide whether to use PRC's annual estimates and reports on the financial effects on USPS of laws related to its universal service and public service costs and the value of USPS's monopolies. In postal labor negotiations, USPS has presented the results of studies on the comparability of postal wages and benefits with the private sector; however, the results have been contested by major postal labor unions involved in collective bargaining.

Challenges in Quantifying Some Effects

In some cases, effects may be challenging to quantify—in part because they have not been previously quantified, and in part due to the complexity of developing estimates. For example, FTC's 2007 report, which estimated the effects of laws on USPS wages, did not attempt to estimate the financial effects of every law.
FTC’s report stated it would have been difficult to quantify the effects of some laws, such as USPS’s ability to obtain property through eminent domain and disparate customs treatment for USPS and its competitors. Overcoming these challenges may require additional time and resources, as well as the acceptance of risk beyond the control of the team conducting the study, such as lack of available data. Addressing Differing Stakeholder Views on Scope Key postal stakeholders hold differing views on what the scope of a potential study might be. For example, although National Association of Letter Carriers (NALC) officials told us that NALC does not see any need for an update to the 2007 FTC report, they added that such an update should not attempt to study the comparability of USPS wages to the private sector. NALC officials explained that postal unions have negotiated and debated with USPS over the definition of “comparability” and “comparable levels of work.” NALC officials added that there is no one objective or scientific definition of these terms, which must be negotiated by the parties in the face of changing circumstances and debate, and stated that the collective bargaining table is the appropriate venue for this debate. In contrast, USPS officials told us that any future study should estimate the comparability of both USPS wages and benefits to the private sector. Government auditing standards state that a study’s methodology describes the nature and extent of procedures for gathering and analyzing evidence needed to address its objectives, which should be sufficient and appropriate to support findings to reduce the risk of improper conclusions. When deciding upon a study’s methodology, one would need to address such challenges as (1) lack of consensus on methodology options, (2) constraints on time and resources, and (3) limitations on publicly available data and supporting documentation. The decisions made about how to address these challenges would help determine the usefulness of the estimates to policymakers. Lack of Consensus of Methodology Options No consensus exists on the most appropriate methodology that should be used to estimate the financial effects of certain laws that apply differently to USPS and its competitors. For example, USPS officials told us that there is no generally accepted consensus on how to measure the comparability of postal wages and benefits and said it has used numerous methodologies over the years to make estimates in this area. A NALC official explained that USPS and postal unions have long disagreed on the definition of “comparability” and “comparable” levels of work. Depending on how a study approaches this issue, it could affect decisions about methodology and data. USPS has preferred to define comparability as the level of USPS wages and benefits for different jobs performed by USPS relative to similar jobs performed by companies in the entire private sector (e.g., similar jobs performed by USPS competitors and mail processing jobs performed by private companies in the mailing industry), while the postal unions have preferred to compare USPS with its large competitors such as United Parcel Service (UPS) and FedEx. Potential Constraints on Time and Resources Some methodologies would require considerable time and resources to collect the necessary data, and any constraints in these areas may influence the choice of methodology. 
A study estimating the financial effects of all laws that apply differently to USPS and its private competitors would require significant time and resources; if estimates were desired in a shorter time frame—or if financial resources were limited—tradeoffs would be required. For example, USPS officials told us that, when estimating the value of USPS's exemption from property and real estate taxes for USPS-owned properties, the most appropriate data would be the current assessed value of each USPS-owned property and the current applicable tax rate(s). However, they noted that USPS does not have data on the assessed values of its owned properties. They estimated that collecting data on a valid sample of USPS-owned properties would require specialists such as tax assessors and appraisers, and cost $7.6 million to $9 million—an amount they deemed to be cost-prohibitive. With respect to estimating wage and benefit comparability, USPS officials stated that the different types of work performed by postal employees in the various bargaining units do not lend themselves to a one-size-fits-all approach to private sector comparability or a single answer for all USPS employees. They said that USPS has generally used multiple methodologies to estimate wage and benefit comparability in each collective bargaining proceeding. They also noted that some of these methodologies require a high degree of expertise and/or subject matter knowledge, such as expertise in statistical or regression analyses, as well as specialized knowledge and experience in employee compensation and wage determination. These factors have implications for the time and resources that would be required to estimate wage and benefit comparability. Additionally, USPS officials said that collecting data on the value of USPS's exemption from vehicle registration fees would require a review of state registration fees for each state and vehicle type, as well as other information.

Limitations on Public Data and Supporting Documentation

Some USPS data and documentation that could be useful for a future study may not be publicly available, either because USPS has not been asked to disclose it in public proceedings or because it is considered exempt from public disclosure. For example, USPS is not required to disclose information prepared for use in connection with the negotiation of collective bargaining agreements, which would include USPS studies of wage and benefits comparability prepared for such negotiations. While data and documentation relating to USPS's competitive products could be instrumental to evaluating any specific financial effects of legal advantages and disadvantages on competitive products, USPS often classifies this information as proprietary and does not disclose it publicly. In addition, according to USPS officials, certain data may need to be collected from USPS's private competitors, who may consider such data to be proprietary and exempt from public disclosure, such as trade secrets. Thus, it is unclear whether these data and their supporting documentation would be available to certain organizations conducting such a study and, if so, under what circumstances and with what limitations they could be discussed in a publicly available report.

In order for estimates to be as useful as possible to readers, the report would need to include sufficient information to allow for informed discussion about the results.
Government auditing standards state that a report should disclose the objectives, scope, and methodology—as well as the results, including findings and conclusions. According to these standards, readers need this information to understand the study's purpose, the nature and extent of the research, context and perspective on what is reported, and any significant limitations. The standards also state that a report should describe the scope of the work performed and any limitations, including issues that would be relevant to readers, so they can reasonably interpret the findings, conclusions, and recommendations without being misled. Further, according to government auditing standards, a report should discuss any significant constraints imposed on the approach by information limitations or scope impairments, which might include data that were unavailable due to restrictions on time or resources. This discussion would help readers understand how much confidence to place in the findings. To put the importance of such reporting into context, USPS and many postal stakeholders have made various proposals over the years to change laws that apply differently to USPS and its competitors. A wide range of reasons have been given in support of such proposals, such as enhancing USPS's financial position, assuring the continuation of universal postal service or revising the universal service obligation to better meet changing customer needs, and assuring fair competition. Regardless of the reasons behind any given proposal, estimating the potential financial effects of specific laws could provide Congress and other stakeholders with information that could be used to consider possible changes to these laws. Estimates of the effects of certain laws on USPS, specifically on its competitive products, and on these products relative to its competitors could differ in nature and time frame from the information regularly provided by the Congressional Budget Office (CBO) on the estimated financial effects of proposed laws on federal government revenues and expenses. Considering the longstanding disagreements over controversial issues in this area, it would be particularly helpful for such a study to be conducted by an independent party free of conflicts of interest. Any party tasked with such an undertaking would require sufficient time and resources to produce estimates precise enough to be useful for decision makers and to facilitate broad stakeholder acceptance and use of the results.

We provided a draft of this report to USPS, PRC, FTC, and the U.S. Department of Justice (DOJ) for review and comment prior to finalizing the report. FTC and DOJ did not have any comments on the report. We received written comments from USPS, which are reproduced in appendix III. USPS and PRC separately provided technical comments, which we incorporated as appropriate. In its written response, USPS stated that it appreciated our effort to develop an understanding of its monopolies in supporting secure and affordable universal postal service and other postal policy goals. However, USPS disagreed with aspects of our discussion of (1) the impact of relaxation of foreign postal monopolies, (2) PRC's estimates of the value of USPS's letter delivery and mailbox monopolies, (3) the 2015 paper by Robert Shapiro, and (4) selected laws that apply differently to USPS and its competitors.
In its comments, USPS stated that it appreciated our efforts to obtain views from a broad cross-section of postal stakeholders, although it stated that it does not agree with all of the perspectives expressed. USPS emphasized that, while some believe that narrowing or eliminating its monopolies might have some beneficial effects, it believes doing so would divert revenue away from USPS, compounding its financial pressures. USPS also wrote that, in the current financial environment, it would not be responsible for policymakers to implement such changes while maintaining the universal service obligation in its current form. Our report acknowledges these points and states that postal stakeholders and experts agree that actions to narrow or eliminate USPS's monopolies could reduce its revenues, placing a greater financial burden on USPS and threatening its ability to provide universal service.

Regarding the relaxation of foreign postal monopolies, USPS emphasized that other reforms were made in tandem with changes to postal monopolies in other countries, including government aid and changes to afford the incumbent carrier greater commercial freedom to manage its businesses. Although our draft report stated that we found broad consensus among postal stakeholders and experts—as well as both USPS and PRC—that any potential changes to USPS's monopolies should be considered in tandem with broader postal policies, we updated our report to include additional context in response to USPS's comments. USPS took issue with the information provided by officials from foreign posts and regulators that the narrowing or elimination of their monopolies resulted in increased efficiency, and noted that many other postal policy changes were undertaken at the same time, as discussed above. Moreover, USPS wrote that the effects of postal liberalization in other countries have no bearing upon USPS because some of the efficiency advances that other countries have seen in recent years were realized many years ago in the United States. Our report states that it is important to remember that the context in which the posts operate differs greatly from one country to the next, and that each country is in a unique situation and uses specific measures to address its challenges and opportunities. At the same time, we state that although we did not directly evaluate the implications of each liberalization experience or how it may apply to the United States, the experiences nonetheless provide important context for the consideration of any changes to USPS's monopolies. For additional context, we also updated our report to include USPS's perspective on this issue.

In its letter, USPS also suggested further clarification regarding our discussion of certain aspects of PRC's estimates of the value of USPS's monopolies. In response to USPS's suggested clarification of PRC's estimates presented in table 1, we added a note to make it clear that, while PRC's estimates presented in the table represent the effect on USPS's net income if its mailbox monopoly or both monopolies were to be eliminated, it is not the case that subtracting the estimated value of the mailbox monopoly from the estimated value of the combined letter delivery and mailbox monopolies provides the value of the letter delivery monopoly alone. USPS also expressed concern that the discussion of the 2015 paper by Robert Shapiro could wrongly imply equivalent credibility to the work performed by PRC. We recognize this concern.
Our report states that Shapiro’s work did not measure the value of the monopolies as the reduction in USPS’s net revenues if they were to be eliminated, but rather focuses on other financial implications of the mailbox monopoly. We further state that the Shapiro work is not comparable to that conducted by PRC. Regarding our summary of selected laws that apply differently to USPS and its competitors, USPS found it to be relatively thorough, but offered a series of technical comments and suggested additional laws that could be added to the summary. We incorporated technical comments to this section, as appropriate. However, we did not include any additional laws, as our intent was not to provide an exhaustive summary of laws, taxes, and fees that apply differently to USPS and its private competitors. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Postmaster General, the Chairman of PRC, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or rectanusl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff making key contributions to this report are listed in appendix IV. Our objectives were to assess: (1) what is known about the value of the U.S. Postal Service’s (USPS) letter delivery and mailbox monopolies, (2) views on the potential effects of narrowing or eliminating these monopolies; and (3) considerations that would need to be addressed to estimate the value of USPS’s financial advantages and burdens resulting from laws that apply differently to USPS and its private competitors. To determine what is known about the value of USPS’s letter delivery and mailbox monopolies, we reviewed relevant literature, and discussed existing estimates with postal experts. Specifically, we reviewed the Postal Regulatory Commission’s (PRC) 2016 annual report, which includes an estimate of the value of the postal monopolies, as well as PRC’s original 2008 study that described the effort’s methodology in depth. We also reviewed several other studies conducted by other researchers soon after the 2006 enactment of The Postal Accountability and Enhancement Act (PAEA), studies that either described methods for analyses or actually conducted analyses of the value of USPS’s monopolies. In addition to reviewing PRC’s studies, we also met with PRC staff to further discuss aspects of its analyses. Further, we reviewed a study authored by Dr. Robert Shapiro entitled The Basis and Extent of the Monopoly Rights and Subsidies Claimed by the United States Postal Service, which discusses other financial implications of USPS’s mailbox monopoly. Finally, we discussed both the analysis conducted by PRC and Dr. Shapiro’s study with the nine postal experts we interviewed to obtain their views on the methods and findings of these studies. To identify the potential impacts of narrowing or eliminating USPS’s letter delivery and mailbox monopolies, we took a series of steps. 
First, we created and distributed a questionnaire to 21 postal stakeholder organizations ("stakeholders") that (1) testified and filed comments during the public comment solicitations as part of the work of the 2003 President's Commission on the USPS; (2) provided testimony or comments to PRC on universal service and the postal monopoly; or (3) were previously surveyed by GAO during audit work conducted in support of the prior GAO report titled U.S. Postal Service: Strategies and Options to Facilitate Progress toward Financial Viability. We initially selected and contacted 25 organizations—including postal unions, management associations, mailing associations, and private companies—to which we planned to distribute the questionnaire. Subsequently, GAO learned that two of these stakeholders were no longer in existence and that two of them had merged into a single organization, resulting in a list of 21 stakeholders who received a copy of the questionnaire. GAO pretested this questionnaire with two stakeholders to ensure that it was clearly worded, unbiased, and comprehensive and that terminology was used correctly, and made changes to the content of the questions in response. GAO received responses from 16 of the 21 stakeholders—a response rate of 76.2 percent—11 of which were completed questionnaires and 5 of which consisted of correspondence answering some or all of the questions posed. We also provided this questionnaire to USPS and PRC, both of whom completed it.

In addition, we conducted structured interviews with nine postal experts to obtain their views on the potential impacts of narrowing or eliminating USPS's monopolies. Many of these experts were identified as postal consultants and individuals who worked on or commented on PRC's 2008 report on universal service and the postal monopoly; others were recommended by stakeholders or agency officials. We determined through a literature search and prior audit work that each expert has substantial knowledge and experience in postal issues. We created and pretested an interview guide with two experts to ensure that questions were accurate, clear, and unbiased, and made changes in response. We analyzed the responses received from postal stakeholders and experts and summarized both their reasons for favoring, opposing, or holding no position on whether USPS's monopolies should be narrowed or eliminated and what they believed the potential effects of doing so might be. While the responses from the judgmentally-selected group of postal stakeholders and experts are not generalizable, they provide a wide range of views among those who have previously expressed views on the postal monopoly and related policy issues—and represent some of the key groups with whom Congress interacts when developing postal policy.

Further, to determine the experiences of foreign postal administrations in selected industrialized countries that have narrowed or eliminated their postal monopolies, we collected information from the top 20 major postal markets, as determined by the Universal Postal Union (UPU), a specialized agency of the United Nations that coordinates international postal policies. Using this information, we identified countries (1) with the largest global shares of postal revenue and domestic mail; (2) with developed economies (classified by their level of development as measured by per capita gross national income); and (3) that have fully liberalized their postal monopoly laws.
Based on these criteria, we selected six countries–Germany, France, Japan, Italy, Sweden, and the United Kingdom. Using a list of contacts provided by PRC officials, we sent requests to both the postal administration and regulator in each of these six nations to obtain information regarding their respective liberalization experiences. While the responses from foreign postal operators and their regulators in these judgmentally-selected countries are not generalizable, they provide information and perspectives that complement the views of experts and American postal stakeholders on the potential effects of narrowing or eliminating the USPS monopolies.

To identify considerations that would need to be addressed to estimate the value of USPS's financial advantages and burdens resulting from laws that apply differently to USPS and its private competitors, we reviewed criteria from the Government Auditing Standards. These standards provide a framework for performing high-quality performance audits, including establishing an overall approach to obtain reasonable assurance that the evidence is sufficient and appropriate to support the findings. In addition, we reviewed relevant laws and a 2007 Federal Trade Commission (FTC) report and obtained information from FTC on how some of its estimates were compiled. We identified and focused upon four of the largest financial effects of USPS's legal status estimated in the 2007 FTC report: the comparability of USPS and private sector wages and USPS's exemptions from property and real estate taxes, sales and use taxes, and vehicle registration fees, and obtained information on how these estimates were compiled. We contacted USPS and four postal industry stakeholders who had submitted comments in the FTC proceeding leading up to its 2007 report to obtain their opinions on how these estimates could be updated, considering the factors we had identified. These five stakeholders represented different sectors, including USPS, some of its private competitors, and some of its postal labor unions. We received responses from USPS and the National Association of Letter Carriers (NALC) on considerations for estimating financial advantages and disadvantages of USPS's unique legal status. Our analysis was limited to identifying key considerations, some of which included different options, for how to conduct research to estimate the financial effects of certain aspects of USPS's legal status; we did not evaluate these options or recommend which should be pursued if a new study were to be conducted.

We conducted this performance audit from December 2015 to June 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

This appendix describes the method used by PRC to estimate the value of the U.S. Postal Service's (USPS) letter delivery and mailbox monopolies. In particular, this appendix discusses (1) why economic value to USPS may derive from the statutory monopolies, (2) the method that PRC employs to estimate the value of the monopolies, and (3) PRC's findings. Statutory monopolies are sometimes granted when the provision of a good involves certain cost conditions.
In particular, in a case where one large firm can produce a product for the entire market more cheaply, on average, than a set of smaller firms, monopoly is sometimes viewed as a preferred market structure, and a legal limitation on entry may be used to ensure that the market remains monopolistic. In the case of postal service, for example, USPS's large and interconnected operational network leverages both economies of scale and scope and, as such, it is unlikely that it would be profitable for a new firm to compete against USPS for all forms of mail or in all locations. However, certain segments of the market may be more profitable to enter, due to lower costs of service, higher potential revenues, or both. If entry is open, competitors may be able to compete for business in these profitable submarkets, leaving the incumbent carrier with higher average costs to serve the remaining market. The loss suffered by the incumbent from the elimination of a statutory monopoly can be viewed as the "value" that the legal monopoly affords the incumbent firm—in this case, USPS.

PRC's method for estimating the value of USPS's monopolies rests on a counterfactual: how would USPS's net revenues be affected if the statutory monopolies were removed? As such, PRC estimates (1) the extent to which entry, if it were to be legally allowed, into the mail delivery market would be economically profitable; and (2) based on that entry, the extent of lost net revenues USPS would experience. To do so, PRC makes a variety of assumptions about the prospects for new firms to profitably enter segments of the postal delivery market.

The first key element of PRC's analysis is the identification of which types of mail are "contestable." PRC identifies contestable mail as those mail items that are presorted and drop-shipped at local destinations—such as advertising mail or First-Class Mail presorted by 5-digit ZIP Codes that could be drop-shipped. Contestable mail is of most interest to an entrant because there is minimal work related to sorting and transport. For the analysis of the elimination of both monopolies, all contestable mail categories are considered to be available to a potential entrant, should it decide to serve the route. In the case of the analysis of the mailbox monopoly alone, PRC only analyzes the contestable mail categories that fall outside of the letter delivery monopoly. In the latter case, private delivery companies are currently allowed to provide such service (for example, delivery of magazines and advertising mail not covered by USPS's delivery monopoly); however, because private delivery companies are not currently able to place items in mailboxes, they may choose not to deliver these items. There are fewer categories of contestable mail outside the letter monopoly; therefore, when examining the value of only the mailbox monopoly, the volume of contestable mail is less than half of that when both monopolies are examined.

Once contestable mail volumes are identified, PRC makes a variety of assumptions about the economic circumstances of the potential entrant, relative to USPS. In particular, PRC makes assumptions about the extent to which a potential entrant would be able to deliver contestable mail with a lower cost structure than USPS. Two key elements of an entrant's cost advantage are its reduced variable costs and its lower fixed costs relative to USPS. First, the entrant is assumed to have reduced variable costs relative to those of USPS, due to various efficiencies or cost advantages it may enjoy—most notably lower labor rates.
PRC’s base assumption is that the entrant has a 10 percent cost advantage relative to USPS. The extent of the entrant’s lower fixed costs, relative to USPS. Reduced fixed costs are due largely to the assumption that an entrant will not deliver mail 6 days per week, as USPS is required to do. The base case assumption is that an entrant would deliver 3 days per week if both monopolies were eliminated and only 1 day per week if only the mailbox monopoly were eliminated. The reduction in delivery days would provide a cost advantage for the entrant over USPS. PRC also makes assumptions regarding the prices an entrant would charge for its service, relative to USPS rates. PRC’s base assumption is that the entrant’s prices would be 10 percent lower than USPS’s rates. Entrant’s Determination of which Routes to Serve The PRC analysis focuses on the likelihood of entry—that is, the ability of an entrant to make a profit—at the route level. PRC staff told us that PRC’s analysis of entry decisions was conducted at the route level because the data it obtained from USPS for this analysis are for individual routes. To ascertain which routes an entrant might serve, PRC uses data on mail volumes, by each type of mail category, for a sample of routes it obtains from USPS. Based on that route-level volume data, and the specific economic assumptions described above, PRC is able to calculate the entrant’s prospective profitability on the set of sample routes—and thus ascertain which routes the entrant would enter if the monopolies were lifted. Once the entered routes are determined, PRC can estimate USPS’s volume and revenue losses due to entry. PRC’s calculation of USPS’s expected loss in net revenues is considered to be the estimated “value” of USPS’s monopolies. Some Considerations of PRC Method PRC’s economic analysis informs, with caveats, stakeholders and decision-makers about the value of USPS’s monopolies. However, some elements of its analysis are important to consider when interpreting the findings. In particular: PRC assumes that if an entrant chooses to serve a route—that is, the entrant believes it can make a profit on the route—it will immediately garner 100-percent of the contestable mail volume on that route. However, in practice, it may take time for an entrant to gain a significant foothold in the market. Alternative assumptions or sensitivity analyses that attempt to consider the time an entrant would require to build its business—in part due to how mailers make purchasing decisions when there are multiple competitors—might shed light on the reasonableness of the model’s current assumptions on the entrant’s market share. PRC uses USPS data on individual postal routes as the geographic level for this analysis. However, it is possible that potential entrants could make decisions on a broader regional basis, particularly because mailers (i.e., the entrant’s potential customers) might make their purchasing decisions on a broader geographic basis. PRC staff told us that routes abutting one another typically have similar volume characteristics, which would minimize concerns about the small geographic scope of the PRC approach. Nevertheless, it is difficult to ascertain the extent to which using this small geographic scope in this analysis would result in a pattern of entry in alignment with a viable business plan. Since 2008, PRC has conducted an annual analysis to estimate the lost net revenues that USPS would incur if its monopolies were eliminated. 
For 2015, which is the most recent year available, PRC estimated that the elimination of both the letter and mailbox monopolies would result in a $5.45 billion loss in net revenues for USPS, while the elimination of solely the mailbox monopoly would result in $1.03 billion in lost net revenues. PRC staff told us that increases in the value of the postal monopolies in recent years were due to growing contestable mail volumes, and added that they expect these volumes to continue rising in the near future. As such, PRC staff said that both the value of the combined postal monopolies and the value of the mailbox monopoly on its own are likely to continue to increase in the next few years.

In addition to the individual named above, Derrick Collins (Assistant Director); Chad Williams (Analyst-in-Charge); Samer Abbas; Amy Abramowitz; Antoine Clark; Pat Donahue; Jaci Evans; Kenneth John; Kim McGatlin; Oliver Richard; Frank Todisco; Walter Vance; Michelle Weathers; and Crystal Wesco made key contributions to this report.

USPS's mission is to provide universal delivery service while operating as a self-financing entity. Congress has provided USPS with monopolies to deliver letter mail and access mailboxes to protect its revenues, which enables it to fulfill its universal service mission, among other reasons. Despite its monopolies, USPS's poor financial condition has placed its universal service mission at risk. USPS's net losses were $5.6 billion in fiscal year 2016 and were greater than $62 billion over the past decade. GAO was asked to review the postal monopolies. This report examines (1) what is known about the value of USPS's letter delivery and mailbox monopolies; (2) views on the potential effects of narrowing or eliminating these monopolies; and (3) considerations that would need to be addressed to estimate the effects of laws that apply differently to USPS and its private competitors. To address these questions, GAO reviewed reports issued by PRC and others; obtained views from USPS and PRC, as well as postal stakeholders and experts who have submitted public comments to PRC proceedings; and collected information from six countries—France, Germany, Italy, Japan, Sweden, and the United Kingdom—that have eliminated their postal monopolies, selected based on criteria including their share of global mail volume. GAO is making no recommendations in this report. USPS disagreed with some stakeholder perspectives, among other things. GAO believes that the information is portrayed in a balanced way and added USPS responses, where appropriate.

The value of the U.S. Postal Service's (USPS) letter delivery and mailbox monopolies was $5.45 billion in fiscal year 2015, according to the most recent estimate prepared by the Postal Regulatory Commission (PRC), the regulator of USPS. This figure suggests that USPS's net income would decline by this amount if its monopolies were eliminated. To develop these estimates, PRC identifies the mail covered under USPS's monopolies for which a potential entrant might compete to provide service if the monopolies were to be eliminated; such mail is referred to as "contestable." PRC's estimated value of these monopolies has increased substantially in recent years—it was $3.28 billion in fiscal year 2012—and PRC staff expects that the value will continue to increase in the next few years due to increased volumes of contestable mail.
Narrowing or eliminating USPS's letter delivery and mailbox monopolies would likely have varied effects, according to views provided by postal stakeholders, experts, USPS, and PRC. For example, all parties agreed that allowing other entities to deliver letters could decrease USPS's revenues, and that additional strain would be placed on USPS's ability to continue providing the current level of universal service. Additionally, some stakeholders said that allowing other entities to deliver items to the mailbox could adversely affect the security of mail and increase clutter that would impair USPS's delivery efficiency. On the other hand, most of the postal experts we interviewed said that allowing entry to this market by private competitors could result in increased competition that would spur USPS to become more efficient. Officials from foreign posts or regulators in all six of the countries GAO contacted reported increases in competition after ending their postal delivery monopolies, and some of these countries also reported losses of revenue and market share for the carriers providing universal service. Stakeholders, experts, foreign officials, and USPS agreed that postal policies are interdependent and therefore need to be considered in tandem with one another; officials from all six countries we contacted told us that concurrent postal policy changes, such as increasing a post's degree of commercial freedom or decreasing the scope of its universal service obligation, assisted their transitions away from postal monopolies. Estimating the effects of laws that apply differently to USPS and its private competitors would require steps including defining appropriate study objectives and assessing scope and methodological tradeoffs. For example, objectives would need to clarify the extent of financial effects to be estimated—whether for USPS as a whole, for only specific products, or for USPS relative to competitors. Scoping decisions would need to define the specific areas to be studied, the period of time to be reviewed, and the type of data to be collected. This would involve multiple considerations, including determining which laws to include and how to address differing stakeholder views. Additional judgment would be needed to address any lack of consensus on methodologies and to determine the appropriate degree of time and resources. For example, a comprehensive study estimating the effects of every law would require significant time and resources; if estimates were desired in a shorter time frame—or if financial resources were limited—tradeoffs would be required. |
Title insurance is designed to guarantee clear ownership of a property that is being sold. The policy is designed to compensate either the lender (through a lender's policy) or the buyer (through an owner's policy) up to the amount of the loan or the purchase price, respectively. Title insurance is sold primarily through title agents who check the history of a title by examining public records. The title policy insures the policyholder against any claims that existed at the time of purchase but were not in the public record. Title insurance premiums are paid only once during a purchase, refinancing, or, in some cases, home equity loan transaction. The title agent receives a portion of the premium as a fee for the title search and examination work and its commission. The party responsible for paying for the title policies varies by state. In many areas, the seller pays for the owner's policy and the buyer pays for the lender's policy, but the buyer may also pay for both policies—or split some, or all, of the costs with the seller. According to a recent nationwide survey, the average cost for simultaneously issuing lender's and owner's policies on a $180,000 loan (plus other associated title costs) was approximately $925, or about 34 percent of the average total loan origination and closing fees.

We identified several important items for further study, including the way policy premiums are determined, the role played by title agents, the way that title insurance is marketed, the growth of affiliated business arrangements, and the involvement of and coordination among the regulators of the multiple types of entities involved in the marketing and sale of title insurance.

For several reasons, the extent to which title insurance premium rates reflect insurers' underlying costs is not always clear. First, the largest cost for title insurers is not losses from claims—as it is for most types of insurers—but expenses related to title searches and agent commissions (see fig. 1). However, most state regulators do not consider title search expenses to be part of the premium, and do not include them in regulatory reviews that seek to determine whether premium rates accurately reflect insurers' costs. Second, many insurers provide discounted premiums on refinance transactions because the title search covers a relatively short period, but the extent of such discounts and their use is unclear. Third, the extent to which premium rates increase as loan amounts or purchase prices increase is also unclear. Costs for title search and examination work do not appear to rise as loan or purchase amounts increase, and such costs are insurers' largest expense. If premium rates reflected the underlying costs, total premiums could reasonably be expected to increase at a relatively slow rate as loan or purchase amounts increased; however, it is not clear that they do so (see the illustrative calculation below).

Title agents play a more significant role in the title insurance industry than agents do in most other types of insurance, performing most underwriting tasks as well as the title search and examination work. However, the amount of attention they receive from state regulators is not clear. For example, according to data compiled by the American Land Title Association (ALTA), while most states require title agents to be licensed, 3 states plus the District of Columbia do not; 18 states and the District of Columbia do not require agents to pass a licensing exam.
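To put the earlier point about premium rates and underlying costs in concrete terms, the following is a purely illustrative calculation in Python; the rate schedule, search-and-examination cost, and expected claims rate are hypothetical assumptions chosen for illustration, not actual filed title insurance rates or insurer cost data.

# Hypothetical illustration of why premiums that scale with the loan amount
# can outpace underlying costs that are largely fixed per transaction.

def premium(loan_amount: float, rate_per_thousand: float = 5.0) -> float:
    """Premium under a hypothetical schedule charging a flat rate per $1,000 of coverage."""
    return loan_amount / 1_000 * rate_per_thousand

def underlying_cost(loan_amount: float,
                    search_and_exam: float = 300.0,
                    expected_claims_rate: float = 0.0003) -> float:
    """Search and examination work is roughly fixed; expected claim losses are a small share of coverage."""
    return search_and_exam + expected_claims_rate * loan_amount

for loan in (100_000, 200_000, 400_000):
    print(loan, round(premium(loan)), round(underlying_cost(loan)))
# Under these assumptions the premium doubles with the loan amount
# (500, then 1,000, then 2,000) while the underlying cost rises only modestly
# (330, then 360, then 420), which is why premiums that track loan size closely
# would not necessarily track insurers' costs.

Whether actual rate schedules behave this way is precisely the kind of question identified above for further study.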
Although the National Association of Insurance Commissioners (NAIC) has produced model legislation that states can use in their regulatory efforts, according to NAIC, as of October 2005 only three states had passed the model law or similar legislation.

For several reasons, the competitiveness of the title insurance market has been questioned. First, while consumers pay for title insurance, they generally do not know how to "shop around" for the best deal and may not even know that they can. Instead, they often rely on the advice of a real estate or mortgage professional in choosing a title insurer. As a result, title insurers and agents normally market their products exclusively to these types of professionals, who in some cases may recommend not the least expensive or most reputable title insurer or agent but the one that represents the professional's best interests. Second, the title industry is highly concentrated. ALTA data show that in 2004 the five largest title insurers and their subsidiary companies accounted for over 90 percent of the total premiums written. Finally, the low level of losses title insurers generally suffer—and large increases in operating revenue in recent years—could create the impression of excessive profits, one potential sign of a lack of competition.

The use of affiliated business arrangements involving title agents and others, such as lenders, real estate brokers, or builders has grown over the past several years. Within the title insurance industry, the term "affiliated business arrangements" generally refers to some level of joint ownership among a title insurer, title agent, real estate broker, mortgage broker, lender, and builder (see fig. 2). For example, a mortgage lender and a title agent might form a new jointly owned title agency, or a lender might buy a portion of an existing title agency. Such arrangements, which may provide consumers with "one-stop shopping" and lower costs, can also be abused, presenting conflicts of interest when they are used as conduits for giving referral fees back to the referring entity or when the profits from the title agency are significant to the referring entity.

Several types of entities besides insurers and their agents are involved in the sale of title insurance, and the degree of involvement of and the extent of coordination among the regulators of these entities appears to vary. These entities include real estate brokers and agents, mortgage brokers, lenders, and builders, all of which may refer clients to particular agencies and insurers. These entities are generally overseen by a variety of state regulators, including insurance departments, real estate commissions, and state banking regulators, that interact to varying degrees. For example, one state insurance regulator with whom we spoke told us that the agency coordinated to some extent with the state real estate commission and at the federal level with the Department of Housing and Urban Development (HUD), but only informally. Another regulator said that it had tried to coordinate its efforts with other regulators in the state, but that the other regulators had generally not been interested. HUD, which is responsible for implementing the Real Estate Settlement Procedures Act (RESPA), has conducted some investigations in conjunction with insurance regulators in some states. Some of these investigations of the marketing of title insurance by title insurers and agents, real estate brokers, and builders have turned up allegedly illegal activities.
Federal and state investigations have identified two primary types of potentially illegal activities in the sale of title insurance, but the extent to which such activities occur in the title insurance industry is unknown. The first involves allegations of kickbacks–that is, fees that title agents or insurers may give to home builders, real estate agents and brokers, or lenders in return for referrals. Kickbacks are generally illegal. In several states, state insurance regulators identified captive reinsurance arrangements that title insurers and agents were allegedly using to inappropriately compensate others, such as builders or lenders, for referrals. State and federal investigators have also alleged the existence of inappropriate or fraudulent affiliated business arrangements. These involve a “shell” title agency that generally has no physical location, employees, or assets, and does not actually perform title and settlement business. Investigators alleged that the primary purpose of these shell companies was to provide kickbacks for business referrals. Investigators have also looked at the various types of alleged kickbacks that title agents have provided, including gifts, entertainment, business support services, training, and printing costs. Second, investigators have uncovered instances of alleged misappropriation or mishandling of customers’ premiums by title agents. For example, one licensed title insurance agent who was the owner (or partial owner) of more than 10 title agencies allegedly failed to remit approximately $500,000 in premiums to the title insurer. As a result, the insurer allegedly did not issue 6,400 title policies to consumers who had paid for them. In response to the investigations, insurers and industry associations say they have begun to address some concerns raised by affiliated businesses, but that clearer regulations and stronger enforcement are needed. One title insurance industry association told us that recent federal and state enforcement actions had motivated title insurers to address potential kickbacks and rebates through, for example, increased oversight of title agents. In addition, the insurers and associations said that competition from companies that break the rules hurt companies that were operating legally and that these businesses welcome greater enforcement efforts. Several associations also told us that clearer regulations regarding referral fees and affiliated business arrangements would aid the industry’s compliance efforts. Specifically, we were told that regulations need to be more transparent about the types of discounts and fees that are prohibited and the types that are allowed. Over the past several years, regulators and others have suggested changes to regulations that would affect the way title insurance is sold, and further study of the issues raised by these potential changes could be beneficial. In 2002, in order to simplify and improve the process of obtaining a home mortgage and to reduce settlement costs for consumers, HUD proposed revisions to the regulations that implement RESPA. But HUD later withdrew the proposal in response to considerable comments from the title industry, consumers, and other federal agencies. In June 2005, HUD announced that it was again considering revisions to the regulations. 
In addition, NAIC officials told us that the organization was considering changes to the model title insurance and agent laws to address current issues such as the growth of affiliated business arrangements and to more closely mirror RESPA’s provisions on referral fees and sanctions for violators. Finally, some consumer advocates have suggested that requiring lenders to pay for the title policies from which they benefit might increase competition and ultimately lower costs for consumers, because lenders could then use their market power to force title insurers to compete for business based on price. The issues identified today raise a number of questions that we plan to address as part of our ongoing work. We look forward to the continued cooperation of the title industry, state regulators, and HUD as we continue this work. Mr. Chairman, this completes my prepared statement. I would be pleased to answer any questions that you or Members of the Subcommittee may have.

For further information about this testimony, please contact Orice Williams at (202) 512-8678 or williamso@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Larry Cluff (Assistant Director), Tania Calhoun, Emily Chalmers, Nina Horowitz, Marc Molino, Donald Porteous, Melvin Thomas, and Patrick Ward.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

Title insurance is a required element of almost all real estate purchases and is not an insignificant cost for consumers. However, consumers generally do not have the knowledge needed to "shop around" for title insurance and usually rely on professionals involved in real estate--such as lenders, real estate agents, and attorneys--for advice in selecting a title insurer. Recent state and federal investigations into title insurance sales have identified practices that may have benefited these professionals and title insurance providers at the expense of consumers. At the request of the House Financial Services Committee, GAO currently has work under way studying the title insurance industry, including pricing, competition, the size of the market, the roles of the various participants in the market, and how the industry is regulated. This testimony discusses the preliminary results of GAO's work to date and identifies issues for further study. In so doing, this testimony focuses on: (1) the reasonableness of cost structures and agent practices common to the title insurance market that are not typical of other insurance markets; (2) the implications of activities identified in recent state and federal investigations that may have benefited real estate professionals rather than consumers; and (3) the potential need for regulatory changes that would affect the way that title insurance is sold. Some cost structures and agent practices that are common to the title insurance market are not typical of other lines of insurance and merit further study. First, the extent to which premium rates reflect underlying costs is not always clear.
For example, most states do not consider title search and examination costs--insurers' largest expense--to be part of the premium, and do not review these costs. Second, while title agents play a key role in the underwriting process, the extent to which state insurance regulators review agents is not clear. Few states collect information on agents, and three states do not license them. Third, the extent to which a competitive environment exists within the title insurance market that benefits consumers is also not clear. Consumers generally lack the knowledge necessary to "shop around" for a title insurer and therefore often rely on the advice of real estate and mortgage professionals. As a result, title agents normally market their business to these professionals, creating a form of competition from which the benefit to consumers is not always clear. Fourth, real estate brokers and lenders are increasingly becoming full or part owners of title agencies, which may benefit consumers by allowing one-stop shopping, but may also create conflicts of interest. Finally, multiple regulators oversee the different entities involved in the title insurance industry, but the extent of involvement and coordination among these entities is not clear. Recent state and federal investigations have identified potentially illegal activities--mainly involving alleged kickbacks--that also merit further study. The investigations alleged instances of real estate agents, mortgage brokers, and lenders receiving referral fees or other inducements in return for steering business to title insurers or agents, activities that may have violated federal or state anti-kickback laws. Participants allegedly used several methods to convey the inducements, including captive reinsurance agreements, fraudulent business arrangements, and discounted business services. For example, investigators identified several "shell" title agencies created by a title agent and a real estate or mortgage broker that had no physical location or employees and did not perform any title business, allegedly serving only to obscure referral payments. Insurers and industry associations with whom we spoke said that they had begun to address such alleged activities but also said that current regulations needed clarification. In the past several years, regulators, industry groups, and others have suggested changes to the way title insurance is sold, and further study of these suggestions could be beneficial. For example, the Department of Housing and Urban Development announced in June 2005 that it was considering revisions to the regulations implementing the Real Estate Settlement Procedures Act. In addition, the National Association of Insurance Commissioners is considering changes to model laws for title insurers and title agents. Finally, at least one consumer advocate has suggested that requiring lenders to pay for the title policies from which they benefit might increase competition and ultimately lower consumers' costs.
National default and foreclosure rates rose sharply from calendar year 2005 through 2009 to the highest level in at least 29 years (fig. 1). Default rates declined slightly from the fourth quarter of 2009 to the first quarter of 2010 but, at 4.91 percent, were still more than six times higher than they were at the start of 2005. Foreclosure start rates—the percentage of loans that entered the foreclosure process each quarter—grew nearly three-fold in the 5-year period from 0.42 percent to 1.23 percent in the first quarter of 2010. Put another way, more than half a million mortgages entered the foreclosure process in the first quarter of 2010, compared with about 165,000 in the first quarter of 2005. Finally, foreclosure inventory—the number of houses for which the lender has initiated foreclosure proceedings but has not yet sold the properties—rose more than 325 percent from the first quarter of 2005 to the first quarter of 2010, increasing from 1.08 percent to 4.63 percent, with most of that growth occurring after the second quarter of 2007. As a result, as of the end of the first quarter of 2010, more than 2 million loans were in the foreclosure inventory. As we reported in December 2008, Treasury has established an Office of Homeownership Preservation within OFS to address the issues of preserving homeownership and protecting home values. On February 18, 2009, Treasury announced the broad outline of a three-pronged effort to help homeowners avoid foreclosure and provided additional program descriptions on March 4, 2009; April 28, 2009; and May 14, 2009: The Home Affordable Refinance Program (HARP), which provides a refinancing vehicle for homeowners who are current on their mortgage payments with mortgages held or guaranteed by Fannie Mae and Freddie Mac, interest rates higher than the prevailing market rates, and loan-to- value ratios of between 80 and 105. Using the prevailing interest rates in February 2009, Treasury estimated that between four and five million borrowers could refinance their mortgages through this program. No TARP funds will be used to refinance these loans. Instead, Fannie Mae or Freddie Mac, as the owner or guarantor of the loan, purchased or guaranteed the refinanced mortgages. The program has resulted in relatively few refinances—between February 2009 and March 2010, fewer than 292,000 borrowers were refinanced through this program. In March 2010, the program’s end date was extended from June 10, 2010, to June 30, 2011. An increased funding commitment from Treasury for preferred stock purchases from Fannie Mae and Freddie Mac to strengthen confidence in the two government-sponsored enterprises (GSE) and help support low mortgage rates. The preferred stock purchase agreements, authorized by the Housing and Economic Recovery Act of 2008 (HERA), were amended in May 2009 to increase Treasury’s commitment to each GSE from $100 billion to $200 billion. On December 24, 2009, the preferred stock purchase agreements were again amended with the provision that the $200 billion cap increase as necessary. The increased funding commitment would be made under HERA and would not require the use of TARP funds. Through March 2010, the cumulative reduction in the net worth of the two GSEs required them to draw $111 billion from the Treasury under the senior preferred stock purchase agreements. In May 2010, the Federal Housing Finance Agency requested an additional $10.6 billion in Treasury assistance for Freddie Mac and an additional $8.4 billion for Fannie Mae. 
HAMP, which was designed to commit up to $75 billion of GSE and TARP funds to offer loan modifications to up to three to four million borrowers who were struggling to pay their mortgages. According to Treasury officials, HAMP would use up to $50 billion of TARP funds, primarily to encourage the modification of non-GSE mortgages that financial institutions owned and held in their portfolios (whole loans) and mortgages held in private label securitization trusts. Fannie Mae and Freddie Mac together are expected to provide up to an additional $25 billion to encourage servicers and borrowers to modify loans owned or guaranteed by the two GSEs. As outlined in the March 4, 2009, program guidelines, HAMP’s eligibility requirements for first-lien modifications stipulate that: the property must be owner-occupied and the borrower’s primary residence (the program excludes vacant and investor-owned properties); the property must be a single-family property (one to four units) with a maximum unpaid principal balance on the unmodified first-lien mortgage that is equal to or less than $729,750 (for a one-unit property); the loan must have been originated on or before January 1, 2009; the borrower must complete a HAMP Hardship Affidavit documenting a financial hardship; and the first-lien mortgage payment must be more than 31 percent of the homeowner’s gross monthly income. The HAMP first-lien modification program has four main features: 1. Cost sharing. Mortgage holders and investors will be required to take the first loss in reducing the borrower’s monthly payments to no more than 38 percent of the borrower’s income. For non-GSE loans, Treasury will then use TARP funds to match further reductions on a dollar-for-dollar basis, down to the target of 31 percent of the borrower’s gross monthly income. The modified monthly payment is fixed for 5 years or until the loan is paid off, whichever is earlier, as long as the borrower remains in good standing with the program. After 5 years, investors no longer receive payments for cost sharing, and the borrowers’ interest rate may increase by 1 percent a year to a cap of the Freddie Mac rate for 30-year fixed rate loans as of the date that the modification agreement was prepared, and the borrower’s payments would increase to accommodate the increase in interest rate. The interest rate and monthly payments are then fixed for the remainder of the loan. 2. Standardized net present value (NPV) model. The NPV model compares expected cash flows from a modified loan to the same loan with no modification, based on certain assumptions. If the expected investor cash flow with a modification is greater than the expected cash flow without a modification, the loan servicer is required to modify the loan. According to Treasury, the NPV model increases mortgage investors’ confidence that modifications under HAMP are in their best financial interests and helps ensure that borrowers are treated consistently under the program by providing a transparent and externally derived objective standard for all loan servicers to follow. 3. Standardized waterfall. Servicers must follow a sequential modification process to reduce payments as close to 31 percent of gross monthly income as possible. Servicers must first capitalize accrued interest and certain expenses paid to third parties and add this amount to the loan balance (principal) amount. 
Next, interest rates must be reduced in increments of one-eighth percent until the 31 percent debt-to-income target is reached, but servicers may not reduce interest rates below 2 percent. If the interest rate reduction does not result in a debt-to-income ratio of 31 percent, servicers must then extend the maturity and/or amortization period of the loan in 1-month increments up to 40 years. Finally, if the debt-to-income ratio is still over 31 percent, the servicer must forbear, or defer, principal until the payment is reduced to the 31-percent target. Servicers may also forgive mortgage principal at any step of the process to achieve the target monthly payment ratio of 31 percent, provided that principal reduction is allowed by the investor. 4. Incentive payment structure. Treasury will use HAMP funds to provide both one-time and ongoing (“pay-for-success”) incentives for up to 5 years to non-GSE loan servicers, mortgage investors, and borrowers to increase the likelihood that the program will produce successful modifications over the long term and help cover the servicers’ and investors’ costs of modifying a loan. Borrowers must also demonstrate their ability to pay the modified amount by successfully completing a trial period of at least 90 days before the loan is permanently modified and any government payments are made under HAMP. Treasury has entered into agreements with Fannie Mae and Freddie Mac to act as its financial agents for HAMP. Fannie Mae, as the HAMP program administrator, is responsible for developing and administering program operations including registering servicers and executing participation agreements with and collecting data from them. A separate division within Freddie Mac, the Making Home Affordable- Compliance (MHA-C) team is the HAMP compliance agent, and is responsible for assessing servicer compliance with non-GSE program guidelines, including conducting onsite and remote servicer reviews and audits. As of mid-June 2010, 109 active servicers had signed HAMP Servicer Participation Agreements to modify first-lien mortgages not owned or guaranteed by Fannie Mae and Freddie Mac. Roughly $39.9 billion in TARP funds has been committed to these servicers for modification of non-GSE loans. Based on the HAMP Servicer Performance Report through May 2010, more than 1.5 million HAMP trial modifications had been offered to borrowers of GSE and non-GSE loans, and more than 1.2 million of these had begun HAMP trial modifications. Of the trial modifications begun, approximately 468,000 were in active trial modifications, roughly 340,000 were in active permanent modifications, roughly 430,000 trial modifications had been canceled, and roughly 6,400 permanent modifications had been canceled. As of May 17, 2010, more than $132 million in TARP funds had been disbursed to HAMP servicers. Borrowers who received permanent first-lien HAMP modifications had high levels of total debt and high loan-to-value ratios. Through the end of May 2010, borrowers receiving permanent HAMP modifications had a median back-end debt ratio (the ratio of total monthly debts to gross monthly income) of roughly 80 percent prior to loan modification. The median reduction in monthly mortgage payments as a result of HAMP was roughly $514, which reduced these borrowers’ median back-end debt-to- income ratio to 64 percent. In addition, according to Fannie Mae, through mid-April 2010, many borrowers continued to be underwater after a HAMP modification, with an average loan-to-value ratio more than 150 percent. 
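The standardized waterfall and the 31 percent payment target described above can be expressed as a short calculation. The following sketch is illustrative only, not Treasury's or any servicer's actual logic: it assumes a standard fixed-rate amortization formula, folds escrow items into the target payment, omits the 38 percent investor first-loss and Treasury cost-sharing split, and uses hypothetical function names, step sizes, and example figures.

    # Simplified sketch of the HAMP first-lien standardized waterfall (illustrative only).
    def monthly_payment(principal, annual_rate, months):
        """Fully amortizing monthly payment for a fixed-rate loan."""
        r = annual_rate / 12.0
        return principal / months if r == 0 else principal * r / (1.0 - (1.0 + r) ** -months)

    def hamp_waterfall(unpaid_balance, arrears, annual_rate, months_left, gross_monthly_income):
        target = 0.31 * gross_monthly_income            # 31 percent front-end debt-to-income target

        # Step 1: capitalize accrued interest and allowable third-party expenses.
        principal = unpaid_balance + arrears

        # Step 2: reduce the interest rate in one-eighth percent increments, with a 2 percent floor.
        rate, term = annual_rate, months_left
        while monthly_payment(principal, rate, term) > target and rate > 0.02:
            rate = round(max(rate - 0.00125, 0.02), 5)

        # Step 3: extend the term/amortization period in 1-month increments, up to 40 years.
        while monthly_payment(principal, rate, term) > target and term < 480:
            term += 1

        # Step 4: forbear (defer) principal, interest free, until the target payment is reached.
        amortizing, forborne = principal, 0.0
        while monthly_payment(amortizing, rate, term) > target and amortizing > 0:
            amortizing -= 100.0                         # coarse $100 step, for illustration only
            forborne += 100.0

        return {"rate": rate, "term_months": term, "forborne_principal": forborne,
                "monthly_payment": round(monthly_payment(amortizing, rate, term), 2)}

    # Hypothetical borrower: $250,000 balance, $5,000 in arrears, 7.5 percent note rate,
    # 300 months remaining, $4,000 gross monthly income (target payment of $1,240).
    print(hamp_waterfall(250_000, 5_000, 0.075, 300, 4_000))

In practice, servicers would also verify the basic eligibility criteria (owner occupancy, origination date, balance limits) and run the NPV test discussed later in this statement before applying these steps, and principal forgiveness could be substituted at any step if the investor allows it.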
In addition to first-lien modifications, in March 2010 Treasury issued revised guidelines for the second-lien modification program under HAMP (2MP), as well as the Home Affordable Foreclosure Alternatives Program (HAFA). However, Treasury has not stated how much of the $50 billion in TARP funds these two programs are expected to use. 2MP provides incentives to investors, servicers, and borrowers for the modification of second liens if the first lien has been modified under HAMP. Under 2MP, servicers who sign agreements to participate in the program must modify, partially extinguish, or fully extinguish second liens where the first lien has been modified under HAMP. As of June 2010, seven servicers had signed up for 2MP, and at least one of these servicers has initiated trial modifications for second liens. According to Treasury, four of these seven servicers hold more than 50 percent of all second liens. Regarding HAFA, as of April 5, 2010, non-GSE servicers could also begin offering foreclosure alternatives, such as short sales and deeds-in-lieu, in cases where the servicer was unable to approve the borrower for HAMP, the borrower did not accept a HAMP trial modification, or the borrower defaulted on a HAMP modification. The program provides incentive payments to investors, servicers, and borrowers for completing these foreclosure alternatives in lieu of foreclosure. In March 2010, Treasury announced four additional HAMP-funded programs—one for principal reduction under HAMP, one for temporary forbearance for unemployed borrowers, an FHA refinancing program, and the HFA Hardest-Hit Fund. Principal reduction and temporary forbearance for unemployed borrowers could be implemented in the summer of 2010, and the FHA refinancing program in the fall, but implementation of the HFA Hardest-Hit Fund programs will vary by state. The principal reduction program under HAMP will require servicers to consider principal reduction for HAMP-eligible borrowers with loan-to-value ratios greater than 115 percent. Treasury has not yet finalized the potential amount of TARP funds that will be spent on this HAMP program or the number of borrowers expected to receive principal reductions. Initial program guidelines were issued in June 2010, and the program is expected to be effective for participating HAMP servicers in the fall of 2010. Under the plan for temporary forbearance for unemployed borrowers, which will be effective July 1, 2010, servicers will be required to consider unemployed borrowers for a forbearance plan to reduce mortgage payments to an affordable level for 3 months or until notification that the borrower has become reemployed, whichever comes first. To be considered, unemployed borrowers must request forbearance before falling behind on three monthly mortgage payments. The servicers must offer forbearance if the borrower’s monthly mortgage payments exceed 31 percent of monthly gross income, including unemployment benefits. Treasury has not established how many borrowers are likely to be helped with this feature. Once the borrower has found employment, or 30 days before the forbearance period has expired, the servicer must evaluate the borrower for eligibility for a HAMP first-lien modification. According to Treasury, there will be no HAMP incentive payments made for these forbearance plans, so the program will not require TARP funds. Missed payments during the forbearance period are capitalized, and servicers may not collect late fees during the forbearance period.
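The conditions described above for the unemployment forbearance plan amount to a simple screen. The sketch below is a rough illustration under stated assumptions, not program code; the function name and inputs are hypothetical, and it treats the 31 percent threshold as applying to gross monthly income that includes unemployment benefits, as the guidance describes.

    # Rough screen for the temporary forbearance plan for unemployed borrowers (illustrative only).
    def must_offer_forbearance(is_unemployed, missed_payments, monthly_mortgage_payment,
                               monthly_gross_income_incl_unemployment_benefits):
        # Borrowers must request forbearance before falling behind on three monthly payments.
        if not is_unemployed or missed_payments >= 3:
            return False
        # Forbearance must be offered when the payment exceeds 31 percent of gross monthly income.
        return monthly_mortgage_payment > 0.31 * monthly_gross_income_incl_unemployment_benefits

    # The plan would then reduce payments to an affordable level for 3 months or until the borrower
    # is reemployed, whichever comes first; missed amounts are capitalized and no late fees accrue.
    print(must_offer_forbearance(True, 2, 1_500, 3_200))   # True: payment is roughly 47 percent of income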
According to Treasury, representatives of investors and the four largest servicers, some servicers are already offering similar forbearance programs to unemployed borrowers. The new FHA refinance program will be designated a maximum of $14 billion of the $50 billion originally intended for HAMP and will be a voluntary program for servicers. However, if servicers choose this option, they must reduce borrowers’ original first-lien principal by at least 10 percent, and the resulting ratio of all mortgage debt, including junior liens, to the value of the house can be no greater than 115 percent. The principal balance of the refinanced first-lien loan cannot exceed 97.75 percent of the home’s value. The borrower must be current on existing mortgage payments to qualify and have a credit score of at least 500. The terms and uses of the $14 billion have yet to be specified. The HFA Hardest-Hit Fund designated $2.1 billion out of the $50 billion originally intended for HAMP to 10 state housing finance agencies to develop more localized programs to preserve homeownership and protect home values. As of mid-May 2010, Treasury was in the process of reviewing program proposals submitted by the first five housing finance agencies that received funding and expected to receive proposals from the second five state agencies on June 1, 2010. However, according to initial proposals, some program efforts may require significant implementation periods. For example, one state agency reported that some of its program features may not be available until 5 months after Treasury approves the program. As shown in table 1, the implementation dates for a number of the HAMP- funded homeowner assistance programs have not yet been specified, and Treasury has not announced how many borrowers the programs are expected to help. With the exception of the HFA Hardest-Hit Fund, the cutoff date for borrowers to be accepted into any of the HAMP-funded programs is December 31, 2012, and disbursements of TARP funds may continue until December 2017. The cutoff date and last possible disbursement for the HFA Hardest-Hit Fund has yet to be determined. Although one of Treasury’s stated goals for HAMP is to standardize the loan modification process across the servicing industry, we identified several areas of inconsistencies in how servicers treat borrowers under HAMP. These areas of inconsistency could lead to inequitable treatment of similarly situated borrowers, and borrowers in similar circumstances could have different outcomes. First, we found that servicers differed in when and how they solicited borrowers for HAMP, and numerous borrowers had complained that they did not receive timely responses to their HAMP applications or had difficulty getting information from their servicers about the program. Until March 2010, a year into the program, Treasury had only minimal requirements for soliciting borrowers for HAMP and had yet to finalize comprehensive measures that addressed servicers’ performance in this area. Further, Treasury had not issued specific guidelines for servicers on how to determine whether borrowers current on their mortgage payments were in imminent danger of default or for conducting internal quality assurance reviews. Treasury also had not provided servicers with specific requirements detailing how servicers should handle and track borrowers’ complaints about HAMP. 
As a result, some servicers that we contacted did not systematically track all HAMP complaints or their resolutions, and borrowers may not have been aware that an independent escalation process existed to handle complaints about servicers or to challenge HAMP eligibility denial determinations. Lastly, Treasury had not yet determined specific remedies for servicer noncompliance with HAMP program requirements—a key enforcement mechanism for ensuring that servicers treated borrowers equitably under HAMP. For the first year of the HAMP first-lien program, Treasury’s key guidance on its requirements for the initial outreach to or solicitation of borrowers for participation in HAMP stated that servicers should follow their existing practices for soliciting borrowers. The 10 servicers we contacted reported varying practices, with a few soliciting borrowers who were 31 days delinquent on payments and some others not soliciting borrowers until borrowers were at least 60 days delinquent on payments. However, even when servicers said their practice was to solicit borrowers who were 60 days past due, they very often did not. The proportion of borrowers who were 60 days delinquent on their mortgages and who were solicited for HAMP ranged from 16 to 95 percent. On average, the 10 servicers we contacted solicited approximately 60 percent of such borrowers. Some servicers explained that they did not solicit certain borrowers because, for example, the borrowers did not meet basic eligibility criteria or because the investors for that particular pool of mortgage-backed securities did not allow HAMP modifications. However, as of December 2009, the MHA-C group within Freddie Mac, the compliance agent for HAMP, identified, through its onsite Management Compliance Audits, four servicers that could not always provide evidence that borrowers who were potentially eligible for HAMP had been solicited. In March 2010, more than a year after the program was first announced, Treasury issued additional guidelines governing solicitation efforts. Effective June 2010, servicers must prescreen all first-lien loans for which two or more mortgage payments are due and unpaid to determine if the loans meet the basic HAMP eligibility criteria (e.g., the home is an owner-occupied primary residence and a single-family one-to-four-unit property; the loan originated before January 1, 2009; and the loan balance is within specified limits). Servicers must make a “reasonable effort” to solicit for HAMP any borrower who passes this prescreening—that is, servicers must make a minimum of four telephone calls to the borrower’s last known phone number at different times of the day and send two written notices, by different means, to the borrower’s last known address within 30 days. Because these are new requirements, we could not determine how effective they might be in standardizing solicitation practices, but standardizing solicitation requirements may help ensure that all potentially eligible borrowers are notified about HAMP in a timely manner. Moreover, it appears that some borrowers had problems reaching their servicers and obtaining information on the status of their applications and on HAMP in general. For example, between the end of June 2009 and mid-April 2010, approximately 27,000 of the more than 48,000 borrower complaints to the HOPE Hotline—a 24-hour telephone line that provides borrowers with free foreclosure prevention information and counseling—were about this issue.
The most common complaints involved the difficulty of reaching servicers or not hearing back from them in a timely manner after submitting documentation. During our visits to six HAMP servicers, we observed a small sample of phone calls between borrowers and their servicers, several of which involved complaints about the difficulty of contacting servicers about HAMP. For example, four out of the nine calls we observed at one of the large HAMP servicers involved complaints related to servicers’ communications with borrowers. These included complaints that the servicer had lost documentation and that the borrower was not able to speak with a representative knowledgeable about the status of the HAMP application. In October 2009 and in March 2010 Treasury implemented guidelines attempting to address some of these issues. Guidelines issued in October 2009 mandated that servicers acknowledge in writing the receipt of borrowers’ initial HAMP application packages within 10 business days and that they include in their responses a description of their evaluation process and timeline for processing paperwork. Additionally, in March 2010, servicers were required to include a toll-free number in all communications with borrowers, which would allow them to reach a representative capable of providing specific details about the HAMP modification process. In April 2010, the Congressional Oversight Panel recommended that Treasury monitor program participants and enforce the new borrower outreach and communication standards and timelines to increase program transparency. Treasury plans to include the new program requirements in MHA-C’s compliance reviews of HAMP servicers, and it will be important for Treasury to review findings from these reviews to determine whether these requirements do improve servicers’ communications with borrowers and fully address differences among servicers in soliciting borrowers for HAMP. Treasury first drafted metrics to assess HAMP servicers’ performance in communicating with borrowers in October 2009, but these metrics have not yet been finalized. In December 2009, Treasury requested that nine of the largest HAMP servicers provide information on a revised version of these metrics, and Treasury officials told us they were using the results of this request to further revise the metrics to ensure consistent and comparable responses. According to Treasury, the preliminary metrics include measures such as the average speed for answering loss mitigation calls and the number of attempts made to contact each borrower who is in the initial stages of foreclosure. Preliminary results showed inconsistencies among servicers’ responses that could indicate differences either in how servicers were interpreting the questions or in how they treated borrowers. In our July 2009 report, we noted that Treasury lacked finalized performance measures for HAMP. Since then, the Congressional Oversight Panel and SIGTARP have recommended that Treasury collect additional program data and publicly report on the metrics to ensure transparency and evaluate program success. Treasury officials told us they would continue to work with servicers on their responses to these metrics to finalize them and establish a common reporting standard. Treasury plans to collect these metrics for the eight largest HAMP servicers and publicly disclose the results in July 2010. 
Without establishing key performance metrics and reporting of individual servicer performance with respect to those metrics, Treasury cannot achieve full transparency and accountability for the HAMP first-lien modification program results and progress. While Treasury’s goal is to create uniform, clear, and consistent guidance for loan modifications across the servicing industry, as we noted in March 2010, Treasury has not provided specific guidance on how to determine whether borrowers are in imminent danger of default. As also noted in SIGTARP’s March 2010 report on HAMP, this lack of consistent and clear standards could mean that servicers are inconsistently applying criteria in this area and thereby inequitably treating borrowers across the program. According to HAMP guidelines, borrowers who are current or less than 60 days delinquent on their mortgage payments but in imminent danger of defaulting may be eligible for HAMP modifications, and Treasury has emphasized the importance of reaching borrowers before they are delinquent. In particular, Treasury instituted additional incentives to servicers and investors for modifying loans for such borrowers. According to Treasury, 22.9 percent of all trial modifications started as of May 2010 were in this category. Treasury stated that it did not create such guidelines when developing HAMP because it was focused primarily on delinquent borrowers. However, Fannie Mae and Freddie Mac have had standardized imminent default criteria since late April 2009 for modifications of loans owned or guaranteed by the GSEs, and in January 2010 (with an effective date of March 1, 2010) further aligned these guidelines to provide greater consistency between the two GSEs. Treasury officials have stated that they plan to monitor the impact of servicers’ implementation of the new GSE imminent default guidance over the next few months. Treasury then plans to determine whether it will adopt similar criteria for non-GSE loans. As a result of the lack of specific guidance, we found seven different sets of criteria for determining imminent default among the 10 servicers we contacted. The seven sets of criteria that we found varied in both the types of information the servicers considered and in the thresholds they set for factors such as income and cash reserves. Two servicers considered borrowers who met the basic HAMP eligibility requirements (greater than 31 percent monthly mortgage debt-to-income ratio, one-to-four unit single family residence, etc.) in imminent default and the servicers did not impose any additional criteria on them. Three servicers aligned their imminent default criteria for their non-GSE portfolios with the imminent default criteria that the GSEs required for their loans prior to March 1, 2010. In addition to the basic HAMP eligibility requirements, these criteria require borrowers to have cash reserves of no more than 3 months of housing payments (including monthly principal, interest, property tax, insurance, and either condominium, cooperative, or homeowners’ association payments) and a ratio of disposable net income to monthly housing payments (debt coverage ratio) of less than 120 percent. One servicer had begun using the new GSE criteria that sets a new maximum cash reserves limit of $25,000 and does not have debt coverage ratio requirements for its non-GSE loans. 
The remaining four servicers included various additional considerations among their criteria, including: a sliding income scale for the borrower’s mortgage debt-to-income ratio; an increase in expenses or decrease in income that is more than a certain amount; a loan-to-value ratio that is above a certain percentage; and a “hardship” situation lasting longer than 12 months. These differences in criteria may result in one borrower being approved for HAMP, and another with the same financial situation and loan terms being denied by a different servicer. In addition, if a servicer has few or no additional imminent default criteria, the servicer may be offering HAMP modifications to borrowers who may not actually be at true risk of defaulting on their loan. However, if a servicer has very stringent criteria, it may be denying HAMP modifications to borrowers who will ultimately default on their loans because of unaffordable monthly mortgage payments. To account for differences in servicers’ loan portfolios, Treasury specifically allows some differences in how servicers evaluate borrowers for HAMP that could result in inconsistent outcomes for borrowers. For example, servicers may add a risk premium of up to 2.5 percent to the Freddie Mac rate for 30-year fixed mortgages when inputting the discount rate to the NPV model used in evaluating eligibility for HAMP. The NPV model compares the net present value of expected cash flows to the investor from a loan that receives a HAMP modification with the expected cash flows of the same loan with no modification (also considering the likelihood that the loan would end in foreclosure). If the estimated cash flow with a modification is “positive” (i.e., equal to or more than the estimated cash flow of the unmodified loan), the loan servicer is required to make the HAMP modification. The higher the risk premium a servicer chooses, the fewer the number of loans that are likely to pass the NPV model, because expected future cash flows would have less value. Servicers must apply one risk premium to all loans held in their portfolio and one to loans serviced for other investors. Treasury noted that it chose to allow this variation because mortgage holders and investors could have different opportunity costs of capital and different interpretations of risk. Of the 10 servicers we interviewed, 3 servicers (2 large and 1 medium-sized) added the full 2.5 percent risk premium allowable, while the other 7 servicers did not add an additional risk premium. According to our analysis of Treasury data, as of April 17, 2010, 11 servicers used a risk premium, most of them the full 2.5 percent. Of concern, MHA-C, through its compliance audits, found that 15 of the largest 20 participating servicers did not comply with various aspects of the program guidelines in their implementation of the NPV model. This lack of compliance likely resulted in differences in how borrowers were evaluated, and could have resulted in the inequitable treatment of similarly situated borrowers. Servicers have two options for implementation of the NPV model. Either they may use the Treasury version of the NPV model housed on a Web portal hosted by Fannie Mae in its capacity as Treasury’s financial agent, or they may recode the NPV model to run it on their own internal systems.
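The NPV comparison and the servicer-selected risk premium described above can be outlined in a few lines. The following is a hypothetical outline only, not Treasury's actual model: the real model incorporates default, redefault, and home-price assumptions that are not shown here, and the cash flows, function names, and example rates below are assumptions for illustration.

    # Hypothetical outline of the HAMP NPV test (not Treasury's actual model).
    def npv(monthly_cashflows, annual_discount_rate):
        """Present value of a stream of expected monthly cash flows to the investor."""
        r = annual_discount_rate / 12.0
        return sum(cf / (1.0 + r) ** m for m, cf in enumerate(monthly_cashflows, start=1))

    def npv_test(modified_cashflows, unmodified_cashflows, freddie_mac_30yr_rate, risk_premium=0.0):
        # Servicers may add a risk premium of up to 2.5 percentage points to the Freddie Mac
        # 30-year fixed rate used as the discount rate.
        assert 0.0 <= risk_premium <= 0.025
        discount_rate = freddie_mac_30yr_rate + risk_premium
        # "NPV positive": the modification is required when its value to the investor is equal
        # to or greater than the value of leaving the loan unmodified.
        return npv(modified_cashflows, discount_rate) >= npv(unmodified_cashflows, discount_rate)

    # Made-up expected cash flows: modified payments for 30 years versus a few unmodified
    # payments followed by an assumed foreclosure recovery 19 months out.
    modified = [1_200.0] * 360
    unmodified = [1_800.0] * 6 + [0.0] * 12 + [120_000.0]
    print(npv_test(modified, unmodified, freddie_mac_30yr_rate=0.05, risk_premium=0.025))

As the report notes, choosing a higher risk premium tends to reduce the number of loans that pass this test; a sketch like this also shows why the compliance reviews discussed next focus on whether recoded versions of the model use exactly the prescribed inputs.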
Among seven servicers that had recoded the NPV model to run it on their own internal systems, MHA-C found that the servicers had failed to hold certain data constant when rerunning the NPV model for borrowers they were evaluating for a permanent HAMP modification. HAMP guidelines state that only income-related inputs or incorrect data can be changed during a second NPV model run. But because these servicers often linked the NPV model with their servicing system, values for inputs such as property values and credit scores were erroneously updated during the rerunning of the NPV model. In these cases, MHA-C required the servicer to make the appropriate fixes so that their in-house models were consistent with the Treasury model. Until such fixes were made, MHA-C required the servicers to refrain from denying permanent modifications because of negative NPV results unless these results were validated by the Treasury version of the NPV model housed on the Fannie Mae Web portal with the appropriate data values. In addition, MHA-C has required these servicers to proactively resolicit any borrowers who were incorrectly denied a permanent HAMP modification due to the NPV errors. Eight servicers that exclusively use the Fannie Mae Web portal had similar problems with their NPV inputs when rerunning the NPV model while evaluating borrowers for a permanent modification. In these cases, servicers have been required to reanalyze loans that were affected by the error and outline a corrective action plan. Although MHA-C notified almost all of the 15 servicers of these errors in February 2010, some of the servicers are still in the process of analyzing which borrowers were affected, and MHA-C is monitoring the servicers’ progress in these analyses and has instructed servicers not to conduct foreclosure sales until remediation activities are complete. According to Treasury, the number of borrowers who were denied because of a servicer’s NPV errors could range from a handful to thousands, depending on the size of the servicer and the extent of the error. In addition, servicers themselves have identified process errors that led to inconsistencies in how they were evaluating borrowers for HAMP through their quality assurance reviews. We reviewed quality assurance reports from the 10 servicers we interviewed and found that the error rates for the calculation of borrower income were well above the servicers’ own established error thresholds, often set at 3 to 5 percent. In fact, half of these servicers reported at least a 20-percent error rate for the loan modifications sampled during the most recent review provided to us. Without accurate income calculations, similarly situated borrowers applying for HAMP may be inequitably evaluated for the program and may be inappropriately deemed eligible or ineligible for the program. Some servicers also found other types of errors, such as failing to include condominium association dues in the monthly target housing payment; charging borrowers fees prohibited by HAMP guidelines—for example, for property valuation; and not reducing the monthly mortgage payment for the HAMP modification to 31 percent or less of the borrower’s gross monthly income. As a result of these audit findings, servicers implemented process improvements and corrective actions. 
Some of the servicers resolicited borrowers who were incorrectly turned down for HAMP, while others implemented additional controls to their evaluation processes, such as additional reviews and enhanced technology systems to aid in the income calculation process. Most of the servicers implemented additional training for staff in the specific areas in which errors were found. For example, one servicer held training on calculating rental income and income for self-employed borrowers, since these types of income calculations accounted for a large portion of errors. However, a lack of specific guidelines has also led to significant variations in servicers’ quality assurance programs for HAMP. According to the Standards for Internal Control in the Federal Government, the scope of internal program evaluations should be appropriate and reflect the associated risks. Treasury guidance requires servicers to develop and execute internal quality assurance programs to ensure compliance with HAMP, but its guidelines are not sufficiently specific to ensure that servicers are mitigating all of the potential program risks. For example, potential program risks include improper offers of permanent and trial HAMP modifications, as well as improper denials of both permanent and trial modifications. However, while Treasury’s guidelines state that servicers must include either a statistically based sample (with a 95 percent confidence level) or a 10-percent stratified sample of loans modified, drawn within 30 to 45 days of the final modification, Treasury does not specify whether trial and permanent modifications should be sampled separately or whether denied modifications should be sampled at all. According to Treasury, MHA-C has suggested to servicers that their quality assurance procedures should include evaluations of the whole HAMP population, including those in trial modifications and those denied HAMP, but servicers receive this feedback only after MHA-C completes its compliance reviews. Only 4 of the 10 servicers we interviewed separately sampled active trial modifications, approved permanent modifications, denied trial modifications, and denied permanent modifications, a methodology that allowed them to review statistically significant samples within each of these categories. Three of the servicers we interviewed did not review a representative sample of approved trial modifications, and two of the servicers did not review a representative sample of denied modifications. In addition, one servicer we interviewed did not sample its HAMP modifications separately from its proprietary modifications and therefore reviewed too few HAMP modifications to result in HAMP- specific findings. Treasury guidelines also do not specify required areas of review, and we found variations in the content of servicers’ quality assurance reviews. For example, while most servicers we interviewed recalculated borrowers’ income for the loans that they sampled as part of their quality assurance procedures, half of the servicers did not review the inputs for the NPV model despite the key role that the model plays in determining whether or not a borrower qualifies for HAMP. In addition, while 8 of the 10 servicers we interviewed performed some type of quality assurance review on denied HAMP modifications, one of these servicers focused its reviews only on whether denial letters were sent to the borrowers and not on whether the borrowers were appropriately denied HAMP. 
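To illustrate how the two sampling options in Treasury's quality assurance guidance can differ in practice, the sketch below computes a sample size for a 95 percent confidence level and compares it with a 10 percent sample. The margin of error, expected error rate, and loan volume are our assumptions for illustration; the guidance does not specify them.

    # Illustrative comparison of the two quality assurance sampling options (assumed parameters).
    import math

    def stat_sample_size(population, expected_error_rate=0.10, margin_of_error=0.05, z=1.96):
        """Approximate sample size for a 95 percent confidence level, with finite-population correction."""
        n0 = (z ** 2) * expected_error_rate * (1 - expected_error_rate) / (margin_of_error ** 2)
        return math.ceil(n0 / (1 + (n0 - 1) / population))

    def ten_percent_sample(population):
        return math.ceil(0.10 * population)

    monthly_modifications = 2_000                       # hypothetical servicer volume
    print(stat_sample_size(monthly_modifications))      # about 130 loans
    print(ten_percent_sample(monthly_modifications))    # 200 loans

Either approach, as described above, samples loans drawn within 30 to 45 days of final modification; whether trial, permanent, and denied modifications are sampled separately is left to the servicer.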
As part of its HAMP compliance procedures, MHA-C has outlined more specific expectations for what servicers should include in their internal quality assurance reviews, but these expectations are not published or shared with servicers prior to their MHA-C compliance reviews. Without more specific guidance in this area from Treasury, some servicers may continue to have less robust quality assurance procedures and thereby risk not identifying practices that may lead to inequitable treatment of borrowers or harm taxpayers through greater potential for fraud or waste in the program. Treasury has directed HAMP servicers to have procedures and systems in place to respond to HAMP inquiries and complaints and to ensure fair and timely resolutions. However, some servicers were not systematically tracking HAMP complaints or their resolutions, making it difficult for Treasury to determine whether this requirement was being met. For example, according to Treasury, a compliance review conducted by MHA- C in the fall of 2009 cited a servicer for not tracking, monitoring, or reporting HAMP-specific complaints. In the absence of an effective tracking system, the compliance agent could not determine whether the complaints had been resolved. Similarly, several of the servicers we interviewed indicated that they tracked resolutions only to certain types of complaints. For instance, several servicers told us that they tracked only written HAMP complaints and handled these written complaints differently depending on the addressee. Without tracking all complaints, it is not possible for any internal or external review to determine whether complaints had been properly handled. Fannie Mae, in its role as the administrator for HAMP, has contracted with the HOPE Hotline to handle incoming borrower calls about HAMP. Borrowers may obtain information about the program and assess their preliminary eligibility, or discuss their individual situations, which may include complaints about their servicer or about potentially incorrect denials. Borrowers calling the hotline with a HAMP complaint can be transferred to a housing counseling agency approved by the Department of Housing and Urban Development (HUD), and when the complaint pertains to a borrower assertion that they have been wrongfully denied a modification or that their servicer has not applied program guidelines appropriately, the borrower is transferred to the Making Home Affordable (MHA) Escalation Team, which is housed within a HUD-approved counseling agency. If additional intervention is needed, the counselor is to “escalate” the complaint to the housing counseling agency’s management (fig. 2). As of mid-April 2010, more than 37,000 borrower complaints had been escalated to the MHA Escalation Team, and an unknown number had been escalated to the housing counseling agency’s management. Through mid-April 2010, more than 4,000 calls to the HOPE Hotline were about potentially incorrect denials for a HAMP modification. According to Fannie Mae, between January and April 2010 the housing counseling agency that handles HOPE Hotline escalations resolved 99 percent of its complaints within 4 days. Complaints that the counseling agency’s management cannot resolve are referred to an escalation team within Fannie Mae known as the HAMP Solution Center, which also handles escalations on behalf of borrowers referred by housing counselors and government agencies outside of the HOPE hotline. As of April 1, 2010, more than 3,700 complaints had been escalated to this team. 
Of these escalated complaints, nearly 2,900 had been resolved, with 19 percent of the resolved escalations resulting in the initiation of a trial or permanent modification and approximately 35 percent in a determination of ineligibility. An additional 17 percent were referred back to the servicers or the HOPE Hotline, and the remaining 29 percent had other outcomes—for example, some were referred to other loss mitigation alternatives, and no action was taken on others. Fannie Mae has set a goal of 7 business days for the HAMP Solution Center to resolve complaints, but as of mid-April 2010, the average resolution time was 23 days. It is unclear whether the HOPE Hotline and escalation processes are effective mechanisms for resolving concerns about potentially incorrect HAMP denials. At each level of the escalation process, the party handling the complaint works with the servicer and the borrower (or borrower advocate) to obtain information or actions that would resolve it. Neither the MHA Escalation Team counselor nor HAMP Solution Center staff review the borrower’s application or loan file; rather, further reviews of borrowers are to be conducted by the servicers. According to Treasury, it would be difficult to obtain borrowers’ loan files because they are so large. Instead, Treasury officials told us that they were working toward providing MHA Escalation Team counselors and HAMP Solution Center staff with access to some information from the loan files, such as whether the investor would allow the loan to be modified under HAMP, that could be used during the escalation process. In addition, Fannie Mae has set up a quality assurance process for housing counselors who handle MHA escalations that includes monitoring and scoring of counselors’ calls with borrowers. Although this quality assurance process evaluates the way counselors resolve borrowers’ concerns, it is not clear how the evaluators could determine whether the resolutions were correct, since the evaluators also lack access to the borrowers’ loan files. As a result, servicers maintain discretion in determining how to resolve borrowers’ concerns about potentially incorrect HAMP denials. In its April 2010 report on HAMP, the Congressional Oversight Panel raised additional concerns about the effectiveness of the HOPE Hotline, stating that it is unclear whether the HUD-approved housing counseling agencies that work with the HOPE Hotline have sufficient capacity or adequate training to properly handle borrower requests for assistance. While the HOPE Hotline escalation process is the primary means for borrowers to raise concerns about their servicer’s handling of their HAMP applications and potentially incorrect denials, Treasury has not explicitly informed borrowers that the hotline can be used for these purposes. For example, the Making Home Affordable Web site states only that the HOPE Hotline provides help with the program and no-cost access to counselors at a HUD-approved housing counseling agency. Treasury also requires that servicers provide information in their denial letters about the HOPE Hotline, with an explanation that the borrower can seek assistance at no charge from a counselor at a HUD-approved housing counseling agency and can request assistance in understanding the denial notice.
Neither of these communication mechanisms fully informs borrowers that they can call the HOPE Hotline to voice concerns about their servicer’s performance or decisions and therefore may limit the number of borrowers who use the hotline for these purposes. For example, as of mid- April 2010, less than 2 percent of the more than 48,000 calls to the hotline were from borrowers who felt they had wrongfully been denied under the Making Home Affordable program, which could include HAMP. Treasury has taken some steps to ensure that servicers comply with HAMP program requirements, including those related to the treatment of borrowers, but has yet to establish specific consequences or penalties for noncompliance with HAMP guidelines. We first reported in July 2009 that Treasury had not yet formalized a policy to assess remedies for noncompliance among servicers. The HAMP servicer participation agreement describes actions that Fannie Mae, as program administrator (at Treasury’s direction), may take if a servicer fails to perform or comply with any of its material obligations under the program, but does not lay out the specific conditions under which these actions should be taken. In October 2009, Treasury established the HAMP Compliance Committee to monitor the performance and activities of servicers based on information gathered by Fannie Mae, MHA-C, and others. According to Treasury, the compliance committee—comprised of staff from Treasury, Fannie Mae, and MHA-C—has drafted a policy to establish consequences for servicer noncompliance with HAMP program requirements. Treasury officials told us that the policy was initially approved in October 2009, but following an internal review the compliance committee determined that it needed more experience with servicers’ performance before finalizing the policy. The committee is still redrafting the policy, and Treasury expects that it will be internally reviewed again in June 2010. Until the policy is finalized, the committee has instructed MHA-C to report all issues of servicer noncompliance to the committee which then evaluates these issues on a case-by-case basis, leaving open opportunities for inconsistencies in how incidences of noncompliance are remedied. According to Treasury, no financial remedies have been issued to date, though Treasury has required MHA-C to perform more targeted reviews, as well as directed MHA-C to require some servicers to take action to correct areas of noncompliance. In its April report on HAMP, the Congressional Oversight Panel recommended that Treasury ensure compliance through established enforcement mechanisms that provide a clear message of the consequences for servicer actions to increase program accountability. Without standardized remedies for noncompliance, Treasury risks inconsistent treatment of servicer noncompliance and lacks transparency with respect to the severity of the steps it will take for specific types of noncompliance. In our testimony on March 25, 2010, we noted that Treasury faced several additional challenges as it continues to implement HAMP. These challenges include (1) converting trial modifications to permanent status, (2) addressing the growing issue of negative equity, (3) reducing redefaults among borrowers with modifications, and (4) ensuring program stability and effective program management. 
While Treasury has taken some steps to address these challenges, such as announcing a principal reduction program under HAMP and finalizing the second-lien modification program, it needs to expeditiously finalize and implement remaining programs in a manner that ensures transparency and accountability. Our review of HAMP suggests that potential concerns exist in the areas of program stability and adequacy of program management as Treasury continues to add or revise HAMP-funded programs. HAMP servicers reported a wide range of conversion rates and gave a variety of reasons to explain why trial modifications were not converting to permanent modifications. Through the end of May 2010, servicers reported conversion rates ranging from 11 percent to 86 percent. Furthermore, a few servicers reported that more than half of their active trial modifications had been in the trial period for more than 6 months. The 10 servicers we contacted reported conversion rates ranging from 1 percent to 57 percent for non-GSE HAMP modifications that had been in trial periods for 3 or more months as of December 31, 2009 (fig. 3). Of these 10 servicers, the 3 we contacted that required borrowers to provide full documentation of their income before starting trial modifications reported the highest conversion rates (38 percent to 57 percent). The seven servicers that used stated income to determine eligibility for trial modifications had conversion rates ranging from 1 percent to 18 percent. We asked these servicers for the percentages of nonconversions that had resulted from incomplete or problematic documentation, missed trial period payments, or having to wait for a servicer to take action to complete the conversion. Several of the servicers reported that these scenarios were responsible for fewer than half of their nonconversions (fig. 3). Not surprisingly, the servicers that used verified income reported lower rates of nonconversions because of incomplete or problematic documentation (1 percent to 14 percent) compared with the servicers that used stated income (4 percent to 58 percent). Servicers also reported a wide range of nonconversions that could be attributed to missed payments during trial modifications—roughly 2 percent to more than 70 percent. However, 9 of the 10 servicers reported that these types of nonconversions accounted for less than a quarter of the total, and the highest percentage (71 percent) was reported by a servicer that primarily serviced subprime loans. Finally, some servicers reported having borrowers who had submitted all documentation and made all trial payments but were waiting on action from the servicer to receive permanent modifications. For example, one servicer reported that nearly a third of borrowers who had been in trial modifications for at least 3 months, but had not been converted to permanent modifications, were in this situation. In November 2009, Treasury launched a conversion campaign and revised the first-lien HAMP guidelines in an effort to address the challenges associated with converting trial modifications to permanent modifications. The conversion campaign included a temporary review period lasting through January 31, 2010, that did not allow servicers to cancel trial modifications for any reason other than failure to meet HAMP property requirements (for example, if the property was not owner-occupied). 
In addition, Treasury required the eight largest servicers to submit conversion action plans that included strategies such as having people knock on doors to collect missing documentation from borrowers, having call center staff follow up on trial payments, and developing call scripts to include a description of incentives available to borrowers after completion of the trial period. Treasury also formed “SWAT” teams composed of Treasury and Fannie Mae staff to visit large servicers’ offices and offer on-site assistance with conversions. During the conversion campaign, the number of new conversions each month increased from roughly 26,000 in November to roughly 35,000 in December and roughly 50,000 in January. To address the specific challenge of obtaining complete documentation from borrowers, Treasury has made several changes to streamline and improve documentation requirements. In October 2009, Treasury announced a streamlining of required documentation that, among other things, allows borrowers to use a standard application form that incorporates income, expense, and hardship information. Treasury further simplified the documentation requirement in January 2010 when it announced that pay stubs used to verify income no longer needed to be consecutive, provided the pay stubs included year-to-date income and the servicer judged that the borrower’s income had been accurately established. While the streamlining of documentation could make it easier for certain borrowers to provide all required documentation, thereby improving conversion rates, it could also increase the risk of fraud or abuse in the program. Also in January 2010, Treasury announced that beginning in mid-April, servicers would be required to evaluate borrowers for trial modifications based on fully documented income. While using fully documented income will potentially be a significant change for some servicers, particularly given Treasury’s July 2009 statement that servicers should evaluate borrowers for trial modifications based on stated income, it could help improve conversion rates. As we have seen, among the 10 servicers we spoke with, the 3 already requiring full documentation up front generally reported higher conversion rates. However, converting trial modifications continues to be a challenge. As of the end of May 2010, Treasury data showed that only 31 percent of trial modifications started at least 3 months prior, and therefore potentially eligible for conversion, had converted to a permanent modification. In fact, the total number of permanent modifications started through May 2010 was less than the total number of trial modifications canceled during the same time period (roughly 347,000 versus 430,000). Furthermore, as servicers focused on conversions and began the transition to evaluating borrowers using verified income, the number of trial modifications begun has decreased significantly. In May 2010, roughly 30,000 trial modifications were started, compared with nearly 63,000 in March 2010 (fig. 4). As of the end of May 2010, Treasury reported that there were roughly 1.7 million estimated eligible 60-day delinquent borrowers. According to Treasury officials, Treasury is not planning to take any additional steps to address nonconversions because the agency’s current focus is on clearing the backlog of trial modifications awaiting conversion decisions. The officials noted that servicers had committed to clearing their backlogs by the end of June 2010. 
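To make the conversion figures cited above concrete, the short sketch below shows the arithmetic we assume underlies the reported rate, that is, permanent modifications started as a share of trial modifications aged at least 3 months and therefore potentially eligible for conversion. The function name and the servicer figures are hypothetical illustrations, not Treasury data.

def conversion_rate(permanent_started, trials_aged_3_months_or_more):
    # Assumed definition: permanent modifications as a share of trials old
    # enough to be eligible for conversion.
    return permanent_started / trials_aged_3_months_or_more

# Hypothetical servicer: 5,000 trials aged 3 or more months, 1,550 converted.
print(f"{conversion_rate(1_550, 5_000):.0%}")  # prints 31%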
Going forward, Treasury anticipates that the requirement for up-front documentation will reduce the challenge of converting trial modifications to permanent modifications. Borrowers may not convert to permanent modifications for several reasons, including ineligibility for HAMP and failure to make the required trial modification payments. Some borrowers who do not receive permanent modifications may be eligible for other non-HAMP loan modification programs that servicers offer or for alternatives to foreclosure such as those offered under the HAFA program. For example, Treasury reported that through April 2010, among the top eight HAMP servicers, nearly half of borrowers who had trial modifications canceled received non-HAMP loan modifications. The proportion of homeowners who owe more than the value of their homes continues to be high in many states and, as we reported in July 2009, HAMP as initially designed may not address the growing number of foreclosures among borrowers with negative equity (“underwater” borrowers). According to data reported by CoreLogic, a company that collects and analyzes U.S. real estate and mortgage data, more than 11.2 million borrowers across the country (24 percent) had negative equity at the end of the first quarter of 2010. In addition, of borrowers with loan-to-value ratios greater than 150 percent, more than 14 percent had received a notice of default—the first step in the public recording of default—compared with roughly 2 percent of those with at least some equity in their homes. As we have seen, according to Fannie Mae, borrowers have loan-to-value ratios of roughly 150 percent, on average, after a HAMP modification. While HAMP’s initial design focused on bringing mortgage payments to an affordable level, severe levels of negative equity and expectations that house prices will continue to decline may lead some borrowers to choose to default on their mortgage payments even if the payments are affordable or could be modified to affordable levels. In an effort to help address the challenge of negative equity, in March 2010 Treasury announced a principal reduction program under HAMP. According to the initial program guidelines issued in June 2010, the principal reduction HAMP program will allow some underwater homeowners to reduce the balance owed on their mortgage in steps over 3 years, if they remain current on their payments. Servicers will be required to run both the standard NPV test and an alternative that considers principal reduction and to compare the results. Under the alternative approach, servicers will assess the NPV of a modification that starts by forbearing the principal balance to 115 percent of the home’s value, or to an amount necessary to bring the borrower’s payments to 31 percent of income, whichever requires less principal reduction. If forbearing principal to 115 percent of the home’s value does not reduce monthly payments to 31 percent of income, the servicer will follow HAMP’s standard procedures for modifying loans—lowering the interest rate, extending the term of the loan, forbearing additional principal, or a combination of these steps in this order. If the NPV under this approach is higher than it is for a modification without principal forbearance, the servicer will have the option—but will not be required—to forgive principal. 
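The alternative evaluation described above can be illustrated with a short sketch. This is a minimal illustration under stated assumptions (a fully amortizing fixed-rate payment and hypothetical function and variable names); it is not Treasury's alternative NPV model, which had not been released at the time of our review.

def monthly_payment(balance, annual_rate, months):
    # Fully amortizing monthly payment on the interest-bearing balance.
    r = annual_rate / 12
    return balance / months if r == 0 else balance * r / (1 - (1 + r) ** -months)

def alternative_forbearance(balance, home_value, monthly_income, annual_rate, months):
    # Step 1 of the alternative approach: forbear principal down to 115 percent
    # of the home's value, or to the balance whose payment equals 31 percent of
    # income, whichever requires less principal reduction (the larger target).
    affordable_payment = 0.31 * monthly_income
    r = annual_rate / 12
    affordable_balance = (affordable_payment * months if r == 0
                          else affordable_payment * (1 - (1 + r) ** -months) / r)
    target = min(balance, max(1.15 * home_value, affordable_balance))
    forbearance = balance - target
    # If forbearing to 115 percent of value still leaves the payment above
    # 31 percent of income, the standard HAMP waterfall (rate reduction, term
    # extension, additional forbearance) would apply next.
    needs_standard_waterfall = monthly_payment(target, annual_rate, months) > affordable_payment + 0.01
    return forbearance, needs_standard_waterfall

# Hypothetical case: $300,000 balance on a home worth $200,000, a 6.5 percent
# note rate, 28 years (336 months) remaining, and $4,500 in monthly income.
forbearance, fallback = alternative_forbearance(300_000, 200_000, 4_500, 0.065, 336)
print(round(forbearance), fallback)  # roughly 70000 True under these assumptions

In this hypothetical case, forbearing to 115 percent of the home's value (about $70,000 of forbearance) still leaves the payment above 31 percent of income, so the servicer would proceed to the standard waterfall before comparing the two NPV results.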
Servicers will initially treat the reduced principal amount as forbearance and will forgive the forborne amount in three equal steps over 3 years, as long as the homeowner remains in good standing. Investors will receive incentives for reducing principal, and the incentive amounts vary based on the delinquency level of the borrower and the current loan-to-value ratio. Servicers will be required to establish written policies detailing when principal reduction will be offered, and, according to Treasury, MHA-C will review these policies to ensure that similarly situated borrowers are treated equitably with respect to principal reduction. Some program details continue to be unspecified. In particular, the alternative NPV model has not yet been specified, and it is unclear how it will evaluate the impact of principal reduction, including the changes in the likely redefault rate of borrowers receiving principal reductions. According to Treasury, the alternative NPV model will be ready in September or October 2010. In addition, although the original program announcement stated that servicers would be required to retroactively consider for principal forgiveness borrowers who had already received a trial or permanent modification, it is unclear whether and how servicers will be required to do this. According to Treasury, additional guidance addressing this issue will be issued in July 2010. Servicers will be required to start evaluating borrowers for principal reduction on the later of October 1, 2010, or the implementation date of the new version of the NPV model, though servicers could begin offering principal reduction and receiving incentives as of June 3, 2010. Due to the continued severity of the foreclosure crisis and negative equity problem, Treasury will need to expeditiously finalize all program details. While this program could help some borrowers whose loans are greater than 115 percent of the home’s value, servicers could vary in when they choose to offer principal reduction. In some cases, servicers may reasonably refuse to reduce principal, even when the NPV using principal reduction is higher than the NPV without using it. For example, servicers may have contractual agreements with investors that prohibit principal reduction. According to Treasury, principal reduction is not mandatory because HAMP is a voluntary program and the HAMP Servicer Participation Agreement allows servicers to opt out of material program changes made after the agreement was signed. In addition, the Congressional Oversight Panel reported in April 2010 that allowing servicers to choose whether to offer principal reduction could help limit moral hazard. Specifically, if borrowers do not know whether their servicers will forgive principal, they will not be motivated to change their behavior in order to receive it. According to Treasury, servicers will be required to report to Treasury the NPV outcomes with and without principal reduction, as well as whether the borrower was offered it. Further, Treasury officials noted that beginning in late 2010 or early 2011, public reports on servicer performance will include information such as the proportion of borrowers who were offered principal reduction. 
Because servicers will have significant discretion in whether and when to offer principal reduction under this program, Treasury will need to ensure that public reporting of servicer activity related to principal forgiveness provides sufficient program transparency and addresses potential questions of whether similarly situated borrowers are being treated fairly and consistently. Households with second-lien mortgages are more likely to be underwater than those without second-lien mortgages. According to CoreLogic, in the first quarter of 2010, 38 percent of borrowers with junior liens such as second-lien mortgages were underwater, compared with 19 percent of borrowers with only first-lien mortgages. Offering relief on second-lien mortgages is therefore an important factor in addressing the challenge of underwater borrowers. According to the initial guidelines for the principal reduction program, second-lien holders must agree to reduce principal on the second lien mortgage in the same proportion as the principal reduction on the first lien mortgage. Separately, under the guidelines for 2MP, incentives are offered for the extinguishment or partial extinguishment of second liens. In addition, Treasury announced a new FHA refinancing program, which is expected to be implemented by the fall of 2010 and will allow lenders to refinance underwater first-lien loans into FHA-insured loans if the borrower is current on mortgage payments. This program has been designated up to $14 billion in funds that were originally intended for HAMP and, as with the principal reduction program under HAMP, will be voluntary for servicers. According to initial program descriptions, investors must agree to a principal write-down on the original first-lien loans of at least 10 percent and the combined loan-to-value ratio, which includes both first and junior liens, cannot be greater than 115 percent after the refinancing (97.75 percent for the first lien only). The new FHA refinance option is available only to homeowners who are current on an existing first-lien mortgage that is not insured by FHA. Eligible underwater loans are refinanced into FHA loans on FHA terms based on full documentation, income ratios, and complete underwriting. Total debt including all forms of household debt cannot be greater than approximately 50 percent except for some borrowers with especially strong credit histories. Investors we spoke with supported principal reduction in conjunction with an FHA refinance, because even though they would suffer a loss on the reduction, they would not bear the risk of the borrower redefaulting, as the loan would then be FHA-insured and out of their pools. However, they also noted that the program might reach only a limited number of borrowers as it would only help borrowers who are current on existing first-lien mortgage payments, underwater, and have mortgage payments that could be reduced to 31 percent of income with a loan-to-value ratio for the new loan no greater than 97.75 percent of the appraised value of the home. Treasury has stated that FHA will publish quarterly data on numbers of loans refinanced in this way, including average percentages for loans that are written down and amounts of principal that are reduced. However, Treasury has not yet specified what servicers will be required to report for borrowers considered for the program, including those considered for, but not offered, the refinance. 
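A rough sketch of the eligibility arithmetic in the initial FHA refinance program descriptions appears below. The function name, the reading of the roughly 50 percent limit as a total debt-to-income ratio, and the borrower figures are our own illustrative assumptions rather than program guidance; junior liens are taken as whatever balance remains after any separate second-lien resolution.

def fha_refinance_eligible(first_lien, junior_liens, home_value,
                           write_down, total_debt_to_income,
                           borrower_current=True, first_lien_fha_insured=False):
    # Checks drawn from the initial program descriptions: borrower current on a
    # non-FHA first lien; write-down of at least 10 percent of the first lien;
    # new first-lien LTV no greater than 97.75 percent; combined LTV no greater
    # than 115 percent; total debt roughly capped at 50 percent of income
    # (exceptions possible for especially strong credit histories).
    if not borrower_current or first_lien_fha_insured:
        return False
    if write_down < 0.10 * first_lien:
        return False
    new_first_lien = first_lien - write_down
    if new_first_lien > 0.9775 * home_value:
        return False
    if new_first_lien + junior_liens > 1.15 * home_value:
        return False
    return total_debt_to_income <= 0.50

# Hypothetical borrower: $250,000 first lien and $30,000 second lien on a home
# worth $220,000, a $35,000 write-down, and a 45 percent total debt ratio.
print(fha_refinance_eligible(250_000, 30_000, 220_000, 35_000, 0.45))  # True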
Also, though Treasury has designated up to $14 billion for this program, it has not specified how these funds will be used or the number of borrowers likely to be helped by this program. Finally, Treasury has designated $2.1 billion in HAMP funds for the HFA Hardest-Hit Fund, providing 10 states with the opportunity to design programs to prevent foreclosure and improve housing market stability, potentially including programs to address negative equity. As of May 11, 2010, Treasury had not yet approved any programs under this fund, so the extent to which the programs will address negative equity remains to be seen. The first five states were required to submit proposals on April 16, 2010, and, according to Treasury, it is evaluating them to determine whether they meet the act’s requirements and support its goals of preserving homeownership and protecting housing market stability. However, according to initial proposals, some program efforts may require significant implementation periods. For example, one state reported that some of its program features might not be available until 5 months after Treasury approved the program. To promote transparency, each state HFA will be required to establish monitoring mechanisms and to implement a system of internal controls that minimize the risk of fraud, mitigate conflicts of interest, and maximize operational efficiency and effectiveness. In addition, HFAs will report data to Treasury on a periodic basis, including the metrics that are used to measure program effectiveness against stated objectives. According to Treasury, all program designs will be posted online, along with metrics measuring performance of each HFA program. Treasury has stated that the principal reduction program under HAMP, the FHA refinance program, and the HFA Hardest-Hit Fund will be the primary efforts to address the challenge of negative equity, and no new programs are expected. Limited information is available on redefaults on permanent modifications to date, largely because few trials have become permanent. Treasury’s expectations of the number of redefaults may be changing, although Treasury has not specified the number of successful permanent HAMP modifications it expects. Through the end of May 2010, 6,233 of the 346,816 permanent modifications had redefaulted and 124 loans had been paid off. Treasury has begun to publish the debt levels of those receiving permanent HAMP modifications. As we have seen, as of the end of May 2010, these borrowers had a median total debt-to-income ratio of roughly 64 percent after the HAMP modification. In April 2010, the Congressional Oversight Panel noted that with such high debt levels, a small disruption in income or increase in expenses could result in many redefaults. Treasury said that it would examine redefault rates after borrowers had been in HAMP permanent modifications for longer than 3 months. As we reported in July 2009, the redefault rates Treasury anticipated at the inception of HAMP were consistent with the Office of the Comptroller of the Currency’s (OCC) and the Office of Thrift Supervision’s (OTS) analyses of loan modifications, as well as with the Federal Deposit Insurance Corporation’s estimates for the IndyMac loan modification program. At the time, OCC and OTS reported that about 52 percent of modifications redefaulted after 12 months, and IndyMac estimated a redefault rate of 40 percent. 
However, more recently Treasury officials told us that the redefault rate could be higher for a typical HAMP modification, noting that borrowers entering the HAMP program to date had low credit scores and high loan-to-value ratios relative to those in other modification programs, further increasing the risk of redefault. As noted, Treasury has not publicly disclosed its redefault estimates or the number of successful permanent modifications it expects. In December 2008, we noted that limiting the likelihood of redefault would be a significant challenge as Treasury began its efforts to establish a loan modification program, and Treasury continues to struggle with this challenge. As we pointed out, Treasury’s primary effort to limit redefaults under the HAMP first-lien program was to require that borrowers with high total debt agree to obtain counseling. However, it is unclear how many borrowers have actually received this counseling, and Treasury does not plan either to monitor whether borrowers actually obtain counseling or to assess the requirement’s effectiveness in limiting redefaults. According to Fannie Mae, the HOPE Hotline had received 104,253 calls about this counseling through April 4, 2010, but Fannie Mae did not track whether these borrowers actually obtained counseling. However, the best available information shows that few borrowers have obtained such counseling to date. Specifically, according to NeighborWorks, whose National Foreclosure Mitigation Counseling network consists of roughly 1,700 entities that must be either HUD-approved counseling agencies or state housing finance agencies, as of March 2010 it had funded only about 2,700 HAMP counseling sessions for borrowers with high total debt. This further underscores the importance of monitoring and assessing HAMP’s counseling requirement, as we recommended in July 2009. In March 2010, Treasury issued revised guidelines for the HAMP second-lien program, 2MP, which, to the extent that it reduces borrowers’ total debt, could help limit redefaults on first-lien modifications. However, although a second-lien modification program was initially announced at the inception of HAMP, Treasury has yet to issue estimates of the number of borrowers that the program could help. Treasury officials noted that they would examine the redefault rates of borrowers receiving 2MP modifications. As of June 2010, seven servicers have signed agreements to modify or extinguish second liens under HAMP. However, Treasury will not begin making incentive payments or tracking modifications under 2MP until the fall of 2010. Until recently, servicers may not have been able to identify whether the first liens associated with second liens in their portfolios had been modified by the first-lien servicer if they did not also service the first lien. First liens must be in HAMP trial periods before second liens begin trial modifications, so in order to modify a second lien, a servicer must first know whether the corresponding first lien has been modified. Treasury developed a database to match first and second liens, which, according to Treasury, was ready in May 2010. Under 2MP, non-GSE servicers can receive up-front and pay-for-success incentive payments, borrowers can receive pay-for-performance incentives, and investors can receive payment reduction cost-share incentives. When a borrower’s first lien is modified under HAMP, a participating second-lien servicer must offer to modify the borrower’s second lien. 
The modification steps for 2MP are similar to those for HAMP first-lien modifications. As with first liens, servicers first capitalize accrued interest and servicing advances, then reduce the interest rate, then extend the term of the mortgage, and finally, forbear or forgive principal. However, with second liens, the interest rate is generally reduced to 1 percent; the term is extended to match, at a minimum, the term of the HAMP-modified first lien; and the principal forbearance or forgiveness is expected to be proportional to the amount of principal forbearance or reduction on the first lien. Servicers are not required to reduce principal under 2MP, unless principal was forgiven on the first lien, but may offer principal reduction and will receive additional incentives for doing so. The incentive amount for reducing second liens varies depending on the combined loan-to-value ratio, or the ratio of the first and second liens to the value of the home. The terms of the first-lien modification will be used to determine the terms of the second-lien modification, and no additional evaluation is done to determine eligibility for 2MP. The second-lien servicer relies on the information the borrower provides for the first-lien loan modification. In particular, the second-lien servicer is not required to perform an additional NPV model of the related second-lien mortgage, since it can be reasonably concluded that the combined modifications will result in a positive NPV outcome if the first lien was NPV positive. According to Treasury, because the HAMP-modified first-lien mortgage is delinquent or facing imminent default, the servicer may reasonably conclude that the borrower is in imminent danger of defaulting on the second lien. Further, Treasury has stated that postforeclosure recoveries on second liens are likely to be minimal if the first lien is delinquent or at risk of default, so it is reasonable for servicers to conclude that modifications of second liens are likely to result in higher expected cash flows than foreclosure. While servicers were performing loan modifications prior to HAMP, HAMP is a new, complex, and large-scale program that places a significant amount of taxpayer dollars at risk. We have previously reported that Treasury faced challenges in implementing first-lien modifications, including finalizing program guidelines and establishing a comprehensive system of internal controls. Since then, Treasury has announced several new programs and program features. Going forward, in designing and implementing the programs, Treasury could benefit from lessons learned from the initial design and implementation of HAMP. In particular, it will be important for Treasury to expeditiously develop and implement these programs while also developing sufficient program planning and implementation capacity, meaningful performance measures, and appropriate risk assessments in accordance with standards for effective program management. In its April 2010 report, the Congressional Oversight Panel likewise noted that Treasury’s response has lagged behind the pace of the crisis and underscored the need for Treasury to get its new initiatives up and running quickly and to ensure program accountability. We will continue to monitor Treasury’s implementation and management of HAMP-funded programs as part of our ongoing oversight of TARP to ensure that new programs are appropriately designed and operating as intended. Program planning and implementation capacity. 
In July 2009, we recommended that Treasury finalize a comprehensive system of internal control for HAMP. According to GAO’s Standards for Internal Control in the Federal Government, effective internal controls include activities to ensure the appropriate planning and implementation of government programs. Effective program planning includes having complete policies, guidelines, and procedures in place prior to program implementation. As we noted in March 2010, servicers told us that they faced significant challenges implementing HAMP first-lien modifications because of numerous changes to program guidance. For example, Treasury’s new requirement that servicers evaluate borrowers for trial modifications using verified rather than stated income will likely mean that some servicers will need to alter their policies and processes, as well as retrain staff. Treasury officials told us that they did not anticipate any new programs or significant changes to HAMP going forward. Nonetheless, to avoid potential implementation challenges with the newly announced programs, Treasury must balance the need to fully establish guidelines and reporting requirements before servicers implement them against the need to roll out these programs as quickly as possible. In addition, GAO’s Internal Control Management and Evaluation Tool, which is based on GAO’s Standards for Internal Control in the Federal Government, states that program managers must identify and define tasks required to accomplish particular jobs and fill all necessary positions. In July 2009, we recommended that Treasury place a high priority on fully staffing vacancies in the Homeownership Preservation Office (HPO) and evaluating staffing levels and competencies. However, Treasury has reduced staffing levels in HPO from 36 to 29 full-time positions without formally assessing staffing needs or determining whether HPO staff have the necessary skills to govern the program effectively. Treasury officials told us that Treasury was in the process of approving two additional positions for administering the HFA Hardest-Hit Fund. In addition, they noted that the responsibilities of the policy development staff in HPO would be largely concluded after the final policy documents were issued, and these staff would then be able to support program implementation. However, as of May 14, 2010, Treasury still had not conducted a workforce assessment of HPO, despite the office’s additional administrative responsibilities for the recently announced FHA refinancing program and ongoing HAMP implementation, including first- and second-lien modifications, HAFA, principal reductions, and forbearance for unemployed borrowers. We noted in July 2009 that having enough staff with appropriate skills was essential to governing HAMP effectively, and we continue to believe that it will be an important factor in Treasury’s ability to design and implement the new HAMP-funded programs both quickly and effectively. According to Treasury, its financial agents—Fannie Mae and MHA-C—are developing a two-stage approach to assessing the capacity and readiness of the top 25 HAMP servicers to implement the recently announced programs. First, servicers will conduct a self-assessment of their readiness using a HAMP checklist. According to Treasury, the self-assessment will be provided to Fannie Mae for review, and Fannie Mae will provide further training, additional guidance, and other support as needed. 
Treasury officials told us that the second stage would involve on-site walk-throughs conducted by MHA-C that will consist of discussions with management, reviews of documentation such as project plans and testing results, and an end-to-end walk-through of processes. Treasury officials told us that as of the end of April 2010, 21 servicers had been sent a self-assessment on capacity to implement HAFA, and that as of May 2010 on-site readiness reviews for HAFA and 2MP had begun. However, Treasury has not specified a time frame for the completion of either of the two stages of readiness assessment for the other recently announced HAMP-funded programs. Meaningful performance measures. We reported in July 2009 that Treasury must establish specific and relevant performance measures that will enable it to evaluate the program’s success against stated goals in order to hold itself and servicers accountable for these TARP-funded programs. As noted in GPRA, meaningful and useful performance measures should focus on program outcomes and provide a basis for comparing actual program results with performance goals. However, Treasury did not develop performance measures before implementing the first-lien modifications. According to Treasury, revised performance measures were drafted in March 2010, a year after program implementation. Performance measures include process measures such as the number of servicers participating in the program, as well as outcome measures such as average debt-to-income ratios (pre- and postmodification) and redefault rates. Treasury had not yet developed expected performance measures for 2MP, or the recently announced principal reduction, forbearance for unemployed borrowers, or FHA refinance programs as of May 14, 2010. To ensure clear standards for accountability for the newly announced programs, Treasury will need to establish specific outcomes-based performance measures at the outset of the programs. For example, to assess the success of the HAMP principal reduction and FHA refinance programs, Treasury will need to develop measures and goals to assess the extent to which these programs are helping borrowers with negative equity and limiting foreclosures among this population—Treasury’s stated goals for the program. Similarly, early development of meaningful performance measures and goals could help Treasury evaluate the extent to which the 3-month forbearance program is helping unemployed borrowers avoid foreclosure. Such measures could be used to determine whether program parameters, including the amount of time allowed for borrowers to find new employment, are appropriate and sufficient for ensuring program success. As noted by both the Congressional Oversight Panel and SIGTARP, it will be imperative for Treasury to clearly define performance measures for HAMP to ensure program accountability. Furthermore, Treasury has yet to develop benchmarks, or goals, for specific performance measures. According to Treasury, draft first-lien performance measures include metrics such as conversion and redefault rates. But in the absence of predefined goals to indicate what Treasury considers acceptable conversion and redefault rates, assessing the results of these measures will be difficult. Likewise, as Treasury develops performance measures for the recently announced HAMP-funded programs, it must also establish benchmarks for them. Appropriate risk assessments. 
Also in our July 2009 report, we noted that while some processes and internal controls had been developed during the early stages of HAMP’s implementation, many more controls needed to be finalized as the program progressed to ensure that taxpayer dollars were safeguarded, program objectives achieved, and program requirements met. The adequacy of Treasury’s internal controls for HAMP continues to be an area of concern as Treasury refines the first-lien program and adds new HAMP programs. According to GAO’s Standards for Internal Control in the Federal Government, there are five key components or standards for effective internal control: (1) the control environment, (2) risk assessment, (3) control activities, (4) information and communications, and (5) monitoring. The internal control standards state that agencies must identify the risks that could impede the success of the newly announced programs and determine appropriate methods of mitigating these risks. After risks have been identified, the agency should undertake a thorough and complete analysis of the possible effects of the risks that includes an assessment of how likely the risks are to materialize. Finally, agencies should determine how best to manage or mitigate risk and what specific actions they should take. Treasury, in conjunction with Fannie Mae as the HAMP program administrator, has developed risk control matrixes that identify various risks associated with the first-lien modification process, such as potential inaccuracies in accruals of incentive payments or data reporting, and the controls they have developed to mitigate the identified risks. However, other programmatic risks may exist that Treasury has not addressed. For example, as noted above, Treasury requires that borrowers demonstrate a hardship to qualify for HAMP but does not require servicers to verify the hardship. For instance, if the borrower indicates that the household has experienced a decrease in income, the servicer is not required to obtain documentation on past income to compare to current income. As a result, taxpayer funds may be used to support modifications of borrowers who have not in fact experienced a hardship. Furthermore, in December 2008 we noted that one of the key challenges for loan modification programs was mitigating the risk of moral hazard—the possibility that borrowers might choose to default when they otherwise would not in order to benefit from the loan modification. Requiring borrowers to demonstrate hardship is one means of mitigating this risk, but by not requiring servicers to verify the hardship, Treasury has not fully realized the potential benefits of this control. Our prior work looking at the implementation of the first-lien program underscores the importance of fully identifying and assessing the potential risks associated with the newly announced HAMP-funded homeowner assistance efforts. Further, Treasury needs to develop appropriate controls to mitigate those risks prior to the implementation date for the newly announced HAMP programs. For example, moral hazard is of particular concern for the programs that include principal reduction. Treasury has built some features into HAMP to manage the risk of moral hazard, such as requiring a positive NPV result in order to have principal reduced, something that borrowers cannot easily calculate in advance. Further, the principal reduction is initially treated as forbearance and forgiven in three equal steps over 3 years as long as the homeowner remains current on payments. 
Under the FHA refinance program, borrowers must be current on their mortgage payments to qualify, eliminating the risk that they will default on their mortgages when they otherwise would not in order to qualify for this program. However, the issue of moral hazard is one that will require Treasury’s continued attention to ensure that the safeguards that are put in place sufficiently limit this risk. The adequacy of Treasury’s risk assessments and control activities for the newly announced HAMP-funded programs is an area that we plan to monitor and report on as part of our ongoing oversight of Treasury’s use of TARP funds to preserve homeownership and protect property values. Treasury’s HAMP program is part of an unprecedented response to a particularly difficult time in our nation’s mortgage markets. The Emergency Economic Stabilization Act called for Treasury to, among other things, preserve homeownership and protect home values, and HAMP continues to be Treasury’s cornerstone effort for doing this. However, more than a year after Treasury’s initial announcement of HAMP and the program’s goal of bringing consistency to foreclosure mitigation, servicers continue to treat borrowers seeking to avoid foreclosures inconsistently, in part because of a lack of specific guidelines from Treasury. In particular, Treasury did not specify requirements for soliciting potentially eligible borrowers for HAMP during the first year of the program, even though outreach is important in the early phases of program implementation. While Treasury has recently issued more specific requirements on communicating with borrowers, it is continuing to finalize measures of servicer performance in this area. In addition, while Treasury’s stated goals are to standardize the loan modification process and reach borrowers before they are delinquent on their loans, Treasury’s lack of guidelines on how servicers should determine whether borrowers who are current in their payments are in imminent danger of default has led to significant differences in how servicers evaluate these borrowers for HAMP. By specifying clear and specific guidelines, such as those implemented by the GSEs for their HAMP modifications, Treasury could better ensure that similarly situated borrowers receive equitable treatment under HAMP. Furthermore, Treasury has not fully specified parameters for servicers’ internal quality assurance programs for HAMP and therefore is not maximizing the potential for servicers’ quality assurance procedures to ensure equitable treatment of borrowers. With greater specificity from Treasury on how to categorize loans for sampling and what servicers should be evaluating in their reviews, servicers would be more likely to have robust HAMP quality assurance programs. Finally, although Treasury drafted a policy that established consequences for servicer noncompliance with HAMP requirements in October 2009, as of May 2010 it had not yet finalized the policy. As a result, Treasury lacks transparency and risks inconsistency in how it enforces HAMP servicer requirements. Treasury requires servicers to have procedures and systems in place to respond to HAMP complaints and utilizes the HOPE Hotline to escalate borrowers’ concerns about servicers’ handling of HAMP applications and potentially incorrect denials. 
However, because Treasury has not specified requirements on the types of complaints that servicers should track, some servicers are tracking only certain types of complaints, such as those addressed to a company executive. Without consistent tracking of HAMP complaints, Treasury cannot determine with certainty whether servicers are ensuring fair and timely resolutions of HAMP complaints. Treasury has set up the HOPE Hotline escalation process as the primary means for borrowers to raise concerns about their servicer’s performance on the HAMP loan modification request and potentially incorrect denials. But whether this is an effective mechanism to resolve such concerns remains unclear because neither MHA Escalation Team counselors, their quality assurance reviewers, nor HAMP Solution Center staff independently review borrowers’ applications or loan files. As a result, discretion over how to resolve borrowers’ concerns about potentially incorrect HAMP denials largely remains with the servicers. Therefore, Treasury needs to monitor the effectiveness of this escalation mechanism, particularly to resolve potentially incorrect denials, and make improvements to this mechanism or replace it as appropriate. In addition, Treasury has not taken steps to specifically inform borrowers that the hotline can be used to escalate concerns about servicers’ handling of HAMP applications and potentially incorrect denials. As a result, borrowers facing foreclosure who have been told by their servicers that they do not qualify for a HAMP loan modification may feel that they cannot challenge the servicer’s determination and may lose their homes to foreclosures that might have been prevented. As we noted in our March 2010 testimony, Treasury faces several challenges in implementing HAMP going forward, including converting trial modifications to permanent modifications, addressing the growing number of foreclosures among borrowers with negative equity, limiting redefaults among borrowers who receive HAMP modifications, and ensuring adequate program stability and management. While Treasury has taken some steps toward addressing these challenges, the multitude of problems facing U.S. mortgage markets calls for swift and deliberate action, and it remains to be seen how effective Treasury’s efforts will be. For example, to address the challenge of converting trial modifications to permanent modifications, Treasury launched a conversion campaign, streamlined required documentation, and switched to verified income documentation to start a trial. In addition, in March 2010 Treasury announced several potentially substantial new HAMP-funded efforts, but it did not say how many borrowers these programs were intended to reach or discuss the specifics of these programs. In particular, Treasury announced a principal reduction program under HAMP that could help borrowers with negative equity. However, Treasury has stated that principal reduction will be voluntary for servicers, and it will need to ensure that future public reporting on this program provides transparency and addresses potential questions about whether all borrowers are being treated fairly. In our July 2009 report, we made a number of recommendations to improve HAMP’s effectiveness, transparency, and accountability. 
For example, we recommended that Treasury consider methods of monitoring whether borrowers who receive HAMP modifications and continue to have high total household debt (more than 55 percent of their income) obtain the required HUD-approved housing counseling. While Treasury has told us that monitoring borrower compliance with the counseling requirement would be too burdensome, we continue to believe that it is important that Treasury determine whether consumers are actually receiving counseling and whether the counseling requirement is having its intended effect of limiting redefaults. In addition, we recommended that Treasury place a high priority on fully staffing HPO and noted that having enough staff with appropriate skills was essential to governing HAMP effectively. However, Treasury has since reduced the number of HPO staff without formally assessing staffing needs. We believe that having sufficient staff is critical to Treasury’s ability to design and implement HAMP-funded programs both quickly and effectively. We also recommended that Treasury finalize a comprehensive system of internal controls for HAMP that will continue to be important as Treasury implements new HAMP-funded programs. Finally, as Treasury continues with first-lien modifications, and implements 2MP, HAFA, and the newly announced programs, it will be important to adhere to standards for effective program management and to establish sufficient program planning and implementation capacity, meaningful performance measures, and appropriate risk assessments. As we, the Congressional Oversight Panel, and SIGTARP have previously noted, establishing key performance metrics and reporting on individual servicers’ performance with respect to those metrics are critical to the program’s transparency and accountability. Additionally, without preestablished performance measures and goals, Treasury will not be able to effectively assess the outcomes of the newly announced programs. Given the magnitude of the investment of public funds in HAMP, it will be imperative that Treasury take the steps needed to expeditiously implement a prudent design for the remaining HAMP-funded programs. We will continue to monitor Treasury’s implementation and management of HAMP-funded programs as part of our ongoing oversight of TARP to ensure that such programs are appropriately designed and operating as intended. 
As part of Treasury’s efforts to continue improving the transparency and accountability of HAMP, we recommend that the Secretary of the Treasury take actions to expeditiously: establish clear and specific criteria for determining whether a borrower is in imminent default to ensure greater consistency across servicers; develop additional guidance for servicers on their quality assurance programs for HAMP, including greater specificity on how to categorize loans for sampling and what servicers should be evaluating in their reviews; specify which complaints servicers should track to ensure consistency and to facilitate program oversight and compliance; more clearly inform borrowers that the HOPE Hotline may also be used if they are having difficulty with their HAMP application or servicer or feel that they have been incorrectly denied HAMP, monitor the effectiveness of the HOPE Hotline as an escalation process for handling borrower concerns about potentially incorrect HAMP denials, and develop an improved escalation mechanism if the HOPE Hotline is not sufficiently effective; finalize and issue consequences for servicer noncompliance with HAMP requirements as soon as possible; report activity under the principal reduction program, including the extent to which servicers determined that principal reduction was beneficial to investors but did not offer it, to ensure transparency in the implementation of this program feature across servicers; finalize and implement benchmarks for performance measures under the first-lien modification program, as well as develop measures and benchmarks for the recently announced HAMP-funded homeowner assistance programs; and implement a prudent design for remaining HAMP-funded programs. We provided a draft of this report to Treasury for its review and comment. We received written comments from the Assistant Secretary for Financial Stability that are reprinted in appendix III. We also received technical comments from Treasury that we incorporated into the report as appropriate. In its written comments, Treasury stated that it would review our final report and provide Congress with a detailed description of the actions that Treasury had taken and intended to take regarding the recommendations in the report. Treasury also stated that while GAO notes the progress Treasury has made in implementing HAMP, it believed that the draft report did not sufficiently take into account the scope and complexity of the challenges Treasury faced when it developed and implemented a modification initiative, the scale of which had never been previously attempted. We acknowledge that the HAMP program is part of an unprecedented response to a particularly difficult time in our nation’s mortgage markets. As noted by Treasury when it first announced the HAMP framework in February 2009, the deep contraction in the economy and the housing market had devastating consequences for homeowners and communities throughout the country. However, more than a year after Treasury first announced HAMP, the number of permanent modifications has been limited and key HAMP program components have not been fully implemented. Treasury noted in its written comments that the servicing industry did not have the capacity or infrastructure needed to implement a national loan modification program such as HAMP. 
This issue of servicer capacity to successfully implement HAMP was one that we raised in our July 2009 report as needing Treasury’s attention and remains a concern as Treasury implements the additional programs and components it has announced to supplement the HAMP first-lien modification program. While Treasury has taken some steps to address the challenges we and others have previously identified, the continuing problems in the U.S. mortgage markets call for swift and deliberate action. Given the challenges involved and the magnitude of public funds invested—up to $50 billion in TARP funds and $25 billion in GSE funds—it remains to be seen how effective Treasury’s efforts will be. As part of our ongoing monitoring of Treasury’s implementation of TARP, we will continue to monitor Treasury’s progress in implementing these and other planned initiatives in future reports. We are sending copies of this report to the Congressional Oversight Panel, Financial Stability Oversight Board, Special Inspector General for TARP, interested congressional committees and members, Treasury, the federal banking regulators, and others. This report is also available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact Richard J. Hillman at (202) 512-8678 or hillmanr@gao.gov, Thomas J. McCool at (202) 512-2642 or mccoolt@gao.gov, or Mathew J. Scirè at (202) 512-8678 or sciremj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. To examine servicers’ treatment of borrowers under the Home Affordable Modification Program (HAMP), between November 2009 and March 2010, we spoke with and obtained information from 10 HAMP servicers of various sizes that collectively represented 71 percent of the Troubled Asset Relief Program (TARP) funds allocated to participating servicers, visiting 6 of them. The six servicers we visited were: Aurora Loan Services, LLC; Bank of America, NA; Carrington Mortgage Services, LLC; GMAC Mortgage, Inc.; Ocwen Financial Corporation, Inc.; and Wells Fargo Bank, NA. The four additional servicers we spoke with and obtained data from were: CitiMortgage, Inc.; J.P. Morgan Chase Bank, NA; Saxon Mortgage Services, Inc.; and Select Portfolio Servicing. For each of these 10 servicers, we reviewed their HAMP policies, procedures, and quality assurance reports; and interviewed management and quality assurance staff. We also requested and reviewed data about these servicers’ solicitations of borrowers for HAMP between when they began participating in the program and December 31, 2009. We determined that these data were reliable for the purposes of our report. In addition, for the servicers we visited, we observed a sample of HAMP-related phone calls between borrowers and their servicers. We also reviewed HAMP program documentation issued by the Department of the Treasury (Treasury), including the supplemental directives related to first-lien modifications and servicer communications with borrowers, press releases detailing aspects and goals of the program, and draft operational metrics. We obtained and analyzed information from Treasury on servicers’ HAMP loan modification activity. Our work focused on non-GSE HAMP activity using TARP funds, but the information obtained from Treasury did not always break out GSE and non-GSE activity. 
We also spoke with officials at Treasury and its financial agents—Fannie Mae and Making Home Affordable-Compliance—to understand their rationale for program changes, what they were doing to ensure compliance with HAMP guidelines, and their processes for resolving HAMP complaints. In addition, we reviewed data on the content and resolution of these complaints. To understand the characteristics of borrowers in the program, we analyzed data from IR/2, the HAMP database managed by Fannie Mae to track the status of HAMP modifications, and we determined that these data were reliable for the purposes used in our report. To learn more about the process for and resolution of HAMP-related complaints, we spoke to the administrators of the HOPE Hotline and representatives of NeighborWorks, a national nonprofit organization created by Congress to provide foreclosure prevention and other community revitalization assistance to the more than 230 community-based organizations in its network. We also met with a trade association that represents both investors and servicers, and an organization representing a national coalition of community investment organizations. To examine actions Treasury has taken to address the challenges of (1) converting trial modifications to permanent modifications, (2) addressing potential foreclosures among borrowers with negative equity, (3) limiting the likelihood of redefault among borrowers with permanent modifications, and (4) ensuring program stability and effective program management, we reviewed the program announcements of current and upcoming HAMP-funded homeowner assistance programs to determine the extent to which they address these challenges. We also spoke with Treasury officials to understand the goals of these programs, and the steps Treasury has taken to ensure program stability and adequate program management in light of these programs. In addition, we requested and reviewed data from Treasury and servicers relevant to each challenge. Specifically, we requested information from the 10 HAMP servicers described above on the number of borrowers who had been in trial modifications for at least 3 months, as of December 31, 2009, and of these, the number that had converted to permanent modifications. We also reviewed Treasury reports on conversion rates and documentation related to Treasury’s conversion campaign. To understand the extent to which borrowers may be facing negative equity, we reviewed data from CoreLogic for the first quarter of 2010. Finally, we reviewed the Government Performance and Results Act and the Standards for Internal Control in the Federal Government to determine the key elements needed to ensure program stability and adequate program management. We coordinated our work with other oversight entities that TARP created—the Congressional Oversight Panel, the Office of the Special Inspector General for TARP, and the Financial Stability Oversight Board. According to Treasury, it considered options for monitoring what proportion of borrowers is obtaining counseling, but determined that it would be too burdensome to implement. Treasury does not plan to assess the effectiveness of counseling in limiting redefaults because it believes that the benefits of counseling on the performance of loan modifications are well documented and the assessment of the benefits to HAMP borrowers is not needed. 
Reevaluate the basis and design of the Home Price Decline Protection (HPDP) program to ensure that HAMP funds are being used efficiently to maximize the number of borrowers who are helped under HAMP and to maximize overall benefits of utilizing taxpayer dollars. On July 31, 2009, Treasury announced detailed guidance on HPDP that included changes to the program’s design that, according to Treasury, improve the targeting of incentive payments to mortgages that are at greater risk because of home price declines. Treasury does not plan to limit HPDP incentives to modifications that would otherwise not be made without the incentives, due to concerns about potential manipulation of inputs by servicers to maximize incentive payments and the additional burden of re-running the net present value model for many loans. Institute a system to routinely review and update key assumptions and projections about the housing market and the behavior of mortgage-holders, borrowers, and servicers that underlie Treasury’s projection of the number of borrowers whose loans are likely to be modified under HAMP and revise the projection as necessary in order to assess the program’s effectiveness and structure. According to Treasury, on a quarterly basis it is updating its projections on the number of TARP-funded first-lien modifications expected when it revises the amount of TARP funds allocated to each servicer under HAMP. Treasury is gathering data on servicer performance in HAMP and housing market conditions in order to improve and build upon the assumptions underlying its projections about mortgage market behavior. Place a high priority on fully staffing vacant positions in the Homeownership Preservation Office (HPO)—including filling the position of Chief Homeownership Preservation Officer with a permanent placement—and evaluate HPO’s staffing levels and competencies to determine whether they are sufficient and appropriate to effectively fulfill its HAMP governance responsibilities. A permanent Chief Homeownership Preservation Officer was hired on November 9, 2009. According to Treasury, staffing levels for HPO have been revised from 36 full-time equivalent positions to 29. According to Treasury, as of April 2010, HPO had filled 27 of the 29 full-time positions. Expeditiously finalize a comprehensive system of internal control over HAMP, including policies, procedures, and guidance for program activities, to ensure that the interests of both the government and taxpayer are protected and that the program objectives and requirements are being met once loan modifications and incentive payments begin. According to Treasury, it will work with Fannie Mae and Freddie Mac to build and refine the internal controls within these financial agents’ operations as new programs are implemented. Treasury expects to finalize a list of remedies for servicers not in compliance with HAMP guidelines by June 2010. Expeditiously develop a means of systematically assessing servicers’ capacity to meet program requirements during program admission so that Treasury can understand and address any risks associated with individual servicers’ abilities to fulfill program requirements, including those related to data reporting and collection. According to Treasury, a servicer self-evaluation form, which provides information on the servicer’s capacity to implement HAMP, has been implemented beginning with servicers who started signing Servicer Participation Agreements in December 2009. 
In addition to the contacts named above, Lynda Downing, Harry Medina, John Karikari (Lead Assistant Directors); Tania Calhoun; Emily Chalmers; William Chatlos; Heather Latta; Rachel DeMarcus; Karine McClosky; Marc Molino; Mary Osorno; Jared Sippel; Winnie Tsen; and Jim Vitarello made important contributions to this report. | Congress created the Troubled Asset Relief Program (TARP) to, among other things, preserve homeownership and protect home values. In March 2009, the U.S. Department of the Treasury (Treasury) announced the Home Affordable Modification Program (HAMP) as its cornerstone effort to achieve these goals. This report examines (1) the extent to which HAMP servicers have treated borrowers consistently and (2) the actions that Treasury has taken to address the challenges of trial modification conversions, negative equity, redefaults, and program stability. GAO obtained information from 10 servicers that account for 71 percent of HAMP funds and spoke with Treasury, Fannie Mae, and Freddie Mac officials. While one of Treasury's stated goals for HAMP was to standardize the loan modification process across the servicing industry, GAO found inconsistencies in how servicers were treating borrowers under HAMP that could lead to inequitable treatment of similarly situated borrowers. First, because Treasury did not issue guidelines for soliciting borrowers for HAMP until a year after announcing the program, servicers notified borrowers about HAMP anywhere from 31 days to more than 60 days after a delinquency. Many borrowers also complained that they did not receive timely responses to their HAMP applications and had difficulty obtaining information about the program. Treasury has recently issued guidelines on borrower communications, and plans to monitor compliance with the guidelines. Second, Treasury has emphasized the importance of reaching borrowers before they are delinquent but has not issued guidelines for determining when borrowers are in imminent danger of default. As a result, the 10 servicers that GAO contacted reported 7 different sets of criteria for determining imminent default. Third, while Treasury required servicers to have internal quality assurance procedures to ensure compliance with HAMP requirements, Treasury did not specify how loan files should be sampled for review or what the reviews should contain. As a result, some servicers did not review trial modifications or HAMP denials as part of their quality assurance procedures. Fourth, Treasury has not specified which HAMP complaints should be tracked, and several servicers track only certain types of complaints. Fifth, Treasury has not clearly informed borrowers that the HOPE Hotline can be used to raise concerns about servicers' handling of HAMP loan modifications and to challenge potentially incorrect denials, likely limiting the number of borrowers who have used the hotline for these purposes. Finally, Treasury does not have clear consequences for servicers that do not comply with program requirements, potentially leading to inconsistencies in how instances of noncompliance are handled. |
Since the 1940s, one mission of DOE and its predecessor agencies has been processing uranium as a source of nuclear material for defense and commercial purposes. A key step in this process is the enrichment of natural uranium, which increases its concentration of uranium-235, the isotope of uranium that undergoes fission to release enormous amounts of energy. Before it can be enriched, natural uranium must be chemically converted into uranium hexafluoride. The enrichment process results in two principal products: (1) enriched uranium hexafluoride, which can be further processed for specific uses, such as nuclear weapons or fuel for nuclear power plants; and (2) leftover "tails" of uranium hexafluoride. These tails are also known as depleted uranium because the material is depleted in uranium-235 compared with natural uranium.

Since 1993, uranium enrichment activities at DOE-owned uranium enrichment plants have been performed by the United States Enrichment Corporation (USEC), formerly a wholly owned government corporation that was privatized in 1998. However, DOE still maintains over 700,000 metric tons of depleted uranium tails in about 63,000 metal cylinders in storage yards at its Paducah, Kentucky, and Portsmouth, Ohio, enrichment plants. It must safely maintain these cylinders because the tails are dangerous to human health and the environment. Uranium hexafluoride is radioactive and forms extremely corrosive and potentially lethal compounds if it contacts water. DOE also maintains large inventories of natural and enriched uranium that are surplus to the department's needs.

Tails have historically been considered a waste product because considerable enrichment processing is required to extract the remaining useful quantities of uranium-235. In the past, low uranium prices meant that these enrichment services would cost more than the relatively small amount of uranium-235 extracted would be worth. However, an approximately tenfold increase in uranium prices—from about $21 per kilogram of uranium in the form of uranium hexafluoride in November 2000 to about $200 per kilogram in February 2008—has potentially made it profitable to re-enrich some tails to extract additional uranium-235. Even with the current higher uranium prices, however, only DOE's tails with higher concentrations of uranium-235 (at least 0.3 percent) could be profitably re-enriched, according to industry officials. About one-third of DOE's tails contain uranium-235 concentrations at that level or higher.

DOE's potential options for its tails include selling the tails "as is," re-enriching them, or storing them indefinitely. However, DOE's legal authority to sell the tails in their current form is doubtful. Although we found that DOE generally has authority to carry out the re-enrichment and storage options, the department has not finished a comprehensive assessment of these options, and it is still evaluating the details of how such options might be implemented. While selling the tails in their current unprocessed form is a potential option, we believe that DOE's authority to conduct such sales is doubtful because of specific statutory language in 1996 legislation governing DOE's disposition of its uranium. Appendix I contains our analysis of DOE's authority to sell or transfer its depleted uranium in its current form, as well as to re-enrich and sell the tails, and to store the tails indefinitely.
As our analysis explains, in 1996, Congress enacted section 3112 of the USEC Privatization Act, which limits DOE’s general authority, under the Atomic Energy Act or otherwise, to sell or transfer uranium. In particular, section 3112 explicitly bars DOE from selling or transferring “any uranium”— including but not specifically limited to certain forms of natural and enriched uranium—”except as consistent with this section.” Section 3112 then specifies conditions for DOE’s sale or transfer of natural and enriched uranium of various types, including conditions in section 3112(d) for sale of natural and low-enriched uranium from DOE’s inventory. To ensure the domestic uranium market is not flooded with large amounts of government material, in section 3112(d), Congress required DOE to determine that any such inventory sales will not have a material adverse impact on the domestic uranium industry. Congress also required in section 3112(d) that DOE determine it will receive adequate payment—at least “fair market value”—if it sells this uranium and that DOE obtain a determination from the President that such materials are not necessary for national security. Nowhere, however, does section 3112(d) or any other provision of section 3112 explicitly provide conditions for DOE to transfer or sell depleted uranium. Because section 3112(a) states that DOE may not “transfer or sell any uranium . . . except as consistent with this section,” and because no other part of section 3112 sets out the conditions for DOE to transfer or sell depleted uranium, we believe that under rules of statutory construction, DOE likely lacks authority to sell the tails. While courts have not addressed this question before and thus the outcome is not free from doubt, this interpretation applies the plain language of the statute. It also respects the policy considerations and choices Congress made in 1996 when presented with the disposition of DOE’s valuable uranium in a crowded and price-sensitive market. Finally, this reading of DOE’s authority is consistent with how courts address changes in circumstances after a law is passed: Statutes written in comprehensive terms apply to unanticipated circumstances if the new circumstances reasonably fall within the scope of the plain language. Thus, under the current terms of section 3112, DOE’s sale of its tails would be covered by the statute’s general prohibition on sale of uranium, even if tails were not part of the universe Congress explicitly had in mind when it enacted the statute in 1996. Should Congress grant DOE the needed legal authority by amending the USEC Privatization Act or through other legislation, firms such as nuclear power utilities and enrichment companies would be interested in purchasing at least that portion of the tails with higher concentrations of extractable uranium-235 as a valuable source for nuclear fuel. Officials from 8 of 10 U.S. nuclear utilities indicated tentative interest in such a purchase. Individual utilities were often interested in limited quantities of DOE’s tails because they were concerned about depending upon a single source to fulfill all of their requirements. Multiple utilities acting together as a consortium could mitigate these concerns and purchase larger quantities of tails. 
Some enrichment firms also expressed interest in purchasing portions of the inventory, but their anticipated excess enrichment capacity to process the tails into a marketable form affected both the quantity of tails they would purchase and the timing of any purchase. Potential buyers suggested various commercial arrangements, including purchasing the tails through a competitive sale, such as an auction, or through negotiations with DOE. However, industry officials told us that buyers would discount, perhaps steeply, their offered prices to make buying tails attractive compared with purchasing natural uranium on the open market. That is, DOE might receive a discounted price for the tails to compensate buyers for additional risks, such as rising enrichment costs or buyers' inability to obtain sufficient enrichment services. In addition, potential buyers noted that any purchase would depend upon confirming certain information, such as that the tails were free of contaminants that could cause nuclear fuel production problems and that the cylinders containing the tails—some of which are 50 years old and may not meet transportation standards—could be safely shipped.

Although DOE's legal authority to sell the tails in their current form is doubtful, DOE has the general legal option, as discussed in appendix I, of re-enriching the tails and then selling the resulting natural or enriched uranium. DOE would have to contract for enrichment services commercially because the department no longer operates enrichment facilities itself. Furthermore, DOE would have to find a company with excess enrichment capacity beyond its current operations, which may be particularly difficult if large amounts of enrichment processing were required. Within the United States today, for example, the only operating enrichment facility is DOE's USEC-run Paducah, Kentucky, plant, and almost all of its enrichment capacity is already being used through 2012, when the facility may stop operating. USEC and at least two other companies are also constructing or planning to construct new enrichment facilities in the United States that potentially could be used to re-enrich DOE's tails. Although DOE would have to pay for re-enrichment, it might obtain more value from selling the re-enriched uranium instead of the tails if its re-enrichment costs were less than the discount it would have to offer to sell the tails as is. Enrichment firms with whom we spoke told us they would be interested in re-enriching the tails for a fee. The quantity of tails they would re-enrich annually would depend on the available excess enrichment capacity at their facilities. Additionally, as noted above, prior to selling any natural or enriched uranium that results from re-enriching tails, DOE would be required under section 3112(d) of the USEC Privatization Act to determine that sale of the material would not have a material adverse impact on the domestic uranium industry and that the price paid to DOE would provide at least fair market value. Section 3112(d) also would require DOE to obtain the President's determination that the material is not needed for national security.

DOE also has the general legal option, as discussed in appendix I, to store the tails indefinitely. In the late 1990s, when relatively low uranium prices meant that tails were viewed as waste, DOE developed a plan for the safe, long-term storage of the material.
DOE is constructing two new facilities to chemically convert its tails into a more stable and safer uranium compound that is suitable for long-term storage. DOE estimates that after the conversion facilities begin operating in 2009, it will take approximately 25 years to convert its existing tails inventory. Storing the tails indefinitely could prevent DOE from taking advantage of the large increase in uranium prices to obtain potentially large amounts of revenue from material that was once viewed as waste. DOE would also continue to incur costs associated with storing and maintaining the cylinders containing the tails. These costs amount to about $4 million annually. Sale (if authorized) or re-enrichment of some of DOE's tails could also reduce the amount of tails that would need to be converted and, thereby, save DOE some conversion costs. Moreover, once the tails were converted into a more stable form of uranium oxide, DOE's costs to re-enrich the tails would be higher if it later decided to pursue this approach. This is because of the cost of converting the uranium oxide back to uranium hexafluoride, a step that would be required for re-enrichment. However, according to DOE officials, after the conversion plants begin to operate, the plants will first convert the lower-concentration tails because they most likely will not be economically worthwhile to re-enrich. This would give DOE additional time to sell or re-enrich the more valuable higher-concentration tails.

DOE has been developing a plan since 2005 to sell excess uranium from across its inventories of depleted, natural, and enriched uranium to generate revenues for the U.S. Treasury. In March 2008, DOE issued a policy statement that established a general framework for how DOE plans to manage its uranium inventories. One feature of this policy statement is the establishment of an annual cap on total uranium sales from all of DOE's inventories. The cap is designed to minimize a material adverse impact on domestic uranium producing companies that could result from DOE depressing uranium prices by selling large amounts of uranium. Thus, under this policy, the maximum amount of tails that DOE would sell annually will depend on the amount of planned sales from its other uranium inventories. In addition, because most uranium to be used as fuel for U.S. nuclear power plants comes from foreign sources, DOE may also choose to retain, rather than sell, some of its uranium as a reserve stockpile to be used in case of a significant disruption in world supplies.

However, the March 2008 policy statement is not a comprehensive assessment of the sales, re-enrichment, or storage options for DOE's tails. The policy statement lacks specific information on the types and quantities of uranium that the department has in its inventory. Furthermore, the policy statement does not discuss whether it would be more advantageous to sell the higher-concentration tails as is (if authorized) or to re-enrich them. It also does not contain details on when any sales or re-enrichment may occur or DOE's legal authority to carry out those options under section 3112 of the USEC Privatization Act. It also lacks information on the uranium market conditions that would influence any DOE decision to potentially sell or re-enrich tails. Further, it does not analyze the impact of such a decision on the domestic uranium industry, and it does not provide guidance on how a decision should be altered in the event that market conditions change.
Although the policy statement states that DOE will identify categories of tails that have the greatest potential market value and that the department will conduct cost-benefit analyses to determine what circumstances would justify re-enriching and/or selling potentially valuable tails, it does not have specific milestones for doing so. Instead, the policy statement states that this effort will occur "in the near future."

At current uranium prices, we estimate DOE's tails to have a net value of $7.6 billion; however, we would like to emphasize that this estimate is very sensitive to changing uranium prices, which recently have been extremely volatile, as well as to the availability of enrichment capacity. This estimate assumes the February 2008 published uranium price of $200 per kilogram of natural uranium in the form of uranium hexafluoride and $145 per separative work unit—the standard measure of uranium enrichment services. Our model also assumes the capacity to re-enrich the higher-concentration tails and subtracts the costs of the needed enrichment services. It also takes into account the cost savings DOE would realize from reductions in the amount of tails that needed conversion to a more stable form for storage, as well as the costs to convert any residual tails.

As noted above, this estimate is very sensitive to price variations for uranium as well as to the availability of enrichment services. Uranium prices are very volatile, and a sharp rise or fall in prices could greatly affect the value of the tails. For example, since 2000, uranium prices have varied from a low of about $21 per kilogram in November 2000 to a high of about $360 per kilogram in mid-2007, before falling to their recent level of about $200 per kilogram. Substituting the high and low end of historical uranium prices over the past 8 years for current prices results in a range of values for the tails from being nearly worthless, assuming $21 per kilogram of uranium, to over $20 billion, assuming $360 per kilogram of uranium. There is no consensus among industry players whether uranium prices will fall or rise in the future or on the magnitude of any future price changes. Furthermore, the introduction of additional uranium onto the market by the sale of large quantities of DOE depleted, natural, or enriched uranium—assuming DOE obtains authority to sell depleted uranium—could also lead to lower uranium prices. Therefore, according to DOE officials, DOE's uranium sales strategy, when completed, will likely call for limits on the quantity of uranium the department would sell annually to help achieve DOE's goal of minimizing the negative effects on domestic uranium producers. However, this would lengthen the time necessary to market DOE's uranium, increasing the time the department is exposed to uranium price volatility. These factors all result in great uncertainty in the valuation of DOE's tails.

In addition, the enrichment capacity available for re-enriching tails may be limited, and the costs of these enrichment services are uncertain. For example, USEC currently has only a small amount of excess enrichment capacity at its Paducah plant. If it used the spare capacity, USEC would be able to re-enrich only about 14 percent of DOE's most economically attractive tails between now and the possible closing of the plant in 2012.
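The per-kilogram arithmetic behind this kind of valuation can be illustrated with a simplified sketch. The code below is not GAO's model; it applies the standard separative work unit (SWU) formulas to a single kilogram of recovered natural-equivalent uranium, using the $145-per-SWU price cited above and hypothetical assay assumptions (tails feed at 0.35 percent uranium-235, product at the natural-uranium level of 0.711 percent, and secondary tails at 0.25 percent). The assay values and the per-kilogram framing are assumptions for illustration only; GAO's estimate also reflects conversion cost savings and other factors.

```python
import math

def value_function(x):
    """Standard enrichment value function V(x) = (2x - 1) * ln(x / (1 - x))."""
    return (2 * x - 1) * math.log(x / (1 - x))

def reenrichment_margin(price_per_kg_u, swu_price=145.0,
                        feed_assay=0.0035,      # hypothetical DOE tails assay (0.35% U-235)
                        product_assay=0.00711,  # natural-uranium equivalent
                        waste_assay=0.0025):    # hypothetical secondary tails assay
    """Return (kg of tails consumed, SWU required, net margin in dollars)
    per kilogram of natural-equivalent uranium recovered from tails."""
    # Material balance: feed and secondary waste per kilogram of product.
    feed = (product_assay - waste_assay) / (feed_assay - waste_assay)
    waste = feed - 1.0
    # Separative work per kilogram of product.
    swu = (value_function(product_assay)
           + waste * value_function(waste_assay)
           - feed * value_function(feed_assay))
    net_margin = price_per_kg_u - swu * swu_price
    return feed, swu, net_margin

# Sensitivity to the uranium prices cited in the testimony:
# about $21/kg (Nov. 2000), $200/kg (Feb. 2008), and $360/kg (mid-2007).
for price in (21, 200, 360):
    feed, swu, margin = reenrichment_margin(price)
    print(f"At ${price}/kgU: {feed:.1f} kg of tails and {swu:.2f} SWU per kg "
          f"of natural-equivalent uranium; net margin {margin:+,.0f} dollars/kg")
```

Under these assumptions, the margin is negative at the November 2000 price and strongly positive at the mid-2007 peak, which is consistent with the range described above, from nearly worthless to over $20 billion in aggregate.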
Although USEC officials told us the company was willing to explore options to extend the Paducah plant’s operations beyond 2012 and dedicate Paducah’s capacity solely to re-enriching DOE’s tails after this point, negotiations between the company and DOE would be needed to determine the enrichment costs that would be paid by DOE. The Paducah plant uses a technology developed in the 1940s that results in relatively high production costs. Even if the Paducah plant were to be dedicated entirely to re-enriching DOE tails after 2012, over a decade would be required to complete the work because of limitations on the annual volume of tails that can be physically processed by the plant. This lengthy period of time would expose DOE to risks of uranium price fluctuations and increasing maintenance costs. USEC and other companies are constructing or planning to construct enrichment plants in the United States that utilize newer, lower-cost technology. However, these facilities are not expected to be completed until various times over the next decade. It is unclear exactly when these facilities will be fully operating, the extent to which they will have excess enrichment capacity to re-enrich DOE’s tails, and what enrichment costs DOE could expect to pay. For example, the size of the fee DOE may have to pay an enrichment company to re-enrich its tails would be subject to negotiation between DOE and the company. Recent dramatic increases in uranium prices present the U.S. government with an opportunity to gain some benefit from material that was once considered a liability. Under current law, however, one potential avenue for dealing with DOE’s depleted uranium tails—sale of the material in its current form—is likely closed to the department. Obtaining legal authority from Congress to sell depleted uranium under USEC Privatization Act section 3112 or other legislation would provide the department with an additional option in determining the best course of action to obtain the maximum financial benefit from its tails. We therefore recommended that Congress consider clarifying DOE’s statutory authority to manage depleted uranium, under the USEC Privatization Act or other legislation, including explicit direction about whether and how DOE may sell or transfer the tails. Depending on the terms of such legislation, this could reap significant benefits for the government because of the potentially large amount of revenue that could be obtained. In any event, enacting explicit provisions regarding DOE’s disposition of depleted uranium would provide stakeholders with welcome legal clarity and help avoid litigation that could interrupt DOE’s efforts to obtain maximum value for the tails. Unfortunately, DOE has not completed a comprehensive assessment of its options with sufficient speed to take advantage of current market conditions. Despite working since 2005 to develop a plan for its uranium inventories, DOE’s March 2008 policy statement on the management of its excess uranium inventories lacks detailed information on the types and amounts of uranium that the department plans to potentially sell, further enrich, or store. Although pledging to conduct appropriate cost-benefit analyses as well as analyses on the impact of any proposal on the domestic uranium industry, the policy statement lacks specific milestones for doing so. 
Because of the potentially significant amounts of revenue that could be obtained from DOE's uranium inventories and the extreme volatility of the uranium market, we recommended that the department complete, as soon as possible, a comprehensive uranium management assessment that details DOE's options, its authority to implement these options, and the impact of these options on the domestic uranium industry. Without such an assessment that contains detailed information on each of its options, DOE will be unable to quickly react to rapidly changing market conditions to achieve the greatest possible value from its uranium inventories.

Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions that you or other Members of the Subcommittee may have at this time.

If you have any questions or need additional information, please contact Robert A. Robinson at (202) 512-3841 or robinsonr@gao.gov. Major contributors to this statement were Ryan T. Coles (Assistant Director), Ellen Chu, Terry Hanford, Karen Keegan, Omari Norman, Susan Sawtelle, and Franklyn Yao.

As part of the Government Accountability Office's review of the Department of Energy's (DOE) potential options for managing its inventory of excess depleted uranium (also known as "tails"), we examined DOE's legal authority to implement three basic options: (1) re-enriching the tails and then selling or transferring them, (2) storing the unenriched tails indefinitely, and (3) selling or transferring the inventory of tails "as is." We conclude that DOE has general authority under the Atomic Energy Act to carry out the first and second options—to re-enrich and then sell or transfer the tails, as well as to store them indefinitely. However, we believe that because of constraints on DOE's Atomic Energy Act authority in the USEC Privatization Act, the department's authority to carry out the third option—to sell or transfer the tails in their current form—is doubtful. We believe that under rules of statutory construction, DOE likely lacks such authority under current law. Because this is an issue of first impression, and because the question could significantly affect the public interest and DOE's development of a comprehensive strategy for its excess-uranium inventory, we recommend that Congress consider enacting legislation clarifying the conditions (if any) under which DOE may sell or transfer its depleted uranium. Depending on the terms of such legislation, this could reap benefits for the government because of the potentially significant revenue that could be obtained. In any event, such clarification would provide stakeholders with welcome legal clarity, potentially enhance the attractiveness to interested purchasers, and help avoid litigation that could interrupt DOE's efforts to obtain maximum value for the public.

A. DOE authority to re-enrich and sell or transfer the tails

DOE has general authority under the Atomic Energy Act of 1954, as amended, 42 U.S.C. § 2011 et seq. (AEA), to re-enrich its depleted uranium inventory to natural or low-enriched levels and then to sell or transfer the re-enriched product. First, AEA section 41, 42 U.S.C. § 2061, authorizes DOE to re-enrich depleted uranium to low-enriched levels, and AEA sections 63 and 66, 42 U.S.C. §§ 2093, 2096—which authorize DOE's acquisition and distribution of source material—implicitly authorize DOE to re-enrich depleted uranium to natural levels. Second, AEA sections 53, 63, and 161m, 42 U.S.C. §§ 2073, 2093, 2201(m), authorize DOE to transfer this re-enriched uranium, subject to certain conditions, to appropriately licensed entities such as nuclear power reactor operators.
This general AEA authority is limited by any applicable restrictions in the USEC Privatization Act, enacted in 1996. Section 3112(a) of the act, 42 U.S.C. § 2297h-10(a), prohibits DOE from transferring or selling "any uranium (including natural uranium concentrates, natural uranium hexafluoride, or enriched uranium in any form) . . . except as consistent with this section." The remaining provisions of section 3112 then specify the conditions under which DOE may sell or transfer various types of natural and enriched uranium. Thus, DOE is authorized to sell or transfer re-enriched depleted uranium provided such transactions satisfy the remaining section 3112 conditions.

B. DOE authority to store the unenriched tails indefinitely

DOE has general authority under the AEA to store its unenriched depleted uranium indefinitely, as well as to convert the tails to a more stable form for storage. We believe this authority is implicit under AEA sections 63 and 66, which, as discussed above, authorize DOE to acquire and distribute source material. This authority is also implicit under AEA section 41, which authorizes DOE to enrich uranium, a process that inevitably generates depleted uranium. In addition, to the extent the department's depleted uranium is "hazardous waste," AEA section 91a(3), 42 U.S.C. § 2121(a)(3), explicitly authorizes DOE to store, process, transport, and dispose of "hazardous waste (including radioactive waste) resulting from nuclear materials production, weapons production and surveillance programs, and naval nuclear propulsion programs."

Again, this AEA authority is limited by any applicable restrictions in the USEC Privatization Act. Section 3112 of that act does not apply to, and thus does not restrict, storage of DOE's uranium. Section 3113, 42 U.S.C. § 2297h-11, likewise does not apply to or restrict DOE's storage of its own depleted uranium, but it is relevant in that it reinforces DOE's authority to store this type of uranium under the AEA. Section 3113(a) requires DOE to accept depleted uranium from other entities for storage and disposal in the event the depleted uranium is determined to be "low-level radioactive waste." If the waste generator is a Nuclear Regulatory Commission (NRC) licensee, DOE must take title and possession of the depleted uranium "at an existing DUF6 storage facility." Implicit in these provisions is that DOE may store and dispose of its own depleted uranium waste as well, under its AEA or other authority.

C. DOE authority to sell or transfer the tails in their current form

DOE has general authority under the AEA to sell or transfer depleted uranium in its current form. As noted, sections 63 and 161m authorize DOE to distribute or sell "source material" to appropriately licensed entities, provided certain conditions are met, and depleted uranium is "source material." AEA section 11z, 42 U.S.C. § 2014(z). Again, this AEA authority is limited by any applicable restrictions in the USEC Privatization Act. While this is an issue of first impression, we believe DOE's authority to sell or transfer depleted uranium in its current form is doubtful. We believe courts applying rules of statutory construction would likely find DOE lacks such authority under current law. Section 3112(a) provides that DOE "shall not . . .
transfer or sell any uranium (including natural uranium concentrates, natural uranium hexafluoride, or enriched uranium in any form) to any person except as consistent with this section.” (Emphasis added.) The remainder of section 3112 then prescribes the conditions under which DOE may sell or transfer particular types of uranium, namely, so-called Russian-origin uranium (subsection (b)); natural and enriched uranium transferred to USEC (subsection (c)); natural and low-enriched uranium sold from DOE’s inventory (subsection (d)); and enriched uranium transferred to federal agencies, state and local agencies, nonprofit, charitable or educational institutions, and others (subsection (e)). No provision explicitly addresses depleted uranium. Read naturally and in accordance with its plain language, section 3112 prohibits DOE from selling or transferring its depleted uranium. The tails consist of uranium-235 and uranium-238, whether they are deemed a waste or a valuable commodity, and a DOE Office of Environmental Management official confirmed to us that operationally, the department treats depleted, natural, and enriched uranium all as “uranium.” Thus, depleted uranium would be covered by section 3112 as a type of “any uranium.” This plain meaning is reinforced by the fact that section 3112(a) lists nonexclusive examples of uranium—”any uranium (including natural uranium . . . or enriched uranium in any form)”—making clear that additional types of uranium are covered by section 3112. A 2005 DOE internal legal memorandum (2005 DOE Memorandum) reaches the same conclusion. Thus, because DOE may sell or transfer uranium only as consistent with the terms of sections 3112(b)-3112(e), and because none of those provisions specifies conditions under which depleted uranium may be sold, the plain words of the statute prohibit it. The statutory structure and legislative history support this conclusion. It is clear that when Congress passed the USEC Privatization Act in 1996, it was familiar with depleted uranium as a category of uranium requiring management. Because depleted uranium was only considered as a valueless waste at that time, Congress only explicitly referred to one management option in the statute: disposal. As noted, in section 3113, Congress required DOE to take responsibility for disposal of other entities’ depleted uranium, should it ever be determined to be a “low-level radioactive waste.” As NRC noted recently in making such a determination, however, when depleted uranium is treated as a “resource,” rather than a waste, section 3113 does not apply. See NRC, In re Louisiana Energy Services, L.P. (National Enrichment Facility), No. CLI-05-05 (Jan. 18, 2005), at 1, 3, 15, 17. In that event—where depleted uranium is a resource to be sold or transferred—section 3112, by its terms, would apply. The fact that Congress did not specify section 3112 conditions under which depleted uranium may be sold, as it did for DOE’s other valuable uranium, reflects only that depleted uranium was not deemed valuable in 1996. It does not reflect congressional intent that valuable depleted uranium is not subject to section 3112’s general prohibition against sales of “any uranium.” While this result may appear anomalous because depleted uranium is now considered a potentially highly valuable commodity and a potential source of revenue for the federal government, that is a matter for Congress to remedy, if it so chooses. 
A recently issued DOE policy on disposition of its excess uranium inventory recognizes this increase in value for depleted uranium. To take advantage of this development, department officials suggested to us that they would be authorized to sell the tails in their current form using DOE’s general AEA section 161m authority, without regard to the prohibitions in the USEC Privatization Act. They suggested such an approach might be reconciled as “consistent with” section 3112, as section 3112(a) requires, because none of the provisions in section 3112 specifies conditions of sale for depleted uranium. The 2005 DOE Memorandum makes a similar argument, pointing to the fact that the legislative history contains no explicit mention of restricting DOE’s existing AEA authority to sell depleted uranium. We disagree with this interpretation. DOE in effect reads a depleted uranium exception into the unqualified term “any uranium,” and rewrites section 3112 to say that only sale and transfer of uranium categories explicitly identified in that section are restricted. That is not what the statute says, and this reading would violate the principle that statutory exceptions are to be narrowly construed. See, e.g., Commissioner v. Clark, 489 U.S. 726, 738-39 (1989) (“Given that Congress has enacted a general rule . . ., we should not eviscerate that legislative judgment through an expansive reading of a somewhat ambiguous exception.”). Nor does the legislative history support this result. The fact that there was no mention of limiting DOE’s existing depleted uranium sales authority under the AEA is unremarkable, because in 1996, there was no valuable depleted uranium to sell. Finally, it would not be consistent with section 3112 to allow DOE to sell depleted uranium under the AEA. It would violate the statute’s prohibition against sales of “any uranium,” because there are no section 3112 exceptions under which its sale is permitted. It would also be incongruous to allow DOE to sell or transfer potentially billions of dollars’ worth of federal assets without the scrutiny Congress gave to disposition of DOE’s valuable uranium in enacting section 3112. Section 3112 represents Congress’ more specific and later-enacted intent regarding the types of factors to be considered in selling DOE’s uranium inventories, including price, protection of the domestic uranium industry, and safeguarding the national security, and therefore takes precedence. See, e.g., Smith v. Robinson, 468 U.S. 992 (1984) (more specific and recent statute takes precedence). In sum, we believe our reading of section 3112 carries out the plain words of the act and respects the policy considerations and choices Congress made in 1996 when presented with the disposition of DOE’s valuable uranium in a crowded and price-sensitive market. Our reading is also consistent with how courts interpret broad statutes when circumstances change: laws written in comprehensive terms apply to unanticipated circumstances if they reasonably fall within the scope of the plain language. See, e.g., Unexcelled Chemical Corp. v. United States, 345 U.S. 59 (1953). Thus, depleted uranium sales are covered by the prohibition in section 3112, even if depleted uranium was not part of the universe Congress explicitly had in mind when it enacted the statute in 1996. The same concerns that led Congress to legislate explicit conditions of sale for DOE’s other uranium inventories in 1996 may apply equally with regard to sale of its depleted uranium inventory today. 
Congress now has the opportunity to address the intervening increase in uranium values and balance the competing concerns associated with its sale. Because the question of DOE's authority to sell its depleted tails would be a statutory construction issue of first impression and thus is not free from doubt, and because the question is an issue of significant public interest and importance, we recommend that Congress consider enacting legislation setting forth the explicit conditions (if any) under which DOE may sell or transfer its depleted uranium. Depending on the terms of such legislation, this could reap significant benefits for the government because of the potentially significant revenue that could be obtained. In any event, enacting explicit provisions regarding DOE's sale or transfer of its depleted uranium would provide stakeholders with welcome legal clarity and help avoid litigation that could interrupt DOE's efforts to obtain maximum value for the public.

In summary, we conclude that DOE has general authority under the Atomic Energy Act to re-enrich and then sell or transfer the tails, provided the transaction meets the conditions of section 3112 of the USEC Privatization Act. DOE also has general AEA authority to store the tails indefinitely. However, we believe that because of constraints on DOE's AEA authority in the USEC Privatization Act, the department's authority to sell or transfer tails in their current form is doubtful and that under rules of statutory construction, DOE likely lacks such authority under current law. We recommend that Congress consider enacting legislation explicitly addressing the scope of DOE's authority to sell and transfer depleted uranium.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

| Since the 1940s, the Department of Energy (DOE) has been processing natural uranium into enriched uranium, which has a higher concentration of the isotope uranium-235 that can be used in nuclear weapons or reactors. This has resulted in over 700,000 metric tons of leftover depleted uranium, also known as "tails," that have varying residual concentrations of uranium-235. The tails are stored at DOE's uranium enrichment plants in Portsmouth, Ohio, and Paducah, Kentucky. Although the tails have historically been considered a waste product and an environmental liability, a recent, approximately tenfold increase in uranium prices may give DOE options to use some of the tails in ways that could provide revenue to the government. GAO's testimony is based on its March 31, 2008, report entitled Nuclear Material: DOE Has Several Potential Options for Dealing with Depleted Uranium Tails, Each of Which Could Benefit the Government (GAO-08-606R). The testimony focuses on (1) DOE's potential options for its tails and (2) the potential value of DOE's tails and factors that affect the value. It also contains an analysis of DOE's legal authority to carry out the potential options. In its report, GAO recommended that Congress consider clarifying DOE's statutory authority to manage depleted uranium. GAO also recommended that DOE complete a comprehensive uranium management assessment as soon as possible.
DOE's potential options for its tails include selling the tails "as is," re-enriching the tails, or storing them indefinitely. DOE's current legal authority to sell its depleted uranium inventory "as is" is doubtful, but DOE generally has authority to carry out the other options. The department has not finished a comprehensive assessment of these options and is still evaluating the details of how such options might be implemented. DOE's authority to sell the tails in their current unprocessed form is doubtful. Because of specific statutory language in 1996 legislation governing DOE's disposition of its uranium, we believe that DOE's authority to sell the tails in unprocessed form is doubtful and that, under rules of statutory construction, DOE likely lacks such authority. However, if Congress were to provide the department with the needed authority, firms such as nuclear power utilities and enrichment companies may be interested in purchasing these tails and re-enriching them as a source of nuclear fuel. DOE could contract to re-enrich the tails. Although DOE would have to pay for re-enrichment, it might obtain more value from selling the re-enriched uranium instead of the tails if its re-enrichment costs were less than the discount it would have to offer to sell the tails as is. DOE could store the tails indefinitely. While this option conforms to an existing DOE plan to convert tails into a more stable form for long-term storage, storing the tails indefinitely could prevent DOE from obtaining the potentially large revenue resulting from sales at currently high uranium prices. The potential value of DOE's depleted uranium tails is currently substantial, but changing market conditions could greatly affect the tails' value over time. Based on February 2008 uranium prices and enrichment costs and assuming sufficient re-enrichment capacity is available, GAO estimates the value of DOE's tails at $7.6 billion. However, this estimate is very sensitive to changing uranium prices, which recently have been extremely volatile, as well as to the availability of enrichment capacity.
Over the past decade, the federal government has expanded financial assistance to a wide array of public and private stakeholders for preparedness activities through various grant programs administered by DHS through its component agency, FEMA. Through these grant programs, DHS has sought to enhance the capacity of states, localities, and other entities, such as ports or transit agencies, to prevent, respond to, and recover from a natural or manmade disaster, including terrorist incidents. Four of the largest preparedness grant programs are the Port Security Grant Program, the State Homeland Security Program, the Transit Security Grant Program, and the Urban Areas Security Initiative.

The Port Security Grant Program provides federal assistance to strengthen the security of the nation's ports against risks associated with potential terrorist attacks by supporting increased portwide risk management, enhanced domain awareness, training and exercises, and expanded port recovery capabilities.

The State Homeland Security Program provides funding to support states' implementation of homeland security strategies to address the identified planning, organization, equipment, training, and exercise needs at the state and local levels to prevent, protect against, respond to, and recover from acts of terrorism and other catastrophic events.

The Transit Security Grant Program provides funds to owners and operators of transit systems (which include intracity bus, commuter bus, ferries, and all forms of passenger rail) to protect critical surface transportation infrastructure and the traveling public from acts of terrorism and to increase the resilience of transit infrastructure.

The Urban Areas Security Initiative provides federal assistance to address the unique needs of high-threat, high-density urban areas, and assists the areas in building an enhanced and sustainable capacity to prevent, protect against, respond to, and recover from acts of terrorism.

Since its creation in April 2007, FEMA's Grant Programs Directorate (GPD) has been responsible for managing DHS's preparedness grants. GPD consolidated the grant business operations, systems, training, policy, and oversight of all FEMA grants and the program management of preparedness grants into a single entity.

In February 2012, we identified multiple factors that contributed to the risk of FEMA potentially funding unnecessarily duplicative projects across four of the largest grant programs—the Port Security Grant Program, the State Homeland Security Program, the Transit Security Grant Program, and the Urban Areas Security Initiative. These factors include overlap among grant recipients, goals, and geographic locations, combined with differing levels of information that FEMA had available regarding grant projects and recipients. Specifically, we found that FEMA made award decisions with differing levels of information and lacked a process to coordinate application reviews. To better identify potential unnecessary duplication, we recommended that FEMA (1) take steps to ensure that it collects project information at the level of detail needed to better position the agency to identify any potential unnecessary duplication within and across the four grant programs, and (2) explore opportunities to enhance FEMA's internal coordination and administration of the programs. DHS agreed with the recommendations and identified planned actions to improve visibility and coordination across programs and projects.
We also suggested that Congress consider requiring DHS to report on the results of its efforts to identify and prevent duplication within and across the four grant programs, and consider these results when making future funding decisions for these programs. Since we issued our February 2012 report, FEMA officials have identified actions they believe will enhance management of the four grant programs we analyzed; however, FEMA still faces challenges to enhancing preparedness grant management. First, the fiscal year 2013 President’s Budget outlined a plan to consolidate most of FEMA’s preparedness grants programs, and FEMA officials expect this action would reduce or eliminate the potential for unnecessary duplication. The fiscal year 2013 President’s Budget proposed the establishment of the National Preparedness Grant Program (NPGP), a consolidation of 16 grant programs (including the 4 grants we analyzed in our February 2012 report) into a comprehensive single program. According to FEMA officials, the NPGP would eliminate redundancies and requirements placed on both the federal government and grantees resulting from the existing system of multiple individual, and often disconnected, grant programs. For example, FEMA officials said that the number of applications a state would need to submit and the federal government’s resources required to administer the applications would both decrease under the consolidated program. However, Members of Congress have expressed concern about the consolidation of the 16 grant programs and Congress has not yet approved the proposal. In October 2012, FEMA officials told us that Members of Congress had asked FEMA to refine the NPGP proposal to address concerns raised by stakeholders, such as how local officials will be involved in a state-administered grant program. As of March 2013, FEMA officials reported that the agency was drafting guidance for the execution of the NPGP based on stakeholder feedback and direction from Congress pending the fiscal year 2013 appropriations bill. If the NPGP is not authorized in fiscal year 2013, FEMA officials stated that the agency plans to resubmit the request for the fiscal year 2014 budgetary cycle. If approved, and depending on its final form and execution, the consolidated NPGP could help reduce redundancies and mitigate the potential for unnecessary duplication, and may address the recommendation in our February 2012 report to enhance FEMA’s internal coordination and administration of the programs. Second, in March 2013, FEMA officials reported that the agency intends to start collecting and analyzing project-level data from grantees in fiscal year 2014; however, FEMA has not yet finalized specific data requirements and has not fully established the vehicle to collect these data—a new data system called the Non-Disaster Grants Management System (ND Grants). As of March 2013, FEMA officials expect to develop system enhancements for ND Grants to collect and use project-level data by the end of fiscal year 2013. FEMA officials stated that FEMA has formed a working group to develop the functional requirements for collecting and using project-level data and plans to obtain input from stakeholders and consider the cost effectiveness of potential data requirements. In alignment with data requirement recommendations from a May 2011 FEMA report, the agency anticipates utilizing the new project- level data in the grant application process starting in fiscal year 2014. 
Collecting appropriate data and implementing ND Grants with project-level enhancements as planned, and as recommended in our February 2012 report, would better position FEMA to identify potentially unnecessary duplication within and across grant programs.

Third, in December 2012, FEMA officials stated that there are additional efforts under way to improve internal administration of different grant programs. For example, officials stated that a FEMA task force has been evaluating grants management processes and developing a series of recommendations to improve efficiencies, address gaps, and increase collaboration across regional and headquarters counterparts and financial and programmatic counterparts. These activities represent positive steps to improve overall grants management, but they do not include any mechanisms to identify potentially duplicative projects across grant programs administered by different FEMA entities.

According to DHS and FEMA strategic documents, national preparedness is the shared responsibility of the "whole community," which requires the contribution of a broad range of stakeholders, including federal, state, and local governments, to develop preparedness capabilities to effectively prevent, protect against, mitigate the effects of, respond to, and recover from a major disaster. Figure 1 provides an illustration of how federal, state, and local resources provide preparedness capabilities for different levels of government and at various levels of incident effect (i.e., the extent of damage caused by a natural or manmade disaster). The greater the level of incident effect, the more likely state and local resources are to be overwhelmed.

We have previously reported on and made recommendations related to DHS's and FEMA's efforts to develop a national assessment of preparedness, which would assist DHS and FEMA in effectively prioritizing investments to develop preparedness capabilities at all levels of government, including through its preparedness grant programs. Such an assessment would identify the critical elements at all levels of government necessary to effectively prevent, protect against, mitigate the effects of, respond to, and recover from a major disaster (i.e., preparedness capabilities), such as the ability to provide lifesaving medical treatment via emergency medical services following a major disaster; develop a way to measure those elements (i.e., capability performance measures); and assess the difference between the amount of preparedness needed at all levels of government (i.e., capability requirements) and the current level of preparedness (i.e., capability level) to identify gaps (i.e., capability gaps). The identification of capability gaps is necessary to effectively prioritize preparedness grant funding.

However, we have previously found that DHS and FEMA have faced challenges in developing and implementing such an assessment. Most recently, in March 2011, we reported that FEMA's efforts to develop and implement a comprehensive, measurable, national preparedness assessment were not yet complete. Accordingly, we recommended that FEMA complete a national preparedness assessment and that such an assessment should assess capability gaps at each level of government based on capability requirements to enable prioritization of grant funding. We also suggested that Congress consider limiting preparedness grant funding until FEMA completes a national preparedness assessment.
In April 2011, Congress passed the fiscal year 2011 appropriations act for DHS, which reduced funding for FEMA preparedness grants by $875 million from the amount requested in the President's fiscal year 2011 budget. The consolidated appropriations act for fiscal year 2012 appropriated $1.7 billion for FEMA preparedness grants, $1.28 billion less than requested. The House committee report accompanying the DHS appropriations bill for fiscal year 2012 stated that FEMA could not demonstrate how the use of the grants had enhanced disaster preparedness.

In March 2011, the White House issued Presidential Policy Directive 8 on National Preparedness (PPD-8), which called for the development of a national preparedness system that includes a comprehensive approach to assess national preparedness. According to PPD-8, the approach should use a consistent methodology to assess national preparedness capabilities—with clear, objective, and quantifiable performance measures. PPD-8 also called for the development of a national preparedness goal, as well as annual preparedness reports (both of which were previously required under the Post-Katrina Act).

To address PPD-8 provisions, FEMA issued the National Preparedness Goal in September 2011, which established a list of preparedness capabilities for each of five mission areas (prevention, protection, mitigation, response, and recovery) that are to serve as the basis for preparedness activities within FEMA, throughout the federal government, and at the state and local levels. In November 2011, FEMA issued the National Preparedness System, which described an approach and cycle to build, sustain, and deliver the preparedness capabilities described in the National Preparedness Goal. The system contains six components to support decision making, resource allocation, and progress measurement, including identifying and assessing risk and estimating capability requirements. According to the system, measuring progress toward achieving the National Preparedness Goal is intended to provide the means to decide how and where to allocate scarce resources and prioritize preparedness.

Finally, in March 2012, FEMA issued the first National Preparedness Report, designed to identify progress made toward building, sustaining, and delivering the preparedness capabilities described in the National Preparedness Goal. According to FEMA officials, the National Preparedness Report also identifies what they consider to be national-level capability gaps. While FEMA issued the first National Preparedness Report, the agency has not yet established clear, objective, and quantifiable capability requirements and performance measures that are needed to identify capability gaps in a national preparedness assessment, as recommended in our March 2011 report. As previously noted, such requirements and measures would help FEMA identify capability gaps at all levels of government, which would assist FEMA in targeting preparedness grant program funding to address the highest-priority capability gaps. According to the National Preparedness Report, FEMA collaborated with federal interagency partners to identify existing quantitative and qualitative performance and assessment data for each of the preparedness capabilities. In addition, FEMA integrated data from the 2011 State Preparedness Reports, which are statewide survey-based self-assessments of capability levels and requirements submitted by all 56 U.S. states and territories.
Finally, FEMA conducted research to identify independent evaluations, surveys, and other supporting data related to preparedness capabilities. However, limitations associated with some of the data used in the National Preparedness Report may reduce the report's usefulness in assessing national preparedness. First, in October 2010, we reported that data in the State Preparedness Reports—one of the key data sources for the National Preparedness Report—could be limited because FEMA relies on states to self-report such data, which makes it difficult to ensure data are consistent and accurate. Second, at the time the National Preparedness Report was issued, in March 2012, states were still in the process of updating their efforts to collect, analyze, and report preparedness progress according to the new preparedness capabilities issued along with the National Preparedness Goal in September 2011. As a result, the report states that assessment processes, methodologies, and data will need to evolve for future iterations of the report. Third, the report's final finding notes that while many programs exist to build and sustain preparedness capabilities across all mission areas, challenges remain in measuring progress over time. According to the report, in many cases, measures do not yet exist to gauge performance, either quantitatively or qualitatively. Therefore, while programs may exist that are designed to address a given capability gap, the nation has little way of knowing whether and to what extent those programs have been successful. Thus, as of March 2013, FEMA has not yet completed a national preparedness assessment, as we recommended in our March 2011 report, which could assist FEMA in prioritizing grant funding. However, FEMA officials stated that they have efforts under way to assess regional, state, and local capabilities to provide a framework for completing a national preparedness assessment. For example, in April 2012, FEMA issued guidance on developing Threat and Hazard Identification and Risk Assessments (THIRA), which were initially required to be completed by state and local governments receiving homeland security funding by December 31, 2012. Guidance issued for development of the THIRAs describes a process for assessing the various threats and hazards facing a community, the community's vulnerability, and the consequences associated with those threats and hazards. For example, using the THIRA process, a jurisdiction may identify tornadoes as a hazard and assess its vulnerabilities to and the consequences of a tornado striking the jurisdiction, as well as the capabilities necessary for an effective response. Using the THIRA results, a jurisdiction may then develop a strategy to allocate resources effectively to achieve self-determined capability requirements by closing capability gaps. According to FEMA officials in March 2013, the THIRAs are to be used by state, regional, and federal entities for future planning efforts. At the state level, FEMA guidance notes that state officials are to use the capability requirements they identified in their respective 2012 THIRAs in their future State Preparedness Reports. FEMA officials stated that they planned to use both the THIRAs and the State Preparedness Reports to identify states' (self-reported) capability gaps based on capability requirements established by the state. At the regional level, each of the 10 FEMA regions is to analyze the local and state THIRAs to develop regional THIRAs.
At the national level, the local, state, and regional THIRAs are collectively intended to provide FEMA with data that it can analyze to assist in the identification of national funding priorities for closing capability gaps. The outcome of the THIRA process is intended to be a set of national capability performance requirements and measures, which FEMA officials stated they intend to incorporate into future National Preparedness Reports. As of March 2013, FEMA officials are working to coordinate their review and analysis of the various THIRAs through a THIRA Analysis and Review Team. The team plans to conduct ongoing meetings to discuss common themes and findings from the THIRAs and intends to develop an initial proposed list of national preparedness grant funding priorities by summer 2013. Depending on how the THIRA process is implemented and incorporated into future National Preparedness Reports, such an approach could be a positive step toward addressing our March 2011 recommendation to FEMA to develop a national preparedness assessment of existing capability levels against capability requirements. Such a national preparedness assessment may help FEMA to (1) identify the potential costs for developing and maintaining required capabilities at each level of government, and (2) determine what capabilities federal agencies should be prepared to provide. While the recently completed THIRAs and 2012 National Preparedness Report are positive steps in the initial efforts to assess preparedness capabilities across the nation, capability requirements and performance measures for each level of government that are clear, objective, and quantifiable have not yet been developed. As a result, it is unclear what capability gaps currently exist, including at the federal level, and what level of resources will be needed to close such gaps through prioritized preparedness grant funding. We will continue to monitor FEMA's efforts to develop capability requirements and performance measures. Chairman Brooks, Ranking Member Payne, and Members of the subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. For further information about this statement, please contact David C. Maurer, Director, Homeland Security and Justice Issues, at (202) 512-9627 or maurerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. In addition to the contact named above, the following individuals also made major contributions to this testimony: Chris Keisling, Assistant Director; Tracey King; Dan Klabunde; Katherine Lee; David Lutter; David Lysy; Lara Miklozek; and Erin O'Brien. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

From fiscal years 2002 through 2012, the Congress appropriated about $39 billion to a variety of DHS preparedness grant programs to enhance the capabilities of state and local governments to prevent, protect against, respond to, and recover from terrorist attacks and other disasters.
DHS allocated more than $21.3 billion through four of the largest preparedness programs--the Port Security Grant Program, the State Homeland Security Program, the Transit Security Grant Program, and the Urban Areas Security Initiative. In February 2012, GAO identified factors that contribute to the risk of FEMA potentially funding unnecessarily duplicative projects across the four grant programs. In March 2011, GAO reported that FEMA has faced challenges in developing and implementing a national preparedness assessment, which inhibits its ability to effectively prioritize preparedness grant funding. This testimony updates GAO's prior work and describes DHS's and FEMA's progress over the past year in (1) managing preparedness grants and (2) measuring national preparedness by assessing capabilities. This statement is based on prior products GAO issued from March 2011 to February 2012 and selected updates in March 2013. To conduct the updates, GAO analyzed agency documents and interviewed FEMA officials. Officials in the Federal Emergency Management Agency (FEMA)--a component of the Department of Homeland Security (DHS)--have identified actions they believe will enhance management of the four preparedness programs GAO analyzed; however, FEMA still faces challenges. In February 2012, GAO found that FEMA lacked a process to coordinate application reviews and made award decisions with differing levels of information. To better identify potential unnecessary duplication, GAO recommended that FEMA collect project-level information and enhance internal coordination and administration of the programs. DHS concurred. The President's fiscal year 2013 budget proposed the establishment of the National Preparedness Grant Program (NPGP), a consolidation of 16 FEMA grant programs into a single program. However, Members of Congress raised concerns about the NPGP and have not approved the proposal. As a result, FEMA officials reported that the agency was drafting new guidance for the execution of the NPGP based on pending congressional direction on fiscal year 2013 appropriations. If approved, and depending on its final form and execution, the NPGP could help mitigate the potential for unnecessary duplication and address GAO's recommendation to improve internal coordination. In March 2013, FEMA officials reported that FEMA intends to start collecting and analyzing project-level data from grantees in fiscal year 2014 but has not yet finalized data requirements or fully implemented the data system to collect the information. Collecting appropriate data and implementing project-level enhancements as planned would address GAO's recommendation and better position FEMA to identify potentially unnecessary duplication. FEMA has made progress addressing GAO's March 2011 recommendation that it develop a national preparedness assessment with clear, objective, and quantifiable capability requirements and performance measures but continues to face challenges developing a national preparedness system that could assist FEMA in prioritizing preparedness grant funding. For example, in March 2012, FEMA issued the first National Preparedness Report, which describes progress made to build, sustain, and deliver capabilities. FEMA also has efforts underway to assess regional, state, and local preparedness capabilities.
In April 2012, FEMA issued guidance on developing Threat and Hazard Identification and Risk Assessments (THIRA) to self-assess regional, state, and local capabilities and required states and local areas receiving homeland security funds to complete a THIRA by December 2012. However, FEMA faces challenges that may reduce the usefulness of these efforts. For example, the National Preparedness Report notes that while many programs exist to build and sustain preparedness capabilities, challenges remain in measuring progress over time. According to the report, in many cases, measures do not yet exist to gauge performance, either quantitatively or qualitatively. Further, while FEMA officials stated that the THIRA process is intended to develop a set of national capability performance requirements and measures, such requirements and measures have not yet been developed. Until FEMA develops clear, objective, and quantifiable capability requirements and performance measures, it is unclear what capability gaps currently exist and what level of federal resources will be needed to close such gaps. GAO will continue to monitor FEMA's efforts to develop capability requirements and performance measures. GAO has made recommendations to DHS and FEMA in prior reports. DHS and FEMA concurred with these recommendations and have actions underway to address them.
As with other joint combatant commands, NORTHCOM's organization includes subordinate commands that report directly to NORTHCOM; component commands, which are military service commands that assist NORTHCOM operations; and other supporting commands and DOD agencies. Each of these has a significant role in planning for NORTHCOM's missions. NORTHCOM planning efforts are guided by DOD policies and procedures on joint planning that specify what should be included in the plans as well as what organizations are required to submit plans in order for the command to complete its planning process. NORTHCOM is the military command responsible for planning, organizing, and executing DOD's homeland defense mission within its area of responsibility—the continental United States (including Alaska) and territorial waters—and civil support missions within the United States (see fig. 1). Homeland defense is the protection of U.S. sovereignty, territory, domestic population, and critical defense infrastructure against external attacks and aggression. DOD is the lead federal agency for homeland defense operations, such as air defense. Other federal agencies would act in support of DOD in those circumstances. NORTHCOM's homeland defense mission incorporates air and space defense, land defense, and maritime defense against external threats. One example of how the homeland defense mission is conducted is Operation Noble Eagle, the ongoing effort to protect against an air attack, such as those that occurred on September 11, 2001. NORTHCOM consists of a combatant command headquarters, a series of smaller subordinate commands focused on particular missions or regions, and component commands of the military services, which support NORTHCOM's planning and operations and command the land, maritime, and air portions of a NORTHCOM joint operation. The NORTHCOM Commander also commands the North American Aerospace Defense Command (NORAD), a bi-national U.S. and Canadian organization charged with air and maritime warning and airspace control. Figure 2 shows NORTHCOM's structure. Civil support is DOD support to U.S. civil authorities—such as DHS or another federal agency—for domestic emergencies and for designated law enforcement and other activities. DOD is not a lead federal agency for such missions and thus operates in support of civil authorities only when directed to do so by the President or the Secretary of Defense. NORTHCOM would command only the federal military portion of such operations and would do so in direct support of another federal agency, such as FEMA. Response to disasters or other catastrophic events in the United States is guided by the National Response Framework, which involves a stepped series of responses, beginning with local authorities, then state authorities, and then outside assistance from other states. Only when these capabilities are exceeded would federal assistance become involved. It is at this point that DOD may be asked to provide assistance. NORTHCOM would command that DOD assistance. For civil support operations, there are three primary situations in which DOD takes part in a federal response to a domestic incident. Federal assistance, including assistance from DOD, can be provided (1) at the direction of the President, (2) at the request of another federal agency, such as DHS or FEMA, or (3) in response to a request from local authorities when time is of the essence.
Guidance for developing plans, such as NORTHCOM’s homeland defense and civil support plans, is provided by DOD’s joint operation planning process. This process establishes objectives, assesses threats, identifies capabilities needed to achieve the objectives in a given environment, and ensures that capabilities (and the military forces to deliver those capabilities) are allocated to ensure mission success. Joint operation planning and execution procedures also include assessing and monitoring the readiness of those units providing the capabilities for the missions they are assigned. Overall, the purpose of joint operation planning is to reduce the risks inherent in military operations. Joint operations plans themselves can take several forms, from the more detailed to the more general. Examples of more detailed operations plans include those prepared by several combatant commands for the kinds of military operations dictated by a specific foreign threat or scenario, such as the need to oppose a landward invasion of the territory of a U.S. ally by a hostile nation. Such operations plans (OPLAN) are meant to cover contingencies that are critical to U.S. national security and require detailed planning in order to reduce risk to potential operations. These plans are accompanied by detailed lists of military forces that would provide required capabilities in order to execute the plan. Other plans are prepared for less compelling but otherwise important national interest contingencies and for unspecific threats (e.g., disaster relief, humanitarian assistance, or peace operations fall under this category). These are referred to as concept plans (CONPLAN) and are much more general in nature but nonetheless are required to adhere to joint operational planning standards. All of NORTHCOM’s plans are currently categorized as CONPLANs. Once a plan is drafted, it is reviewed several times by a number of DOD stakeholders, primarily from the Joint Planning and Execution Community, which consists of a broad range of military stakeholders, from the Chairman of the Joint Chiefs of Staff to the military services, the combatant commands, and the major DOD agencies. These stakeholders provide input into all phases of planning, from mission analysis to the final detailed plan. In the last several years, DOD has begun to use what it refers to as an adaptive planning process, whereby major plans are reviewed much more often than in the past. All plans are now reviewed by DOD stakeholders every 6 months. Part of NORTHCOM’s responsibility is to create plans to address its role in various potential threats to the homeland, whether from potential enemy attack or a natural disaster. Because the potential threats are so broad, whether they involve terrorist attacks or potential natural disasters, the plans NORTHCOM was required to develop by DOD usually take the form of CONPLANs. Among the specific areas for which NORTHCOM prepares plans are chemical, biological, radiological, nuclear, and high-yield explosive (CBRNE) consequence management; pandemic influenza; and nuclear accident response. The specific contingencies for which NORTHCOM should plan are directed by the President and the Secretary of Defense. NORTHCOM follows several sets of strategies and guidance when planning for homeland defense and civil support. Homeland defense planning follows DOD guidance, such as the National Defense Strategy of the United States of America, the Unified Command Plan, and Contingency Planning Guidance. 
Civil support planning requires additional guidance. Because DOD is not the lead federal agency for civil support missions involving domestic emergencies, natural disasters, and similar events, NORTHCOM follows not only military guidance but also guidance prepared by the Homeland Security Council and DHS to frame its civil support planning, including the National Response Framework. To further guide planning efforts for all hazards, the Homeland Security Council and DHS—along with federal interagency partners and state and local homeland security agencies—created the national planning scenarios. The scenarios provide parameters for 15 highly plausible terrorist attack and natural disaster situations, such as the detonation of a nuclear device by terrorists or a major earthquake. The scenarios focus on the consequences that federal, state, and local first responders will have to address and are intended to illustrate the scope and magnitude of large-scale, catastrophic events for which the nation needs to be prepared. NORTHCOM prepares individual plans to cover its broad homeland defense and civil support missions as well as subsets of those missions. For example, while NORTHCOM has one major plan each for homeland defense and civil support, it also has plans for air defense and for CBRNE consequence management. NORTHCOM's plans provide its subordinate, component, and supporting commands and agencies with planning guidance, such as types of incidents to prepare for and what kinds of plans to prepare to support NORTHCOM's plans. NORTHCOM has completed—or is in the process of revising—all of its major plans. However, NORTHCOM does not regularly track or assess the required supporting plans from other DOD commands and agencies. This heightens the risk that NORTHCOM cannot properly assess whether the supporting organizations have adequately planned to assist the command when an event takes place. Further, although NORTHCOM plans adhere to military guidance in both content and structure, the command faces additional challenges in such areas as (1) identifying required civil support capabilities, (2) allocating capabilities (units, trained personnel, and equipment) to meet potential requirements, and (3) monitoring the readiness of forces delivering those capabilities. NORTHCOM and DOD have some risk mitigation efforts under way in each of these areas that partially address the challenges we found. However, the command could take additional steps to reduce the remaining level of risk to its ability to effectively achieve its mission. To date, NORTHCOM has completed nine major homeland defense and civil support plans required by the President, the Secretary of Defense, and DOD guidance, and is in the process of revising several of its plans in accordance with the DOD requirement to review plans every 6 months for potential revision, including its homeland defense plan. NORTHCOM officials told us that they have placed priority on completing all of their major plans over the last 2 years. In addition, NORTHCOM's plans are now undergoing review and consideration for major revision more often than when the command was first established. Table 1 lists NORTHCOM's required major plans and the status of each with estimated completion and revision dates where applicable. NORTHCOM has also anticipated that DOD will require a 10th plan—Strategic Communications—and has fully drafted a plan in advance of this guidance.
Although the majority of our review was focused on the two major homeland defense and civil support plans, we also reviewed each of the other plans and compared them to DOD’s established joint planning standards for concept plans as well as NORTHCOM’s own concept of operations for how it should plan for and conduct its missions. We found that the plans met DOD’s standards for completeness in accordance with DOD’s joint planning doctrine and adhered to NORTHCOM’s overall concept of operations. For example, the plans include the required concept, objectives, assumptions, and constraints sections that frame the rest of the plan. We also reviewed the assumptions listed in the plans for potential contradictions between one or more plans and found none. We did not, however, independently validate the assumptions in the plans. Some assumptions—such as assuming that adequate DOD forces would be available to execute a plan—seemed broad and had the potential to affect the entire plan if the assumption was proved invalid during a crisis. However, NORTHCOM planning officials told us that some broad assumptions are always necessary in order to even begin planning. They said that once a plan needs to be executed, the assumptions are reviewed again and the plan altered to account for an assumption that was determined to be invalid. We also found that NORTHCOM’s civil support plan adheres closely to the National Response Framework concept in that NORTHCOM is to provide support for civil authorities upon request by a lead federal agency. We also found that NORTHCOM’s plans incorporate 14 of the 15 national planning scenarios developed by the Homeland Security Council in order to guide federal agencies’ general planning and exercises. The one scenario not incorporated into NORTHCOM’s plans is the cyber attack planning scenario, which falls under U.S. Strategic Command’s area of responsibility. Table 2 summarizes each of the 15 planning scenarios and indicates where NORTHCOM planners have taken these scenarios into consideration in their plans. Because NORTHCOM officials have spent considerable time and effort in completing or revising their major plans, they have not focused adequately on the supporting plans that have been—or are to be—developed by other organizations within DOD to assist NORTHCOM. Like all CONPLANs, NORTHCOM’s plans require supporting plans from NORTHCOM’s subordinate and component commands as well as other DOD agencies to assist the responsible command—NORTHCOM—when an event occurs. Because NORTHCOM’s major plans are less detailed and focused than the operational plans of other combatant commands, these supporting plans are critical for providing the operational level detail that is otherwise lacking in the major plans. Supporting plans must also adhere to the same joint doctrine standards as the broader plans and should contain objectives, assumptions and constraints, and sections on such areas as command and control, task organization, intelligence, and logistics. 
Although there is no explicit DOD requirement that NORTHCOM systematically review and track supporting plans, DOD guidance on joint operation planning indicates that "in the absence of Joint Staff instructions to the contrary, the supported commander will review and approve supporting plans." Regardless of whether there is an explicit requirement, we believe it is prudent to perform these reviews to reduce the risk that supporting agencies have not adequately planned to support NORTHCOM when needed following a natural or man-made disaster. The number of supporting plans required varies with the type of major plan. For example, NORTHCOM's homeland defense plan required supporting plans from 25 commands and agencies, whereas the civil support plan required supporting plans from only 6 commands and agencies. Of the 6 supporting plans required by the civil support concept plan, NORTHCOM officials had 4 in their possession when we reviewed the plans at NORTHCOM headquarters. Of the 25 supporting plans required by NORTHCOM's homeland defense plan, NORTHCOM had only 3 at the time of our review. Some of the other 22 organizations expected to develop supporting plans for homeland defense are the Defense Information Systems Agency, Defense Intelligence Agency, Defense Threat Reduction Agency, and Defense Contract Management Agency. With the exception of the supporting plans from NORTHCOM's subordinate commands—such as Joint Task Force Alaska and Joint Task Force Civil Support—and from the component commands, for which they could provide copies, NORTHCOM officials could not report to us how many of the other supporting plans had been completed. As we report separately, NORTHCOM officials were uncertain about the status and completeness of the supporting plans that the homeland defense CONPLAN required NGB to coordinate with the states and forward to the command. We reviewed all the supporting plans NORTHCOM was able to locate for the Homeland Defense, Defense Support to Civil Authorities, and CBRNE Consequence Management plans, as well as several others we saw during visits to other commands and DOD organizations. We found that in general the supporting plans met the intent and objectives of the major strategic-level plans and had compatible assumptions. We did not, however, review the supporting plans to the degree NORTHCOM officials would have to in order to satisfy themselves that the plans meet the command's needs, nor did we independently validate the assumptions in the supporting plans. NORTHCOM officials acknowledged that because they had devoted most of their effort to completing and revising the major plans, until recently they had not devoted enough attention to the supporting plans. NORTHCOM officials told us that they are developing a process to track the status of subordinate commands' supporting plans. In fact, the officials provided us an update on the status of these supporting plans. But this did not include other DOD agencies, such as the Defense Threat Reduction Agency, Defense Intelligence Agency, and Defense Information Systems Agency, that are also supposed to be developing supporting plans for some of NORTHCOM's concept plans. Additionally, NORTHCOM officials told us that they were planning to start reviewing supporting plans in a manner similar to how DOD stakeholders review major plans.
As long as this approach encompasses all supporting plans, it could provide NORTHCOM planning and operations officials with a much more detailed analysis of the extent to which supporting plans meet their needs as well as help them identify potential planning gaps. Without knowledge about the completeness of supporting plans and the extent to which these plans address NORTHCOM's objectives, NORTHCOM officials face increased uncertainty about the extent of planning and preparedness of other DOD agencies if and when these agencies are called to respond. According to the strategic vision contained in NORTHCOM's concept of operations, NORTHCOM should facilitate the synchronization of national, state, and local assets and capabilities to defend the nation and support civilian authorities. One of the fundamental elements of operational planning is determining the capability requirements for the mission to be performed. Because NORTHCOM's plans are broader CONPLANs rather than more detailed OPLANs, they are not focused on specific scenarios and discrete sets of required capabilities needed to accomplish objectives. Without an understanding of the capabilities necessary for DOD to conduct an operation, it is more difficult to plan in advance for the types, numbers, and timing of capabilities (trained personnel and equipment) to actually conduct an operation. For NORTHCOM's homeland defense mission, the required capabilities are based on an assessment of threats and a number of other factors that NORTHCOM and other DOD commands and organizations evaluate. For NORTHCOM's civil support mission, the requirements the command faces are established by the needs of the federal, state, and local agencies and organizations that DOD would be supporting in an actual event. Given the diverse environment within NORTHCOM's area of responsibility, its civil support role varies by area, incident, and other factors, which makes NORTHCOM's ability to know its capability requirements for any given civil support operation uncertain. Further, NORTHCOM officials told us that they do not have access to enough detailed information from DHS or from the states to know what capabilities exist at the state level and the extent to which there are capability gaps. DHS has reported on the weaknesses in state and federal emergency plans both in terms of the adequacy of the plans themselves and the lack of information on required capabilities. As we report in a separate letter, NORTHCOM has also not systematically reviewed state emergency plans in order to obtain detailed information about the specific challenges it may face in conducting homeland defense or civil support operations. Coordination between NORTHCOM, DHS, NGB, and the states is therefore important for emergency planning, particularly for civil support operations. NORTHCOM officials told us that understanding National Guard capabilities is also problematic. For example, as we have reported, neither DOD nor the states have fully determined the National Guard's requirements for civil support operations in the United States. The National Guard serves as a critical portion of the response to a disaster, whether in its normal role under the direct command of a state governor or as part of a federal response once the President has made a determination to federalize the Guard. In either case, uncertainty about the National Guard's civil support capabilities increases the risk to the adequacy of NORTHCOM's and DOD's overall civil support planning effort.
In 2006, Congress required that DOD develop and maintain a database that includes the types of emergency response capabilities DOD may be able to provide in support of the National Response Framework’s emergency support functions and the types of emergency response capabilities each state’s National Guard may be able to provide in response to a domestic natural or man-made disaster. DOD is also required to identify in this database the specific units that are able to provide these capabilities. Also, in 2006, Congress required FEMA to accelerate the completion of an inventory of federal response capabilities and to develop a list of organizations and functions within DOD that may be used to provide support to civil authorities during natural or man-made disasters. FEMA is still developing this list, and DOD is still developing the required database. In January 2008, Congress required DOD to work with DHS to determine the military-unique capabilities DOD needs to provide for civil support operations and to prepare a plan to provide funds and resources to maintain existing military-unique civil support capabilities or any additional capabilities required for homeland defense and civil support missions. In addition to descriptions of the emergency support functions, the annexes to the previous National Response Plan—such as the catastrophic incident annex—contain information about agency roles and responsibilities as well as descriptions of capabilities. These annexes are being revised as part of the new National Response Framework. Until these efforts are completed and are coordinated with similar information from the states, there remains a gap in knowledge about what capabilities exist at all levels for responding to natural and man-made disasters. This, in turn, limits NORTHCOM’s ability to fully identify the civil support requirements for DOD forces. NORTHCOM and DOD have taken some steps to mitigate the uncertainty in civil support requirements. NORTHCOM officials reported to us that through analyzing past disasters, such as Hurricane Katrina, and potential disasters—such as those represented by the national planning scenarios— they can reasonably determine the types of capabilities necessary to support civil authorities. NORTHCOM officials said that this allows them to anticipate the needs of states and local authorities in the event of a disaster to some extent but that they can only “lean forward” so far without infringing on the intent of the National Response Framework or the prerogatives of the state governments. NORTHCOM and the Joint Staff are also assessing NORTHCOM’s major plans (including Homeland Defense and Defense Support of Civil Authorities) in order to determine where the potential gaps in required capabilities may be and what specific military capabilities are potentially required to address them. This may better inform capabilities requirements and resource decisions. NORTHCOM has also worked with FEMA and DOD officials to develop prescripted mission assignments, which are descriptions of a set of capabilities civil authorities might need from DOD in an emergency and are written in such a way as to provide a common understanding of a capability. NORTHCOM officials told us that the intent was to avoid requests for specific DOD equipment that may or may not be suitable or available to meet the request and to base requests on capabilities a requesting agency needs that could potentially be addressed by a broader range of DOD assets. 
For example, FEMA might request the capability to move 40 metric tons by air rather than requesting a specific aircraft. This enables DOD to apply a wide range of resources for solving a problem and reduces confusion associated with varying requirements and terminology across agencies. These mission assignments are designed to leverage DOD's areas of expertise and capabilities where civil agencies typically fall short. Appendix II shows the 25 prescripted mission assignments that NORTHCOM and DOD have worked out with FEMA. These mitigation efforts help reduce the uncertainty NORTHCOM faces in determining requirements for civil support planning. But only a broader effort by NORTHCOM, DOD, DHS, and the states to comprehensively assess capabilities and capability gaps will help all stakeholders understand the true extent of requirements in order to plan for natural and man-made disasters in the United States. One of the major challenges NORTHCOM faces in planning for and conducting both homeland defense and civil support operations is ensuring that it has adequate capabilities assigned to conduct those missions as required. The major combatant commands, such as U.S. European Command and U.S. Pacific Command, normally have forces allocated to their operational control on a regular basis to meet their general capability requirements and to perform other missions, such as demonstrations of military presence in support of U.S. foreign policy objectives. Further, the OPLANs prepared by combatant commands normally have lists that detail which military units will respond to the plan, if needed, and the timing of that deployment. DOD refers to this information as time-phased force deployment data. The combination of regularly assigned forces and force deployment lists associated with the more detailed operations plans provides combatant commanders with a reasonable level of assurance that sufficient forces will be available to execute a plan if necessary and allows the commander to monitor the readiness of the units assigned to the respective area of responsibility or specific plan. Since NORTHCOM was established in October 2002, DOD has routinely considered the regular assignment of forces to the combatant commands in what DOD refers to as a "Forces For Unified Commands" document. However, despite the priority placed on homeland defense in the National Security Strategy, National Defense Strategy, and other DOD strategic guidance, DOD has routinely assigned only air defense and supporting forces to NORTHCOM. A contributing factor may be that the pace and scope of ongoing operations in Iraq, Afghanistan, and elsewhere in the world have severely limited the number and types of units available to respond to missions in the homeland. The assignment of forces to combatant commands provides commanders with a means to know which specific military forces are committed to that area of responsibility and, conversely, allows commanders to perform risk assessments if those forces must be committed elsewhere. In addition to lacking regularly assigned forces, NORTHCOM officials told us that their plans usually do not have lists that detail the military units that will be used because the plans are meant to cover a less-specific and broader range of threats, rather than specific scenarios. Only one NORTHCOM plan—the CBRNE Consequence Management plan—had a force deployment list at the time of our review of the plans.
NORTHCOM has since developed force deployment lists as part of the revised homeland defense plan but not for the civil support plan. NORTHCOM officials told us that they created the CBRNE consequence management list in order to stress the importance of providing forces to the CBRNE mission. The force deployment list lays out the capabilities for what DOD calls the CBRNE Consequence Management Response Force (CCMRF), which is intended to be a series of separate units totaling roughly 15,000 personnel to provide initial response assistance to civil authorities in the event of a major CBRNE incident in the country. We reported previously on the lack of adequate training, equipment, and availability of active and reserve chemical and biological units and the potential difficulty DOD faced in meeting NORTHCOM’s CBRNE requirements. Despite being the only set of capabilities dedicated to a NORTHCOM civil support plan, the CCMRF has never been fully manned and equipped by DOD because many of the units that would make up the force have been deployed to their wartime missions or because of other availability or sourcing issues. DOD and National Guard officials are currently negotiating a plan whereby Guard units may provide the majority of CCMRF capabilities for a certain period until the Active Army can resume responsibility. However, lack of agreement between DOD and National Guard Bureau stakeholders on sources of funding and command and control issues continues to delay the effort. While a force deployment list does not guarantee that the appropriate units, trained personnel, and equipment will be available to execute a military plan, such a list provides a known set of capabilities against which to measure readiness and assess risk if all or part of the forces on the list are unavailable. None of NORTHCOM’s other civil support plans have force deployment lists, which limits NORTHCOM’s ability to know which military units may respond to its homeland defense or civil support missions if the need arises. To help mitigate the uncertainties caused by the lack of forces being assigned to execute NORTHCOM’s plans, NORTHCOM and DOD have developed a series of standing “execute orders” in the homeland defense and civil support areas. These orders identify the general types and numbers of forces necessary to execute missions in such areas as air and maritime homeland defense and defense support of civil authorities. One example is the domestic air defense order associated with Operation Noble Eagle. Additionally, during periods of heightened specific threats, such as the yearly hurricane season, NORTHCOM and the Joint Staff have prepared temporary execute orders that detail more specific military forces that can be called upon to meet an emerging NORTHCOM need to support civil authorities. The execute orders serve as the gateway to the “request for forces” process whereby NORTHCOM requests forces from U.S. Joint Forces Command, through the Joint Staff, and Joint Forces Command assigns specific military forces from the services to meet the specific requirement, if possible. The orders also allow NORTHCOM to place units on notice to prepare to deploy for a short time in advance of their actual assignment to NORTHCOM. According to Joint Forces Command and NORTHCOM officials, about 40,000 military personnel are associated with all of NORTHCOM’s execute orders and the CBRNE Consequence Management plan. 
However, with the exception of the dedicated homeland defense orders—such as Operation Noble Eagle—the CCMRF deployment list and civil support execute orders have very few units actually sourced to them. This creates an increased level of uncertainty about whether the appropriate number of properly trained personnel and the correct equipment will be available when a plan needs to be executed. NORTHCOM officials are concerned about the high number of unsourced units and the corresponding level of uncertainty about the availability of appropriate military forces to meet their homeland defense and civil support needs. It should be noted, however, that we found no instances where Joint Forces Command could not meet NORTHCOM's operational needs for an actual homeland defense or civil support mission. Fortunately, NORTHCOM's homeland defense and civil support operations have mostly been manageable and not large-scale events. For example, in addition to Operation Noble Eagle, which NORTHCOM carries out every day, the command conducts anticipated and unanticipated operations in support of civil authorities, such as the responses to the Minneapolis bridge collapse and Hurricane Dean in August 2007 and to the California wildfires in October 2007. NORTHCOM officials told us that the execute order process has provided them some limited measure of assurance that adequate military forces will be available for their homeland defense and civil support plans. However, the absence of regularly assigned forces in NORTHCOM's area of responsibility and the lack of units specifically identified to execute NORTHCOM's plans may increase the level of risk to homeland defense or civil support operations in terms of the availability of a sufficient number of personnel with the appropriate level of training and equipment for conducting the domestic mission. NORTHCOM has difficulty monitoring the readiness of individual military units because, in part, few requirements or units that may respond to a request for civil support have been identified. In contrast, through its planning process for homeland defense, NORTHCOM has determined the forces it needs for this mission and, through the services, monitors the readiness of these forces. DOD normally measures the readiness of military units by (1) assigning them to conduct missions associated with specific plans and (2) using lists of mission-essential tasks that correlate to the actual mission they would perform. The degree to which units have the numbers of trained personnel and the equipment necessary to accomplish those mission-essential tasks serves as the overall measure of a unit's readiness. According to NORTHCOM, Joint Forces Command, Joint Staff, and U.S. Army Forces Command officials, DOD generally assumes that a unit capable of performing its military mission is also capable of performing a civil support mission, but this may not always be true. Neither NORTHCOM nor the military services have developed mission-essential tasks for civil support missions. We have reported on the mismatch between assessments of readiness based solely on wartime missions and the requirements of domestic civil support missions. Whereas homeland defense missions in and around the United States would be similar to traditional wartime missions, those same mission tasks do not necessarily provide a complete picture of readiness for a domestic civil support mission. As a result, DOD does not have a direct method to measure the readiness of units for the civil support mission.
DOD officials told us that it is often possible for a unit to be considered not ready for its wartime mission but be able to execute a civil support mission. For example, a U.S. Army air defense unit whose surface-to-air missile launchers are still overseas or undergoing depot repair is not considered ready to conduct its wartime mission. However, to the extent that personnel, trucks, and other equipment remain with the unit, it may be ready to conduct a civil support mission, such as delivering supplies to a disaster area. This is not captured in DOD's readiness system. Further, the lack of mission-essential tasks for the range of civil support missions leads to a potential gap in DOD's knowledge of whether sufficient trained personnel and equipment are available. For example, NORTHCOM's civil disturbance plan assumes that nonlethal equipment and methods would be necessary and that the forces required to conduct such operations have been trained in nonlethal methods. But without a set of mission tasks against which to measure unit readiness, there is no objective means of determining if military units can meet these tasks. Because at the time of our review only one of NORTHCOM's major plans had actual units assigned to it (CBRNE Consequence Management), NORTHCOM officials were unable to monitor readiness of units that may be asked to respond to other plans, even if there were specific civil support-related mission tasks. We have work under way reviewing DOD's Readiness Reporting System, and we did not assess the accuracy of that system as part of this review. However, we asked NORTHCOM officials to show us the extent to which they could use DOD's readiness systems to monitor readiness for both the command's homeland defense and civil support missions. For the ability to respond to potential CBRNE attacks, NORTHCOM has developed mission-essential tasks for the CCMRF. However, Joint Staff and National Guard officials told us that they estimated that the wartime military tasks of the units met only about 70 percent of the CCMRF's total mission, which further indicates the mismatch between the wartime tasks a unit may face and the tasks it may face in a domestic, nonwartime environment. Joint Task Force Civil Support, NORTHCOM's subordinate command for CBRNE consequence management, routinely uses the CCMRF's mission-essential tasks, the existing DOD readiness system, and direct interaction with Joint Forces Command and Army officials to monitor the readiness of CCMRF forces. Even with this effort, NORTHCOM and Joint Task Force Civil Support officials told us that it is difficult to track readiness because, as we indicated earlier, so few of the units are actually filled with the personnel and equipment necessary. Nonetheless, the fact that the CBRNE forces have mission tasks against which to measure readiness in the existing system provides a level of knowledge about the overall state of readiness to execute the CBRNE plan. This, in turn, provides the NORTHCOM Commander and DOD with a clearer picture of the risk they face in that area. Because no mission tasks exist for general civil support missions, NORTHCOM and DOD face greater uncertainty about their ability to execute these plans. Mission-essential tasks are also critical guides for training military units for their missions and for conducting and evaluating exercises. NORTHCOM conducts two major exercises each year that include both homeland defense and civil support elements.
The command also participates in other commands’ live exercises as well as tabletop simulations of various homeland defense and civil support operations. Further, NORTHCOM has a system for incorporating lessons learned from training exercises into plans and future training exercises. The system has a good structure for submitting and processing lessons, including multiple layers of review to assess the validity of lessons and the assignment of individuals with the responsibility of managing and addressing lessons. NORTHCOM officials believe that the system is adequate, and they continue to seek ways to improve the process. Ensuring that appropriate mission-essential tasks are associated with each of the missions for which NORTHCOM is responsible would further help NORTHCOM officials evaluate exercises and actual operations and incorporate lessons learned into future exercises and plan revisions. The command would also be in a better position to conduct meaningful analysis to identify recurring lessons and understand the causes of various systemic issues. This, in turn, would allow NORTHCOM and DOD to identify those areas where increased effort—and possibly resources—may be required. To mitigate the uncertainties in readiness for civil support operations, NORTHCOM has worked with Joint Forces Command and the military services in advance of some potential incidents, such as hurricanes and wildfires, to gain a better understanding of what units were likely to be assigned, if necessary. This interaction has allowed NORTHCOM and other DOD stakeholders to directly monitor the personnel and equipment status of military units to determine if they would be prepared to adequately respond to a civil support mission. For the remainder of NORTHCOM’s potential civil support missions, NORTHCOM still lacks an objective means to determine if the units that will be conducting civil support operations in fact have the capabilities needed to fully conduct these missions. In December 2007, the President issued an annex to the 2003 Homeland Security Presidential Directive 8 that establishes a standard and comprehensive approach to national planning for homeland security. Included in the new instructions is a requirement that the federal government more closely integrate federal, state, local, and tribal plans with respect to capability assessments. This may further assist NORTHCOM in more accurately determining its capability requirements for civil support missions. Among the new requirements was also a series of cascading plans at the strategic, operational, and tactical levels. For example, all relevant federal agencies are now required to prepare more detailed OPLANs with respect to their specific homeland security missions. Thus far, NORTHCOM has been required only to prepare less detailed CONPLANs. The definition for OPLANs in the new guidance includes a requirement that such a plan “identifies detailed resource, personnel and asset allocations.” This is similar to the level of detail DOD requires in its OPLANs, including the force deployment lists we discussed. If these comprehensive national planning processes are pursued by DOD, in coordination with DHS, NORTHCOM may be able to further address some of the challenges and gaps we highlight. NORTHCOM has an adequate number of planning personnel, and the command is pursuing opportunities to expand the experience and training for staff needed to perform the command’s planning function. 
While the unique characteristics associated with a domestic military command present challenges, NORTHCOM officials address these circumstances by integrating National Guard and Coast Guard personnel with NORTHCOM staff. NORTHCOM, independently and with other organizations, is also developing educational opportunities that address the challenges associated with the interagency and state/federal environment that planners face. We compared the numbers and general qualifications of NORTHCOM's planning staff with those of other combatant commands as a way of gaining a rough understanding of what NORTHCOM's staff looks like in comparison to commands that have been established for a longer period of time. NORTHCOM's planning staff is assigned at over 96 percent of the command's authorized staffing level. These staff members include all headquarters staff who have some form of planning function and not just the staff of the plans directorates or those personnel with specific designations as planners. As shown in table 3, with the exception of U.S. Central Command, NORTHCOM has a greater number of staff it considers to be planners and was staffed at a higher percentage of its authorization than all other combatant commands responding to our information requests. We did not independently validate NORTHCOM's requirements for planning personnel. However, NORTHCOM officials said that they believe they have an adequate number of planning personnel. Further, NORTHCOM has been conducting an ongoing assessment of its overall manpower needs and is evaluating the extent to which changes in requirements for personnel may be needed. NORTHCOM officials stated that partially because of the need to support other operations, such as ongoing military operations overseas, the command attempts to maximize the use of civilian staff in its workforce to maintain continuity and consistency. Civilian staff provides an institutional knowledge base and experience level that complements the capabilities of military officers who rotate through the command's directorates. Over one-half of the command's planning staff is civilian or contractor personnel. As shown in table 4, two other commands in our review, U.S. Central Command and U.S. Pacific Command, also rely heavily on civilian or contract personnel. The military personnel who serve as NORTHCOM planners receive basic planning-related training similar to that of planners in other combatant commands. DOD and the services provide educational opportunities for members of the U.S. Armed Forces, international officers, and federal government civilians. These opportunities provide a broad body of knowledge that enables students to develop expertise in the art and science of war. Many of NORTHCOM's military planners have completed some of these courses. A number of these courses are also offered to civilian planning personnel. To accomplish its homeland defense and civil support missions, NORTHCOM must plan for and interact with other federal, state, and territorial government agencies in addition to Canada and Mexico. The need to plan for and conduct operations (1) within the United States and (2) in support of other federal agencies, 49 state governments, and Canada and Mexico presents a challenge to most planners who have functioned solely in a military environment. NORTHCOM has sought to address this challenge by integrating personnel from the National Guard and U.S. Coast Guard into NORTHCOM's headquarters staff.
These personnel have experience working in the state environment and are incorporated into most, if not all, of the NORTHCOM directorates that conduct some form of operational planning. Thirty-six National Guard and 22 U.S. Coast Guard personnel are stationed at NORTHCOM. These personnel provide command planners and operations personnel with co-workers who have experience planning for and conducting operations with other federal and state agencies. In January 2008, Congress required the Chairman of the Joint Chiefs of Staff to review the military and civilian positions, job descriptions, and assignments at NORTHCOM. The goal is to determine the feasibility of increasing the number of reserve component military personnel or civilian staff with experience in homeland defense and civil support at NORTHCOM. Having an adequate number of properly trained personnel to ensure that missions are successfully planned is a decisive factor in the success of any mission. NORTHCOM officials have been attempting to establish and maintain a cadre of personnel in the active military with knowledge and experience in NORTHCOM planning, homeland defense, civil support, and interagency planning and coordination that go beyond the basic level training the military provides in joint planning. These efforts extend from the level of basic orientation training all the way to programs at the graduate level. NORTHCOM planners are required to complete an orientation course that serves as a “crosswalk” between DOD’s homeland defense and civil support plans and the plans of their agency partners. The orientation course also provides students with a better understanding of DOD policy regarding the protection of the homeland. DOD officials told us that additional such planning courses are now offered at other DOD schools, such as the Army’s Command and General Staff College and School of Advanced Military Studies. As recommended in the 2006 Quadrennial Defense Review Report, NORTHCOM has taken steps to create training programs and partner with other agencies and private institutions. The goal is to develop educational opportunities for interagency and state/federal environment planners to inform them of other agencies’ homeland security responsibilities to improve overall cooperation and coordination. For example, NORTHCOM has developed a course for DOD and interagency personnel that focuses on support to civil authorities. While the course does not directly address the detailed aspects of planning, it provides an overview of DOD and other agencies’ responsibility for homeland security. Officials from the Joint Forces Staff College believe this is a valuable course and they are considering requiring students to complete it before they can take certain other courses at the college. In addition, NORTHCOM has developed a training curriculum for each of its planning personnel. NORTHCOM officials stated that each planner’s progress in completing the curriculum is automatically tracked to ensure timely completion. Several of the courses in the curriculum must be completed within specific time periods. 
To further expand the educational opportunities for its own staff as well as staff from agencies across the federal government, NORTHCOM has also partnered with the University of Colorado at Colorado Springs to develop the Center for Homeland Security, located on the university’s campus, which provides research and educational capabilities to meet specific needs regarding protection of the homeland. One of the accomplishments of the center is the creation of several programs of study in homeland defense, including undergraduate and graduate certificates in homeland security and homeland defense. According to a senior official with the center, the four courses required for the graduate certificate can also be applied toward a master of business administration and a master of public affairs. The center, in cooperation with several of its partners, including NORTHCOM, is also in the process of developing other educational programs, such as a master of arts and a doctoral program in homeland security. According to NORTHCOM officials, a cooperative effort among the University of Colorado at Colorado Springs, the Naval Postgraduate School, and NORTHCOM helped found the Homeland Security/Defense Education Consortium, which is a network of teaching and research institutions focused on promoting education, research, and cooperation related to and supporting the homeland security/defense mission. The consortium conducts two symposia annually, one at NORTHCOM and a second at a FEMA location. The Naval Postgraduate School also has a master’s degree program through its Center for Homeland Defense and Security. This program, designed in cooperation with FEMA, includes strategy development, organizational, planning, and interagency coordination aspects. NORTHCOM personnel have started to take advantage of these programs on a case-by-case basis, but there are no command requirements for NORTHCOM staff to attend any of these courses or programs. NORTHCOM’s efforts to provide additional training and education for its staff should help the command expand its experience in planning and conducting operations with partners at the international, federal, and state levels. NORTHCOM officials have recognized the need for such education opportunities at all levels for their own staff as well as for other military and civilian personnel. At some point, NORTHCOM may be in a position to require certain prerequisites in this area for military or civilian staff who may be considered for assignment to the command. NORTHCOM has taken actions to improve the coordination of its homeland defense and civil support plans and operations with federal agencies. Such coordination is important for ensuring proper planning in advance of an attack or a natural disaster and for ensuring that operations proceed as smoothly as possible if they need to be conducted. However, NORTHCOM lacks formal guidance to coordinate its planning effort with its agency partners. This results in uncertainty about which planning coordination efforts will be continued or have been agreed to by higher authorities and an increased risk that interagency planning will not be done effectively. We found several areas in which NORTHCOM has taken steps to improve coordination with other agencies and organizations, many resulting from the lessons learned following Hurricane Katrina.
Coordination is important not just for interagency planning but also for ensuring that NORTHCOM and its agency partners work together effectively when an incident actually occurs. For example, NORTHCOM created an Interagency Coordination Directorate in 2002 to assist in its collaboration efforts. Today, 40 agencies and organizations are represented at NORTHCOM, including a senior executive official from DHS as well as officials from FEMA, the Federal Bureau of Investigation, and the Central Intelligence Agency. The directorate is designed to help build effective relationships by facilitating, coordinating, and synchronizing information sharing across organizational boundaries. NORTHCOM and U.S. Southern Command are the only combatant commands with directorates dedicated solely to interagency coordination. Table 5 shows the agencies currently represented at NORTHCOM. The presence of agency representatives provides a regular opportunity for direct interaction between them and NORTHCOM staff. NORTHCOM and other agency officials with whom we spoke agreed that this level of regular contact is beneficial not only for coordinating plans in advance but also for the more immediate needs of coordination when an event actually occurs. Such agency representatives should therefore have the experience to provide an effective link to their parent agencies and possess the appropriate level of access to agency leadership in order to facilitate interagency decision-making. When a major incident occurs, the agency representatives, known as the Interagency Coordination Center, become a direct adjunct to the NORTHCOM Commander’s battle staff, assisting the command in its immediate crisis planning and providing a direct link to their parent agencies. The Interagency Directorate also administers NORAD-NORTHCOM’s Joint Interagency Coordination Group (JIACG), which is composed primarily of the 40 resident agency representatives on the command’s staff who are experts in interagency planning and operations. The JIACG’s role is to coordinate with civilian federal agency partners to facilitate interagency operational planning in contingency operations. All combatant commands are establishing JIACGs. The JIACG supports day-to-day planning and advises NORTHCOM planners regarding civilian agency operations, capabilities, and limitations. Further, the JIACG provides the command with day-to-day knowledge of the interagency situation and links directly with agency partners at the command and in other locations when an operation is necessary. The JIACG also conducts focused planning on specific issues. For example, the group met with officials from the Centers for Disease Control and Prevention and the Department of Health and Human Services in August 2006 to coordinate federal efforts for responding to a potential influenza pandemic. The JIACG also formed a working group to integrate private sector capabilities and interests into NORTHCOM plans and operations as appropriate. Specifically, the group’s objectives were to determine how to provide NORTHCOM with private sector information regarding facilities and operations, achieve coordination and cooperation with the private sector, and gain and maintain awareness of technological initiatives developed in the private sector. The JIACG also formed working groups for law enforcement issues, earthquakes, and prescripted mission assignments.
According to FEMA’s Director, one of the most important interagency planning tools developed as a result of the lessons learned during Hurricane Katrina is the prescripted mission assignments discussed earlier. NORTHCOM collaborated with FEMA and other agencies to identify the most likely tasks DOD would be asked to fulfill and drafted generic mission assignments for those tasks in terms of capability requirements rather than specific resources. Twenty-five prescripted mission assignments are included in NORTHCOM’s standing Defense Support for Civil Authorities Execute Order. These mission assignments also include defense coordinating officers (DCO), who are located in each of FEMA’s 10 regional offices (see fig. 3). Officials from several agencies told us that locating the DCOs in the FEMA regions and placing greater emphasis on the DCOs’ missions have enhanced interagency coordination, particularly with states. The DCOs are senior military officers with joint experience and training on the National Response Framework, defense support to civil authorities, and DHS’s National Incident Management System. They are responsible for assisting civil authorities, when requested by FEMA, by providing liaison support and capabilities requirements validation. DCOs serve as single points of contact for state, local, and other federal authorities that need DOD support. DCOs work closely with federal, state, and local officials to determine what unique DOD capabilities are necessary and can be used to help mitigate the effects of a natural or man-made disaster. For example, during the recent California wildfires, NORTHCOM’s subordinate command, Army Forces North, deployed the Region IX DCO to support the Joint Field Office in Pasadena, California, and assess and coordinate defense support of civil authorities to FEMA. Based on the requirements identified by state and federal officials in consultation with the DCO, DOD and the National Guard deployed six aircraft equipped with the Modular Airborne Firefighting System to California to assist in fighting the wildfires. U.S. NORTHCOM has also improved interagency coordination through its involvement in hurricane preparation with a wide range of state and federal partners, including state adjutants general, FEMA, NGB, and state and local emergency managers. NORTHCOM facilitates weekly hurricane teleconferences throughout the hurricane season, which lasts from June to November every year, to provide the opportunity for agencies to discuss potential storms; resources available in the affected area as well as through other sources, such as the Emergency Management Assistance Compact (EMAC) or FEMA; and potential needs or unique capabilities that DOD may be asked to provide. As a result of this frequent interaction, NORTHCOM, DHS, and state officials believe the command has begun to build more productive and effective relationships with the hurricane states and participating agencies. For example, in anticipation of Hurricane Dean being upgraded from a tropical storm in August 2007, at FEMA’s request NORTHCOM deployed a DCO and supporting team to the Caribbean in preparation for landfall. The DCO was prepared to coordinate requests for military assistance and resources and provide direct support to federal, state, and local agencies responding to the incident.
In addition to efforts to coordinate with federal agencies and organizations, NORTHCOM recently began efforts to increase coordination with private sector businesses and nongovernmental organizations in planning for and responding to disasters to help NORTHCOM better focus resources and ensure that efforts are not duplicated. For example, during Hurricane Katrina, Wal-Mart was able to deliver bottled water to some locations more quickly than federal agencies could. Since many of NORTHCOM’s coordination efforts with nongovernmental organizations are recent, it is too soon to determine how successful they will be. Despite the steps that NORTHCOM has taken to improve federal interagency coordination, we found that it lacks formalized procedures— such as memorandums of understanding or charters—to ensure that agreements or arrangements made between the command and agency representatives can be relied on for planning purposes. As we have reported in the past, key practices that can enhance and sustain interagency planning coordination efforts include—among others— establishing mutually reinforcing or joint strategies, agreeing on roles and responsibilities, and identifying and addressing needs by leveraging resources. We also reported that interagency coordination can be enhanced by articulating agreements in formal documents, such as a memorandum of understanding, interagency guidance, or interagency planning document, signed by senior officials in the respective agencies. DOD’s adaptive planning—that is, the joint capability to create and revise plans rapidly and systematically, as circumstances require—includes interagency coordination as a key part of the plan development process. Further, the nature of NORTHCOM’s homeland defense and civil support missions requires interagency coordination and support throughout all levels of planning and operations. This is particularly important since so many government agencies share the responsibility to ensure an effective response to disasters such as Hurricane Katrina. It is therefore crucial that DOD—through NORTHCOM—plan and coordinate thoroughly with all relevant federal agencies. NORTHCOM planners have achieved some success in coordinating NORTHCOM’s homeland defense plan with an Incident Management Planning Team (IMPT), an interagency team created by DHS to provide contingency and crisis action incident management planning based on the 15 national planning scenarios. However, the planners told us that their successful collaboration with the IMPT is largely because of the dedicated personalities involved. For example, NORTHCOM planners have informally instituted workshops and biweekly teleconferences with the IMPT core and on-call groups to review NORTHCOM’s homeland defense plan, as well as to discuss the overarching objectives of homeland defense and security. NORTHCOM officials told us that the IMPT offers a unique avenue of coordination direct to various agency partners and has helped to break down institutional barriers by promoting more constructive relationships between the agencies involved. However, without a formal charter or memorandum of understanding that institutionalizes the structure for integrated interagency planning, there is a risk that these efforts to coordinate with agency partners will not continue when the current planning staff move to their next assignments. 
Further, these and other coordination efforts do not have mechanisms for obtaining parent agency approval of agreements reached, and it is unclear what will be done with the results of their efforts. Consequently, many otherwise valuable interagency efforts may not be sufficiently supported by one or more participating agencies, and key agency staff can be confused about which coordination mechanisms serve a particular function. As part of the new Homeland Security Presidential Directive annex on national planning, DHS is required to coordinate with the heads of other federal agencies and develop an integrated planning system. This planning system is required to (1) provide common processes for developing plans; (2) serve to implement phase one of DHS’s Homeland Security; and (3) include the following: national planning doctrine and planning guidance, instruction, and process to ensure consistent planning across the federal government; a mechanism that provides for concept development to identify and analyze the mission and potential courses of action; a description of the process that allows for plan refinement and proper execution to reflect developments in risk, capabilities, or policies, as well as to incorporate lessons learned from exercises and actual events; a description of the process that links regional, state, local, and tribal plans, planning cycles, and processes and allows these plans to inform the development of federal plans; a process for fostering vertical and horizontal integration of federal, state, local, and tribal plans that allows for state, local, and tribal capability assessments to feed into federal plans; and a guide for all-hazards planning, with comprehensive, practical guidance and instruction on fundamental planning principles that can be used at federal, state, local, and tribal levels to assist the planning process. Such an integrated planning system, if developed and institutionalized across the federal government in coordination with state and local governments, should further address the interagency coordination gaps we identified. After being in operation for over 5 years, NORTHCOM has begun to establish itself as a major combatant command and plan for its role in leading homeland defense operations and assisting civil authorities in the event of major disasters. NORTHCOM has developed, refined, and is now revising a body of major homeland defense and civil support plans. Nonetheless, NORTHCOM’s limited progress in adequately tracking and assessing the supporting plans necessary to carry out homeland defense and civil support operations introduces increased risk in the planning process. The review process NORTHCOM officials told us they are developing to track and assess supporting plans from other commands and agencies should help them close this gap, but only if their process is consistently applied and includes supporting plans from all commands, organizations, and agencies required to submit them. Further, the considerable challenges NORTHCOM faces in planning for and conducting homeland defense and civil support missions are exacerbated by decisions DOD and the command have made. DOD’s decision not to assign regular forces to NORTHCOM, the decision not to associate specific military capabilities and units with NORTHCOM’s plans, and the decision not to develop mission-essential tasks for civil support missions each introduce increased uncertainty into NORTHCOM’s homeland defense and civil support planning efforts.
When their compounding effects are considered together, the risk to NORTHCOM’s planning effort increases even further. To some degree, NORTHCOM will always face challenges and risk in planning because it has to be prepared for a wide variety of incidents that can range from a regional flood to a catastrophic nuclear incident to a widespread terrorist attack. The capabilities allocation and other planning challenges we discuss can be further addressed, but there is no guarantee that this will compensate for the scarcity of units and equipment caused by the pace of ongoing operations overseas. However, addressing the planning gaps we identified would give NORTHCOM and DOD a much more accurate understanding of the risk associated with homeland defense and civil support operations in the United States. Such risk mitigation efforts have recently been required as part of the President’s and DHS’s national preparedness guidance on national planning, and these requirements provide an opportunity for DOD and NORTHCOM to address the gaps we identified. NORTHCOM’s federal interagency coordination efforts have helped address some of the uncertainty in the homeland defense and civil support planning process and have improved NORTHCOM’s ability to coordinate in the event of actual incidents. This is important because responding to a major disaster in the United States—natural or man-made—is a shared responsibility of many government agencies with states often requiring federal assistance from DHS and DOD. Without clear guidance and procedures on interagency roles and responsibilities across the federal government and an understanding about which interagency planning efforts or coordination mechanisms are authoritative, the multiple interagency efforts that have been ongoing might not meet their potential for integrating operational planning dealing with all threats to the homeland, natural or man-made. If the integrated planning system required by the President’s new homeland security guidance is developed and institutionalized across the federal government in coordination with state and local governments, it should further assist NORTHCOM and DOD in addressing the interagency coordination gaps we identified. To help NORTHCOM reduce the level of risk to its homeland defense and civil support planning efforts, in conjunction with the new national planning requirements of the National Response Framework and the national planning annex to Homeland Security Presidential Directive 8, we are making three recommendations: We recommend that the Secretary of Defense direct the Commander of NORTHCOM to complete the process to track the status of all supporting plans, coordinate the completion of those plans by other commands and agencies, and assess the suitability of those plans to meet the intent and objectives of NORTHCOM’s major plans. Given the priority DOD places on homeland defense, we recommend that the Secretary of Defense assign forces to NORTHCOM—as is done for other combatant commands—as well as require NORTHCOM to develop dedicated time-phased force deployment data lists for each of its major plans. We recommend that the Secretary of Defense direct the Commander of NORTHCOM, in consultation and coordination with the services, to develop mission-essential tasks for its civil support plans.
Individual units required for these missions should be identified, and these mission-essential tasks should be included as part of DOD’s readiness assessment systems in order to permit consistent tracking of readiness for specific elements of NORTHCOM’s plans. To help NORTHCOM and DOD better integrate their operational planning practices into the interagency and national preparedness structure, we recommend that the Secretary of Defense, in consultation with the Commander of NORTHCOM and other appropriate federal agencies, develop clear guidance and procedures for interagency planning efforts, including appropriate memorandums of understanding and charters for interagency planning groups. This should be done in conjunction with the integrated planning system required in the national planning annex to Homeland Security Presidential Directive 8. In comments on a draft of this report, DOD generally agreed with the intent of our recommendations and discussed steps it is taking and planning to take to address these recommendations. DOD also provided technical comments, which we have incorporated into the report where appropriate. In response to our recommendation that NORTHCOM complete the process to track the status of supporting plans, coordinate the completion of those plans by other commands and agencies, and assess the suitability of those plans to meet the intent and objectives of NORTHCOM’s major plans, DOD agreed with the need for these actions but stated that the existing guidance we noted in our report already provides sufficient direction. We agree that further formal guidance or direction may be unnecessary as long as NORTHCOM consistently pursues its effort to review supporting plans, including the supporting plans of all commands, agencies, and organizations required to prepare such plans. For example, some plans call for other DOD agencies and even non-DOD agencies to prepare supporting plans. In these cases, while NORTHCOM may not have the authority to compel compliance, it should nevertheless review these supporting plans for adequacy. In response to our recommendation that the Secretary of Defense assign forces to NORTHCOM, DOD agreed that certain specialized forces, such as those trained and equipped for CBRNE consequence management, should be regularly assigned to NORTHCOM but said that it was not practical to attempt to assign general purpose forces to meet all possible civil support contingencies. DOD did not agree that all NORTHCOM plans should have force deployment lists because it would not provide the level of readiness tracking that we highlighted as being necessary in our report. We agree that it is not practical to assign forces to NORTHCOM in an attempt to cover all possible contingencies. Our concern was that the NORTHCOM Commander should have a similar level of flexibility and day-to-day readiness assurance that regularly assigned forces provide to other combatant commanders. Assigning some specialized forces to NORTHCOM would contribute to providing such flexibility and assurance. DOD stated that it will work to develop civil support readiness metrics for general purpose forces rather than prepare specific force deployment lists for individual plans that were not already required to have them. We believe this effort would help institutionalize the importance of DOD’s domestic mission and provide NORTHCOM and other DOD authorities a means of monitoring readiness to accomplish domestic missions. 
With respect to our recommendation that DOD develop mission-essential tasks for NORTHCOM’s civil support plans and identify the units required for these missions, DOD agreed with our assessment that NORTHCOM needs to track units’ readiness to complete civil support missions but said that identifying units for all its civil support tasks would be impractical. DOD reiterated its proposal to develop civil support-specific metrics against which all general purpose forces could be measured. We believe that developing such metrics would meet the intent of our recommendation and would further institutionalize DOD’s domestic mission throughout the force. DOD agreed with our recommendation that clear guidance be developed for interagency planning efforts. DOD stated that it had begun to incorporate such direction in its major planning documents and would continue to expand on this guidance in the future. We believe DOD’s efforts as part of the Integrated Planning System and on its own, if pursued consistently, should help better focus interagency planning to meet the range of natural and man-made threats. DOD’s written comments are reprinted in appendix III. DHS also reviewed a draft of this report and provided technical comments, which we have incorporated where appropriate. We are sending copies of this report to the Secretary of Defense and other interested parties. We will also make copies available to others on request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-5431 or dagostinod@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key staff members who contributed to this report are listed in appendix IV. To determine the extent to which U.S. Northern Command (NORTHCOM) has prepared plans to execute its homeland defense and civil support missions, we reviewed NORTHCOM’s available major plans and supporting plans, comparing them to established Department of Defense (DOD) joint operational planning criteria for completeness and adequacy. We also met with knowledgeable NORTHCOM officials to discuss the status of each of the plans NORTHCOM is required to prepare and the process whereby the plans were developed and assessed. We did not independently validate the planning elements, such as the assumptions NORTHCOM used. We therefore did not attempt to state the extent to which the plans are executable. We compared the 15 national planning scenarios with NORTHCOM’s plans and discussed the incorporation of the scenarios within those plans with NORTHCOM officials. To assess the challenges NORTHCOM faces in planning for and conducting homeland defense and civil support, we developed a methodology based on DOD’s standards for joint operational planning. Although we included all of NORTHCOM’s plans in our review, we concentrated on the two primary homeland defense and civil support plans as well as the Chemical, Biological, Radiological, Nuclear, and High-Yield Explosive Consequence Management plan. 
The methodology involved a series of questions and topics to determine the extent to which NORTHCOM and DOD have considered the following as part of their planning for homeland defense and civil support: allocation of military capabilities to meet identified capability requirements; readiness of forces (trained personnel and equipment) to meet the missions for which they are assigned; and conduct of exercises and evaluation of lessons learned that can be fed back into the planning process. We discussed this methodology with officials from the National Defense University, NORTHCOM, the Joint Staff, and the Joint Forces Staff College to ensure that it was a reasonable approach to evaluating joint operational planning. We used the results of this analysis and our discussions with a broad range of DOD officials to determine what gaps, if any, exist in NORTHCOM’s planning efforts stemming from these challenges. We also reviewed the structure of NORTHCOM’s lessons learned process and collected information on the origin, analysis, and disposition of homeland defense and civil support lessons. As part of this effort, we observed a major exercise (Ardent Sentry/Northern Edge) in the Indianapolis area in May 2007. During our review, the NORTHCOM Inspector General’s Office was conducting an assessment of the command’s lessons learned process, including oversight mechanisms and internal controls. Therefore, we did not conduct a deeper analysis of those elements. To determine the extent to which NORTHCOM has adequate planning personnel with the relevant experience and training to perform the planning function for the command, we discussed personnel staffing and training with officials from NORTHCOM headquarters, NORTHCOM subordinate commands, and the Joint Forces Staff College who were knowledgeable about training courses available to planning personnel. We discussed the extent to which NORTHCOM addresses planning challenges unique to the command in its planning staff structure. In addition, we compared basic information on planning personnel at NORTHCOM with that of U.S. Central Command, U.S. Southern Command, U.S. European Command, and U.S. Pacific Command in such areas as overall staffing levels; numbers of military, civilian, and contractor personnel on staff; and number of planning personnel who had received Joint Professional Military Education credit. Since our intention was to look at all the staff who have a direct relation to planning at the commands, and not just the staff of the plans directorates, we left it up to the commands to define who should be included. We did not validate the commands’ requirements for specific numbers of planning personnel, and we did not independently validate the personnel data we received from the combatant commands. However, we assessed the data reliability measures the commands took to gather and maintain the data and determined that the information originated with the commands themselves and represented the best available source. We did not obtain the data from other sources, such as databases maintained by the military services’ personnel centers. We found the data to be sufficiently reliable for the purposes of this report.
To determine the extent to which NORTHCOM coordinates with federal agencies and other organizations in planning for and conducting its missions, we met with officials from NORTHCOM’s Interagency Coordination Directorate; reviewed the documentation and mechanisms for coordination with organizations outside NORTHCOM; and interviewed officials from NORTHCOM’s subordinate commands, the Department of Homeland Security (DHS), the Federal Emergency Management Agency (FEMA), and the National Guard Bureau (NGB). We also surveyed the adjutants general from the 48 contiguous states, Alaska, and the District of Columbia and obtained information from NORTHCOM, DHS, and NGB on NORTHCOM’s coordination with the states. We are reporting separately on the results of that work. In addressing our objectives, we reviewed plans and related documents, obtained information, and interviewed officials at the following locations: NORTHCOM Headquarters, Peterson Air Force Base, Colorado Springs, Colorado; Joint Forces Command, Norfolk, Virginia; the Office of the Secretary of Defense, Washington, D.C.; the Joint Staff, Washington, D.C.; Joint Task Force-Civil Support, Fort Monroe, Virginia; U.S. Army North, Fort Sam Houston, San Antonio, Texas; U.S. Army Forces Command, Fort McPherson, Atlanta, Georgia; U.S. Army Reserve Command, Fort McPherson, Atlanta, Georgia; Joint Force Headquarters National Capital Region, Fort McNair, Washington, D.C.; Fleet Forces Command, Norfolk, Virginia; NGB, Arlington, Virginia; DHS, Washington, D.C.; U.S. Coast Guard Headquarters, Washington, D.C.; and FEMA, Washington, D.C. We conducted our review from May 2006 to April 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Table 6 shows the 25 prescripted mission assignments that NORTHCOM and FEMA officials coordinated in order to facilitate the process for requesting DOD capabilities in the event of an emergency. In addition to the contact named above, Lorelei St. James, Assistant Director; Steven D. Boyles; Yecenia C. Camarillo; Angela S. Jacobs; David F. Keefer; Joseph W. Kirschbaum; Joanne Landesman; Robert D. Malpass; Lonnie J. McAllister; Erin S. Noel; Pamela Valentine; and Jena R. Whitley made key contributions to this report. Homeland Defense: Steps Have Been Taken to Improve U.S. Northern Command’s Coordination with States and the National Guard Bureau, But Gaps Remain. GAO-08-252. Washington, D.C.: April 16, 2008. Homeland Security: DHS Improved its Risk-Based Grant Programs’ Allocation and Management Methods, But Measuring Programs’ Impact on National Capabilities Remains a Challenge. GAO-08-488T. Washington, D.C.: March 11, 2008. Department of Homeland Security: Progress Made in Implementation of Management and Mission Functions, but More Work Remains. GAO-08-457T. Washington, D.C.: February 13, 2008. Influenza Pandemic: Opportunities Exist to Address Critical Infrastructure Protection Challenges That Require Federal and Private Sector Coordination. GAO-08-36. Washington, D.C.: October 31, 2007. Homeland Security: Preliminary Information on Federal Actions to Address Challenges Faced by State and Local Information Fusion Centers. GAO-07-1241T. Washington, D.C.: September 27, 2007.
Influenza Pandemic: Opportunities Exist to Clarify Federal Leadership Roles and Improve Pandemic Planning. GAO-07-1257T. Washington, D.C.: September 26, 2007. Department of Homeland Security: Progress Report on Implementation of Mission and Management Functions. GAO-07-1240T. Washington, D.C.: September 18, 2007. Homeland Security: Observations on DHS and FEMA Efforts to Prepare for and Respond to Major and Catastrophic Disasters and Address Related Recommendations and Legislation. GAO-07-1142T. Washington, D.C.: July 31, 2007. Influenza Pandemic: DOD Combatant Commands’ Preparedness Efforts Could Benefit from More Clearly Defined Roles, Resources, and Risk Mitigation. GAO-07-696. Washington, D.C.: June 20, 2007. Reserve Forces: Actions Needed to Identify National Guard Domestic Equipment Requirements and Readiness. GAO-07-60. Washington, D.C.: January 26, 2007. Chemical and Biological Defense: Management Actions Are Needed to Close the Gap between Army Chemical Unit Preparedness and Stated National Priorities. GAO-07-143. Washington, D.C.: January 19, 2007. Reserve Forces: Army National Guard and Army Reserve Readiness for 21st Century Challenges. GAO-06-1109T. Washington, D.C.: September 21, 2006. Catastrophic Disasters: Enhanced Leadership, Capabilities, and Accountability Controls Will Improve the Effectiveness of the Nation’s Preparedness, Response, and Recovery System. GAO-06-618. Washington, D.C.: September 6, 2006. Coast Guard: Observations on the Preparation, Response, and Recovery Missions Related to Hurricane Katrina. GAO-06-903. Washington, D.C.: July 31, 2006. Homeland Defense: National Guard Bureau Needs to Clarify Civil Support Teams’ Mission and Address Management Challenges. GAO-06-498. Washington, D.C.: May 31, 2006. Hurricane Katrina: Better Plans and Exercises Need to Guide the Military’s Response to Catastrophic Natural Disasters. GAO-06-808T. Washington, D.C.: May 25, 2006. Hurricane Katrina: Better Plans and Exercises Needed to Guide the Military’s Response to Catastrophic Natural Disasters. GAO-06-643. Washington, D.C.: May 15, 2006. Hurricane Katrina: GAO’s Preliminary Observations Regarding Preparedness, Response, and Recovery. GAO-06-442T. Washington, D.C.: March 8, 2006. Emergency Preparedness and Response: Some Issues and Challenges Associated with Major Emergency Incidents. GAO-06-467T. Washington, D.C.: February 23, 2006. Reserve Forces: Army National Guard’s Role, Organization, and Equipment Need to be Reexamined. GAO-06-170T. Washington, D.C.: October 20, 2005. Homeland Security: DHS’ Efforts to Enhance First Responders’ All- Hazards Capabilities Continue to Evolve. GAO-05-652. Washington, D.C.: July 11, 2005. Reserve Forces: Actions Needed to Better Prepare the National Guard for Future Overseas and Domestic Missions. GAO-05-21. Washington, D.C.: November 10, 2004. Reserve Forces: Observations on Recent National Guard Use in Overseas and Homeland Missions and Future Challenges. GAO-04-670T. Washington, D.C.: April 29, 2004. Homeland Security: Selected Recommendations from Congressionally Chartered Commissions. GAO-04-591. Washington, D.C.: March 31, 2004. Homeland Defense: DOD Needs to Assess the Structure of U.S. Forces for Domestic Military Missions. GAO-03-670. Washington, D.C.: July 11, 2003. Combating Terrorism: Selected Challenges and Related Recommendations. GAO-01-822. Washington, D.C.: September 20, 2001. | It has been 5 years since the Department of Defense (DOD) established U.S. 
Northern Command (NORTHCOM) to conduct homeland defense and civil support missions in the United States. Planning operations in the United States poses unique challenges for traditional military planning. GAO was asked to assess (1) the status of NORTHCOM's plans and the challenges it faces in planning and conducting operations, (2) the number, experience, and training of planning personnel, and (3) the extent to which NORTHCOM coordinates with other federal agencies. To do this, GAO reviewed available NORTHCOM plans, compared them to joint operational planning criteria, compared planning staff with those at other commands, and reviewed documentation and mechanisms for interagency coordination. NORTHCOM has completed--or is in the process of revising--all of the major plans it is required to prepare for its homeland defense and civil support missions, but it faces a number of challenges in planning for and conducting these missions. NORTHCOM has completed its nine required plans. However, NORTHCOM does not know whether supporting plans that must be developed by other DOD organizations to assist NORTHCOM are complete because it has only recently begun to develop a process to track and assess these plans. NORTHCOM faces challenges in three key planning areas. First, NORTHCOM has difficulty identifying requirements for capabilities it may need in part because NORTHCOM does not have more detailed information from the Department of Homeland Security (DHS) or the states on the specific requirements needed from the military in the event of a disaster. Second, NORTHCOM has few regularly allocated forces and few capabilities allocated to its plans. DOD could allocate forces to NORTHCOM and assign specific forces to the command's plans, but this would not guarantee that those forces would not have to be deployed elsewhere. However, it would provide DOD and the NORTHCOM commander with a better basis on which to assess the risk that the command would be unable to successfully execute one or more of its missions. Third, NORTHCOM has difficulty monitoring the readiness of military units for its civil support mission because its plans do not specify mission tasks against which units can be assessed. NORTHCOM has undertaken mitigation efforts to address each challenge, and new national planning guidance may further assist NORTHCOM and DOD in addressing the challenges. Nevertheless, NORTHCOM and DOD can take additional actions to reduce the risk from these gaps and reduce the risk due to the overall uncertainty that stems from the nature of its mission. NORTHCOM has an adequate number of planning personnel, and the command is pursuing opportunities to expand the experience and training for staff needed to perform the command's planning function. NORTHCOM's planning staff is filled at over 96 percent of its authorized positions. NORTHCOM's military planning staff receives the same planning training and education as planners in other combatant commands. To draw upon experience in planning and conducting domestic operations, NORTHCOM has integrated National Guard and U.S. Coast Guard personnel into its headquarters staff. NORTHCOM has also developed a curriculum for required mission-related training courses. 
Although NORTHCOM has taken actions to improve coordination of its homeland defense and civil support plans and operations with federal agencies, it lacks formalized guidance and procedures--such as memorandums of understanding or charters--to help ensure that interagency coordination efforts or agreements that are reached can be fully relied on. This is important because responding to a major disaster in the United States--natural or man-made--is a shared responsibility of many government agencies with states often requiring federal assistance from DHS and DOD. |
LSC was established in 1974 as a private, nonprofit, federally funded corporation to provide legal assistance to low-income people in civil matters. LSC provides the assistance indirectly, through grants to about 260 competitively selected local programs. Grantees may receive additional funding from non-LSC sources. In fiscal years 1998 and 1999, LSC received appropriations of $283 million and $300 million, respectively. With respect to financial eligibility, clients’ income, in general, is not to exceed 125 percent of the federal poverty guidelines. LSC regulations require that grantees (1) adopt a form and procedure to obtain eligibility information and (2) preserve that information for audit by LSC. With respect to citizenship/alien eligibility, only citizens and certain categories of aliens are eligible for services. For clients who are provided services in person, a citizen attestation form or documentation of eligible alien status is required. For clients who are provided services via the telephone, documentation of the inquiry regarding citizenship/alien eligibility is required. LSC uses a Case Service Reporting (CSR) system to gather quantifiable information from grantees on the services they provide that meet LSC’s definition of a case. The CSR Handbook is LSC’s primary official guidance to grantees on how to record and report cases. LSC relies on such case information in its annual request for federal funding. Audit reports issued by LSC’s Office of Inspector General (OIG) between October 1998 and July 1999 reported that five grantees misreported the number of cases they had closed during calendar year 1997 and the number of cases that remained open at the end of that year. The OIG found that all five grantees overstated the number of closed cases, while four overstated and one understated open cases. In June 1999, in response to Congress’ request for information on whether the 1997 case data of other LSC programs had problems similar to those reported by LSC’s OIG, we issued a report on our audit of five of LSC’s largest grantees: Baltimore, Chicago, Los Angeles, New York City, and Puerto Rico. We conducted a file review of a random sample of cases at each of these grantees to determine the extent to which they made overreporting errors in reporting cases closed during 1997 and cases open on December 31, 1997. We found similar types of reporting errors to those the OIG found and estimated that, overall, about 75,000 (+/- 6,100) of the approximately 221,000 cases that the five grantees reported to LSC for 1997 were questionable. Three grantees identified about 30,000 of their cases as misreported prior to our case file review. The primary causes for these self-identified overreporting errors were (1) improperly reporting to LSC cases that were wholly funded by other sources, such as states, and (2) problems related to case management reporting systems, such as grantee staffs’ difficulty in transitioning to new automated systems. Our case file review deemed approximately 45,000 additional cases questionable for one of the following reasons: The grantee reported duplicate cases for the same legal service to the same client. Some case files did not contain any documentation supporting the grantee’s determination that the client was either a U.S. citizen or eligible alien. For cases reported as closed in 1997, some case files showed no activity during the 12 months before the case was closed. For cases reported as open as of December 31, 1997, some cases showed no grantee activity during calendar year 1997.
Some case files did not contain any documentation that the grantee had determined that the client was financially eligible for LSC services. LSC regulations did not require specific documentation of these determinations in all cases. However, they required that grantees (1) adopt a form and procedure to obtain eligibility information and (2) preserve that information for audit by LSC. LSC officials and executive directors of the five grantees told us that they had taken or were planning to take steps to correct these case reporting problems. LSC issued a new, 1999 CSR Handbook and distributed other written communications intended to clarify reporting requirements to its grantees. The 1999 handbook, which replaced the 1993 edition, instituted changes to some of LSC’s reporting requirements and provided more detailed information on other requirements. In responding to a GAO telephone survey, most grantees indicated that the new guidance helped clarify LSC’s reporting requirements, and virtually all of them indicated that they had or planned to make program changes as a result of the requirements. Many grantees, however, identified areas of case reporting that remained unclear to them. The 1999 CSR Handbook included changes to (1) procedures for timely closing of cases; (2) procedures for management review of case service reports; (3) procedures for ensuring single recording of cases; (4) requirements to report LSC-eligible cases, regardless of funding source; and (5) requirements for reporting cases involving private attorneys separately. On November 24, 1998, LSC informed its grantees that two of the changes in the 1999 CSR Handbook were to be applied to the 1998 case data. The two changes pertained to timely closing of cases and management review of case service reports. The remaining new provisions of the 1999 CSR Handbook were not applicable to 1998 cases. For example, for 1998, there was no requirement for grantees to ensure that cases were not double counted. For 1999, LSC is requiring the use of automated case management systems and procedures to ensure that cases involving the same client and specific legal problem are not reported to LSC more than once. For 1998, grantees could report only those cases that were at least partially supported by LSC funds. For 1999, LSC is requiring grantees to report all LSC-eligible cases, regardless of funding source. LSC intends to estimate the percentage of activity spent on LSC service by applying a formula that incorporates the amount of funds grantees receive from other funding sources compared with the amount they receive from LSC. In addition to changing certain reporting requirements, the 1999 handbook also provides more detailed guidance to grantees than the 1993 handbook. For example, the 1999 handbook provides more specific definitions of what constitutes a “case” and a “client” for CSR purposes. The 1999 handbook also addresses documentation requirements that were not discussed in the 1993 handbook. Based on our survey of executive directors of 79 grantees, we estimate that over 90 percent of grantee executive directors viewed the changes in the 1999 CSR Handbook as being clear overall, and virtually all of them indicated that they planned to or had made at least one change to their program operations as a result of the revised case reporting requirements. 
These changes included revising policies and procedures, providing staff training, modifying forms and/or procedures used during client intake, implementing computer hardware and software changes, and increasing reviews of cases. Areas of case reporting that grantees said remained unclear to them included citizenship/alien eligibility documentation, single recording of cases, and who can provide legal services. LSC sought to determine the accuracy of grantees’ case data by requiring that grantees complete self-inspections of their open and closed caseload data for 1998. Grantees were required to determine whether the error rate in their data exceeded 5 percent. According to LSC, about three-fourths of the grantees certified that the error in their data was 5 percent or less. LSC used the results of the self-inspections to estimate the total number of case closings in 1998. Our review of LSC’s self-inspection process raised concerns about the accuracy and interpretation of the results, and what the correct number of certifying programs should be. On May 14, 1999, LSC issued a memo to all grantees instructing them to complete a self-inspection procedure by July 1, 1999. The purpose of the self-inspection was to ensure that (1) grantees were properly applying instructions in the 1999 edition of the CSR Handbook that were applicable to the 1998 data, and (2) LSC had accurate case statistical information to report to Congress for calendar year 1998. LSC provided detailed guidance to grantees on the procedures for the self-inspection. Each grantee was to select and separately test random samples of open and closed cases to determine whether the number of cases it reported to LSC earlier in the year was correct. Grantees were to verify that the case file contained a notation of the type of assistance provided, the date on which the assistance was provided, and the name of the case handler providing the assistance. Grantees were also to determine whether assistance had ceased prior to January 1, 1998; was within certain service categories as defined by the 1999 handbook; was provided by an attorney or paralegal; and was not prohibited or restricted. Finally, grantees were to verify that each case had eligibility information on household income, size, assets, citizenship attestation for in-person cases, and indication of citizenship/alien status for telephone-only cases.
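To make the self-inspection check concrete, the following sketch walks through the kind of test the guidance implies. This is our own illustration rather than LSC's procedure or software: the two sample case records and their field names are hypothetical, and only the items to be verified and the 5 percent threshold come from the guidance described above.

```python
# Hypothetical sampled case records; field names are illustrative, not LSC's.
sample_cases = [
    {"assistance_type": "advice", "assistance_date": "1998-03-12",
     "case_handler": "J. Smith", "income": True, "household_size": True,
     "assets": True, "citizenship": True},
    {"assistance_type": None, "assistance_date": "1998-06-01",
     "case_handler": "L. Jones", "income": True, "household_size": True,
     "assets": True, "citizenship": False},
]

def case_has_error(case):
    """A sampled case counts as an error if any required notation or
    eligibility item is missing from the file."""
    required_notations = ("assistance_type", "assistance_date", "case_handler")
    eligibility_items = ("income", "household_size", "assets", "citizenship")
    missing_notation = any(case.get(k) is None for k in required_notations)
    missing_eligibility = not all(case.get(k) for k in eligibility_items)
    return missing_notation or missing_eligibility

errors = sum(case_has_error(c) for c in sample_cases)
error_rate = errors / len(sample_cases)
print(f"Sample error rate: {error_rate:.1%}; within 5% threshold: {error_rate <= 0.05}")
```

In this toy sample, the second case is missing an assistance type and citizenship documentation, so it counts as an error and the sample error rate exceeds the threshold.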
They attributed the results to the following factors: (1) the self-inspection did not attempt to identify duplicate cases; (2) grantees received the new 1999 handbook in November 1998 and had already implemented some of the new requirements; and (3) grantees were less likely to report as cases telephone referrals in which no legal advice had been given and/or clients’ eligibility had not been determined because they were aware that the OIG identified this as a problem. On the basis of the self-inspection results, LSC estimated that grantees closed 1.1 million cases in 1998. Our review raised some concerns about LSC’s interpretation of the self- inspection results and about the accuracy of the data provided to LSC by grantees. As a result, we could not assess the accuracy of LSC’s estimate of the number of certified programs and case closures for 1998. LSC did not issue standardized procedures for grantees to use in reporting the results of their self-inspections. Grantees that could not certify their data wrote letters to LSC that contained varying degrees of detail about data errors that they found. Since LSC did not have a standard protocol for collecting the results of the self-inspections, LSC officials in some cases had to rely on their own interpretations of grantees’ descriptions of the problems they had discovered. We are uncertain how many programs should have been counted as certified because we are uncertain if LSC applied a consistent definition of “certification.” Most programs that were on LSC’s certification list determined that they had error rates of 5 percent or less for both open and closed cases. However, LSC placed some programs on the certified list if the program’s overall error rate for closed cases was 5 percent or less, even if the overall error rate actually was higher than 5 percent. In two instances, executive directors told us that they did not certify their CSR data because their overall error rate exceeded 5 percent. However, these programs appeared on LSC’s list of certified programs. When we asked an LSC official about this, he told us that they advised grantees that if their closed case error rate did not exceed 5 percent, they should “partially certify” their data. In response to our inquiry, the official reviewed the certification letters submitted by nearly 200 grantees, and identified 5 certified programs whose error rates for open cases exceeded 5 percent. Given that some grantees submitted only an overall estimate of data error, we do not know how many programs qualified to be certified overall, just for closed cases, or just for open cases. We are also concerned that LSC’s instructions to grantees on how to conduct the self-inspections may have led some of the smaller grantees to select too few test cases to make a reliable assessment of the proportion of error in their case data. Because these were smaller grantees, this limitation would have had little effect on LSC’s estimate of the total closed caseload. However, it could have affected LSC’s count of the number of certified programs. LSC does not know how well grantees conducted the self-inspection process, nor how accurate the results are. We spoke with several executive directors who did not correctly follow LSC’s reporting requirements. Incorrect interpretations of LSC guidance may have resulted in some programs certifying their 1998 data when they should not have, and other programs not certifying their 1998 data when they should have. 
An LSC official told us that, although they have conducted CSR training sessions for grantee executive directors, thousands of case handlers in grantee offices have not received such training. The official acknowledged that written guidance and telephone contacts with grantees may not be sufficient to ensure correct and consistent understanding of reporting requirements, and that LSC plans to consider alternative ways of providing training to staff. LSC officials told us that the self-inspection was valuable and that LSC plans to have grantees complete self-inspections again early next year as part of the 1999 CSR reporting process. LSC’s 1999 CSR Handbook and other written communications have improved the clarity of reporting requirements for its grantees. However, many grantees remained unclear about and/or misunderstood certain aspects of the reporting requirements. LSC’s practice of disseminating guidance primarily by written or telephone communications may not be sufficient to ensure that grantees correctly and consistently interpret the requirements. LSC sought to determine the accuracy of grantees’ 1998 case statistics by requiring grantees to conduct self-inspections. However, we do not know the extent to which the results of the self-inspection process are accurate. The validity of the results is difficult to determine because LSC did not standardize the way that grantees were to report their results, some of the grantees used samples that were too small to assess the proportion of error in their data, some grantees did not correctly follow LSC’s reporting guidance, and LSC had done no verification of the grantees’ self-inspection procedures. We do not believe that LSC’s actions, to date, have been sufficient to fully resolve the case reporting problems that occurred in 1997. Accordingly, in our report we recommend that LSC develop a standard protocol for future self-inspections to ensure that grantees systematically and consistently report their results for open and closed cases; direct grantees to select samples for future self-inspections that are sufficient to draw reliable conclusions about the magnitude of case data errors; and finally, ensure that procedures are in place to validate the results of LSC’s 1998 self-inspection, as well as of any future self-inspections. In a written response to a draft of our report, the President of LSC generally agreed with our findings and noted that he plans to implement our recommendations to the fullest extent possible. Mr. Chairman, this concludes my prepared statement. I would be pleased to answer any questions that you or other Members of the Committee may have. Contacts and Acknowledgement For further information regarding this testimony please contact Laurie E. Ekstrand or Evi Rezmovic at (202) 512-8777. Individuals making key contributions to this testimony included Mark Tremba and Jan Montgomery. The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. Orders by mail: U.S. General Accounting Office, P.O. Box 37050, Washington, DC 20013, or visit: Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW), U.S. General Accounting Office, Washington, DC. Orders may also be placed by calling (202) 512-6000 or by using fax number (202) 512-6061, or TDD (202) 512-2537.
Pursuant to a congressional request, GAO discussed the two reviews that it has completed pertaining to case service reporting (CSR) by the Legal Services Corporation (LSC), focusing on: (1) what efforts LSC and its grantees have made to correct case reporting problems; and (2) whether these efforts are likely to resolve the case reporting problems that occurred in 1997. GAO noted that: (1) LSC issued a new, 1999 CSR Handbook and distributed other written communications intended to clarify reporting requirements to its grantees; (2) the 1999 Handbook, which replaced the 1993 edition, instituted changes to some of LSC's reporting requirements and provided more detailed information on other requirements; (3) in responding to a GAO telephone survey, most grantees indicated that the new guidance helped clarify LSC's reporting requirements, and virtually all of them indicated that they had or planned to make program changes as a result of the requirements; (4) many grantees, however, identified areas of case reporting that remained unclear to them; (5) the 1999 CSR Handbook included changes to: (a) procedures for timely closing of cases; (b) procedures for management review of case service reports; (c) procedures for ensuring single recording of cases; (d) requirements to report LSC-eligible cases, regardless of funding source; and (e) requirements for reporting cases involving private attorneys separately; (6) based on GAO's survey of executive directors of 79 grantees, GAO estimates that over 90 percent of grantee executive directors viewed the changes in the 1999 CSR Handbook as being clear overall, and virtually all of them indicated that they planned to or had made at least one change to their program operations as a result of the revised case reporting requirements; (7) although most of the grantee executive directors reported that the new LSC guidance helped clarify requirements, many of them also indicated that they were still unclear about certain requirements and that additional clarification was needed; (8) LSC sought to determine the accuracy of grantees' case data by requiring that grantees complete self-inspections of their open and closed caseload data for 1998; (9) on May 14, 1999, LSC issued a memo to all grantees instructing them to complete a self-inspection procedure by July 1, 1999; (10) according to LSC officials, about three-fourths of the grantees certified the accuracy of their 1998 case data; (11) as of August 26, 1999, LSC documents indicated that 199 of 261 grantees reported substantially correct CSR data to LSC; (12) the remaining 62 grantees did not certify to LSC that their CSR data were substantially correct; and (13) GAO's review raised some concerns about LSC's interpretation of the self-inspection results and about the accuracy of the data provided to LSC by grantees.
American Samoa, the only U.S. insular area in the southern hemisphere, is located about 2,600 miles southwest of Hawaii. American Samoa consists of five volcanic islands and two coral atolls, covering a land area of 76 square miles, slightly larger than Washington, D.C. According to American Samoa Department of Commerce data, in 2005, the population of American Samoa was about 65,500. Unlike residents born in the Commonwealth of the Northern Mariana Islands (CNMI), Guam, and the U.S. Virgin Islands (USVI), residents born in American Samoa are nationals of the United States, but many become naturalized U.S. citizens. Like residents of the other insular areas, residents of American Samoa have many of the rights of citizens of the 50 states, but cannot vote in U.S. national elections and do not have voting representation in the final approval of legislation by the full Congress. According to Census Bureau data for 2000, the median household income in American Samoa was $18,200, less than half of the U.S. median household income of almost $41,000. American Samoa does not have an organic act that formally establishes its relationship with the United States. Two deeds of cession were initially completed between Samoan chiefs, or matai, and the United States in 1900 and 1904 and ratified by the federal government in 1929. In these deeds, the United States pledged to promote peace and welfare, to establish a good and sound government, and to preserve the rights and property of the people. The U.S. Navy was initially responsible for federal governance of the territory. Then, in 1951, federal governance was transferred to the Secretary of the Interior, an arrangement that continues today. The Secretary exercises broad powers with regard to American Samoa, including "all civil, judicial, and military powers" of government in American Samoa. American Samoa has had its own constitution since 1960, and since 1983, the local American Samoa constitution may be amended only by an act of Congress. The American Samoa Constitution provides for three separate branches of government—the executive, the legislative, and the judicial. Since 1977, a popularly elected Governor has headed the American Samoa executive branch for 4-year terms. Nearly 40 departments, offices, and other entities within the executive branch of the American Samoa government provide public safety, public works, education, health, commerce, and other services. The Governor has responsibility for appointing the Attorney General, Director of Public Safety, and other executive branch agency leaders. The legislature, or Fono, is composed of 18 senators and 20 representatives. Each of the senators is elected in accordance with Samoan custom by the councils of the counties that the senator represents. Each of the representatives is popularly elected from the representative districts. American Samoa exercises authority over its immigration system through its own locally adopted laws. In fiscal year 2007, a total of almost $105 million in federal funds was provided by a variety of federal agencies, including the Departments of the Interior, Education, Agriculture, Transportation, and Health and Human Services. Specifically, the Department of the Interior (DOI) provided $22.9 million that year for American Samoa government operations, including the High Court of American Samoa. In addition to these federal funds, a portion of the funding for American Samoa government operations comes from local revenues.
American Samoa Judiciary

The American Samoa judiciary, as provided in the American Samoa Constitution and American Samoa Code, consists of a High Court and a local district court under the administration and supervision of the Chief Justice. The High Court consists of four divisions—the trial division; the family, drug, and alcohol division; the land and titles division; and the appellate division. The trial division, which consists of the Chief Justice, the Associate Justice, and associate judges, is a court of general jurisdiction, empowered to hear, among other things, felony cases and civil cases in which the amount in controversy exceeds $5,000. The Chief Justice and the Associate Justice are appointed by the U.S. Secretary of the Interior and are required to be trained in the law. There are six associate judges, who are appointed by the Governor and are not required to have formal legal training. The associate judges are matai, or chiefs, and they preside over cases in the High Court, playing a more significant role in deciding issues of matai titles and land. There is one local district court judge, appointed by the Governor and also required to have formal legal training, who hears matters such as misdemeanor criminal offenses and civil cases in which the matter in controversy does not exceed $5,000. The Chief and Associate Justices and the local district and associate judges hold office for life, subject to good behavior. The American Samoa judiciary has a public defender, probation officers, translators, and marshals. Since the 1970s, the Secretary of the Interior has appointed federal judges, usually from the Ninth Circuit, to serve temporarily as Acting Associate Justices in the appellate division of the High Court of American Samoa. American Samoan customs and traditions have an influence over the local legal system. The distinctive Samoan way of life, or fa'a Samoa, is deeply embedded in traditional American Samoa history and culture. Fa'a Samoa is organized around the concept of extended family groups—people related by blood, marriage, or adoption—or aiga. Family members acknowledge allegiance to the island leader hierarchy composed of family leaders, or matai (chiefs). Matai are responsible for the welfare of their respective aiga and play a central role in protecting and allocating family lands. About 90 percent of land in American Samoa is communally owned and controlled by matai, and there are limits in American Samoa law regarding the transfer of property. The concept of fa'a Samoa extends to the governance structures in American Samoa and, thus, most high-ranking government officials, including judges, are matai. Further, Samoan law allows for a custom of ifoga, or ceremonial apology, whereby if a member of one family commits an offense against a member of another family, the family of the offender proceeds to the headquarters of the family of the offended person and asks for forgiveness. After appropriate confession of guilt and ceremonial contrition by the offending family, the offended family can forgive the offense. If the offender is convicted in court, the court may reduce the sentence of the offender if it finds that an ifoga was performed. The issue of establishing a federal court in American Samoa is not new. This issue has arisen within the larger question of defining the political status of American Samoa and its relationship with the United States.
For example, in the 1930s, Congress considered legislation that would provide an avenue of appeal from the High Court of American Samoa to the U.S. District Court of Hawaii, during its deliberation of an organic act for American Samoa. However, this initiative was not enacted by Congress. Further, since 1969, there have been three American Samoa commissions convened to study the future political status of American Samoa. These commissions have studied, among other things, the necessity of an organic act. The most recent commission’s report, published in January 2007, did not recommend any changes in American Samoa’s political status as an unorganized and unincorporated territory of the United States, with the intent that American Samoa could continue to be a part of the United States and also have the freedom to preserve Samoan culture. In addition, in the mid-1990s, the Department of Justice (DOJ) proposed legislative options for changing the judicial structure of American Samoa, including establishing a federal court within the territory. These proposals were developed in response to growing concerns involving white-collar crime in American Samoa, which were detailed in a December 1994 DOJ crime assessment report. However, while the House Committee on Resources held hearings on the DOJ report in August 1995, and judicial committees studied various legislative options, Congress did not take any actions on the proposals. Then, in February 2006, the Delegate from American Samoa introduced legislation in the U.S. Congress to establish a federal court in American Samoa and later that month, the American Samoa Fono held a joint legislative public hearing to solicit public comments on the bill. No congressional actions were taken on the bill and the Delegate from American Samoa withdrew the legislation after he and others requested the June 2008 GAO report. The federal courts in the insular areas of CNMI, Guam, and USVI were established under Article IV of the Constitution, whereas U.S. district courts elsewhere in the United States were established under Article III of the Constitution. Article IV courts are similar to Article III courts, but differ in terms of specific jurisdiction and tenure of the judges. Article IV courts generally exercise the same jurisdiction as Article III courts and may also exercise jurisdiction over local matters. Article IV judges are appointed by the President, with the advice and consent of the Senate, serve terms of 10 years, and can be removed by the President for cause. Article III judges are appointed by the President, with the advice and consent of the Senate, and serve with Article III protections of life tenure for good behavior and immunity from reductions in salary. Article IV judges hear both federal and bankruptcy cases, whereas Article III courts generally have a separate unit to hear bankruptcy cases. An Article III judge can be designated by the Chief Judge of the Circuit Court of Appeals or the Chief Justice of the United States to sit on an Article IV court. However, an Article IV judge can be designated to sit only as a magistrate judge on an Article III court. The federal courts in CNMI, Guam, and USVI were established at different times, but developed in similar ways. The District Court for the Northern Mariana Islands was established in 1977 as specified in the 1975 agreement, or covenant, between the Northern Mariana Islands and the United States. 
The District Court of Guam was established when the federal government passed an Organic Act for Guam in 1950. The District Court of the Virgin Islands, as it currently exists, was established by an Organic Act in 1936. Each of these federal courts initially had jurisdiction over federal, as well as local, issues. Over time, however, the federal courts were divested of jurisdiction over local issues, with the exception of the District Court of the Virgin Islands, which maintains jurisdiction over cases involving local offenses that have the same underlying facts as federal offenses. Similarly, each of the federal courts had appellate jurisdiction over the local trial courts until the local government established a local appellate court. CNMI, Guam, and USVI have all established local Supreme Courts, so that the federal courts no longer have appellate jurisdiction over local cases. As such, the jurisdiction of each of the three federal courts currently resembles that of district courts of the United States and includes federal question jurisdiction, diversity jurisdiction, and the jurisdiction of a bankruptcy court. Decisions of the District Court for the Northern Mariana Islands and the District Court of Guam may be appealed to the U.S. Court of Appeals for the Ninth Circuit, and decisions of the District Court of the Virgin Islands may be appealed to the U.S. Court of Appeals for the Third Circuit. An Article IV judge—two Article IV judges in the case of the Virgin Islands—sits on each of the federal courts and is appointed by the President, with the advice and consent of the Senate, for a term of 10 years, but may be removed by the President for cause. Unlike other insular areas, such as CNMI, Guam, and USVI, American Samoa does not have a federal court. As a result, federal law enforcement officials have pursued violations of federal criminal law arising in American Samoa in the U.S. district courts in Hawaii or the District of Columbia. In the absence of a federal court in American Samoa, federal law has provided federal jurisdiction to the High Court of American Samoa in areas such as food safety and shipping issues; this jurisdiction is quite narrow compared to the comprehensive federal jurisdiction granted to federal courts in other insular areas. With regard to its local judicial structure, American Samoa is different from other U.S. insular areas. The judicial system in American Samoa consists only of local courts that handle limited federal matters, whereas the judicial systems in CNMI, Guam, and USVI are composed of local courts and federal courts that operate independently of each other. Also, whereas the justices of the High Court in American Samoa are appointed by the Secretary of the Interior, the judges of the local courts in CNMI, Guam, and USVI are appointed by the Governors of each insular area. Further, although decisions of the appellate division of the High Court of American Samoa have been appealed to the Secretary of the Interior, federal law provides that, 15 years after the establishment of a local appellate court, decisions of the local appellate courts in CNMI, Guam, and USVI may be appealed to the U.S. Supreme Court. As stated earlier, because there is no federal court in American Samoa, matters of federal law arising in American Samoa have generally been adjudicated in either the District of Hawaii (Honolulu, Hawaii) or the District of Columbia (Washington, D.C.).
With regard to criminal matters, although federal criminal law extends to American Samoa, questions surrounding the proper jurisdiction and venue of cases have posed complex legal issues when violations of federal law occurred solely in American Samoa. However, DOJ prosecutors told us that, since a 2001 precedent-setting case involving human trafficking, some of the legal issues regarding jurisdiction and venue that had been unsettled in the past have been resolved. For example, federal law provides that the proper venue for a criminal case involving a federal crime committed outside of a judicial district is: (1) the district in which the defendant is arrested or first brought; (2) if the defendant has not yet been arrested or brought to a district, the judicial district of the defendant's last known residence; or (3) if no such residence is known, the U.S. District Court for the District of Columbia. Prior to this 2001 case, most cases arising in American Samoa were brought in the U.S. District Court for the District of Columbia. In this 2001 case, prosecutors used the "first brought" statute to establish venue in the District of Hawaii, since the defendant was arrested and "first brought" to Hawaii and then indicted in the District of Hawaii. Based on the facts and arguments presented, the Ninth Circuit upheld this application of the "first brought" statute. Following this case, because there is no federal court in American Samoa, most defendants who have been charged with committing federal offenses in American Samoa have been charged in one of two venues—the U.S. district courts in Hawaii or the District of Columbia. In 2006 and 2007, DOJ attorneys prosecuted defendants in the U.S. district courts in both Hawaii and the District of Columbia for civil rights violations and public corruption cases arising in American Samoa. DOJ prosecutors told us that their approach is adjusted depending on the facts of each case, legal challenges presented, and prosecutorial resources available. With regard to certain federal civil matters, when both the plaintiff and the defendant reside in American Samoa, and the events giving rise to the civil action occurred in American Samoa, there may be no proper federal venue, meaning there may be no federal court that may hear the case. However, some civil cases have been brought against the Secretary of the Interior alleging that the Secretary's administration of the government of American Samoa violated the U.S. Constitution. In such cases, the U.S. District Court for the District of Columbia has been the appropriate forum, given that DOI is headquartered in Washington, D.C. Bankruptcy relief is not available in American Samoa since federal law has not explicitly extended the U.S. Bankruptcy Code to American Samoa, and there is no federal court in American Samoa in which bankruptcy claims may be adjudicated. However, U.S. bankruptcy courts may exercise jurisdiction over petitions for relief filed by American Samoan entities under certain circumstances, such as if the entities reside or do business in a judicial district of the United States and the court finds that exercising jurisdiction would be in the best interest of the creditors and the debtors. Despite the absence of a federal court in American Samoa, federal law provides that the local court—the High Court of American Samoa—has limited federal civil jurisdiction.
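The statutory fallback order described above can be summarized as a simple decision sequence. The sketch below is purely illustrative of that ordering as characterized in this report; it is not legal guidance, and the function and parameter names are invented for the example.

```python
from typing import Optional

def federal_venue(arrest_district: Optional[str],
                  last_known_residence_district: Optional[str]) -> str:
    """Illustrative fallback order for venue when a federal crime is committed
    outside any judicial district, as characterized in this report."""
    if arrest_district:                 # (1) district where the defendant is arrested or first brought
        return arrest_district
    if last_known_residence_district:   # (2) district of the defendant's last known residence
        return last_known_residence_district
    return "District of Columbia"       # (3) default venue when no residence is known

# In the 2001 case discussed above, the defendant was first brought to Hawaii:
print(federal_venue("District of Hawaii", None))  # -> District of Hawaii
```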
However, the federal jurisdiction of the High Court of American Samoa is very limited compared to comprehensive federal jurisdiction in federal courts located in CNMI, Guam, and USVI. In particular, federal law has explicitly granted the High Court of American Samoa federal jurisdiction for certain issues, such as food safety, protection of animals, conservation, and shipping issues. Although the High Court does not keep data on the number of federal cases it handles, the Chief Justice of the High Court told us that, on occasion, these federal matters, particularly maritime cases, have taken a significant amount of the court’s time. The Chief Justice noted that the piecemeal nature of the High Court’s federal jurisdiction sometimes creates challenges. For example, although the High Court has jurisdiction to hear certain maritime cases, the High Court does not have the authority under certain federal statutes to enjoin federal court proceedings or to transfer a case to a federal court. Such a situation may lead to parallel litigation in the High Court and a federal court. In addition to the limits of federal jurisdiction, there are differences in the way federal matters are heard in the High Court compared to the federal courts in other insular areas. For example, whereas the Secretary of the Interior asserts authority to review High Court decisions under federal law, the U.S. Courts of Appeals have appellate review of decisions of the federal courts in CNMI, Guam, and USVI. Also, as stated earlier, whereas the Justices of the High Court of American Samoa are appointed by the Secretary of the Interior, the judges of the federal courts in CNMI, Guam, and USVI are appointed by the President, with the advice and consent of the U.S. Senate. While various proposals to change the current system of adjudicating matters of federal law in American Samoa have been periodically discussed and studied, controversy remains regarding whether any changes are necessary and, if so, what options should be pursued. In the mid-1990s, various proposals to change the current system were studied by judicial committees and federal officials. Issues that were raised at that time, such as protecting American Samoan culture and traditions, resurfaced during our interviews with federal and American Samoa government officials, legal experts, and in group discussions and public comments we received. Reasons offered for changing the existing system focus primarily on the difficulties of adjudicating matters of federal law arising in American Samoa, along with the goal of providing American Samoans with more direct access to justice in their place of residence. Reasons offered against changing the current system of adjudicating matters of federal law focus largely on concerns about the impact of an increased federal presence on Samoan culture and traditions, as well as concerns regarding the impartiality of local juries. The issue of changing the system for adjudicating matters of federal law in American Samoa has been raised in the past in response to a government audit and subsequent reports, which cite problems dating back to the 1980s. These reports cited problems with deteriorating financial conditions, poor financial management practices, and vulnerability to fraudulent activities in American Samoa. 
In March 1993, the newly elected Governor of American Samoa requested assistance from the Secretary of the Interior to help investigate white-collar crime in American Samoa in response to a projected $60 million deficit uncovered by a DOI Inspector General audit. As a result of this request, a team from DOJ spent 3 months assessing the problem of white-collar crime in American Samoa and completed its report in December 1994. The report concluded that white-collar crime—in particular, public corruption—was prevalent in American Samoa and provided details on the difficulties with enforcing federal law in American Samoa. The report discussed three possible solutions: (1) establishing a district court in American Samoa, (2) providing the U.S. District Court of Hawaii with jurisdiction over certain matters of federal law arising in American Samoa, or (3) providing the High Court of American Samoa with federal criminal jurisdiction. In August 1995, the U.S. Congress held hearings on the 1994 DOJ report and possible alternatives to provide for the prosecution of federal crimes arising in American Samoa. At the hearing, some American Samoa government officials opposed suggestions for changing the judicial system in the territory, and views were expressed regarding increased federal presence, the desire to retain self-determination over the judicial structure, and the need to protect and maintain the matai title and land tenure system in American Samoa. The American Samoa Attorney General at that time testified that his office and the Department of Public Safety had created a Joint Task Force on Public Corruption that investigated and prosecuted several white-collar offenses, including embezzlement, bribery, fraud, public corruption, forgery, and tax violations. For several months following the 1995 congressional hearings, different legislative options were studied by judicial committees within Congress and by federal officials. One bill was drafted that would have given the U.S. District Court of Hawaii limited jurisdiction over federal cases arising in American Samoa. The bill proposed that one or more magistrate judges could sit in American Samoa, but district judges of the U.S. District Court of Hawaii would presumably preside over trials in Hawaii. The bill was opposed by some federal judicial officials, who cited an unfair burden that would be placed on the District of Hawaii, as well as on defendants, witnesses, and juries due, in part, to the logistical difficulties in transporting them between American Samoa and Hawaii. By 1996, the proposed legislation was revised to establish an Article IV court in American Samoa with a full complement of staff and limited federal jurisdiction that would exclude cases that would put into issue the office or title of matai and land tenure. While DOJ sent the legislation to the President of the Senate and Speaker of the House in October 1996, it was never introduced in the 104th Congress or in subsequent congressional sessions. While the mid-1990s legislative proposals were primarily concerned with white-collar crime in American Samoa, different types of criminal activities have more recently emerged. FBI officials told us that, prior to 1999, allegations of criminal activity in American Samoa were investigated by agents based in the FBI's Washington, D.C. field office and that, due to the distance and costs involved, very few investigations were initiated.
Around mid-1999, FBI began to assign Hawaii-based agents to investigations in American Samoa in response to increasing reports of criminal activity. Then, due to growing caseload and a crime assessment, in December 2005, FBI opened a resident agency in American Samoa. According to an FBI official, other than a National Park Service fish and wildlife investigator affiliated with the National Park of American Samoa, the FBI agents were the first federal law enforcement agents to be stationed in American Samoa. FBI’s increased activities over the past 8 years, and establishment of a resident agency, have targeted a growing number of crimes in American Samoa, including public corruption of high-ranking government officials, fraud against the government, civil rights violations, and human trafficking. Among the most notable was U.S. v. Lee, which was the largest human trafficking case ever prosecuted by DOJ, as reported in 2007. This 2001 case involved about 200 Chinese and Vietnamese victims who were held in a garment factory. In 2003, Lee was convicted in the U.S. District Court of Hawaii of involuntary servitude, conspiring to violate civil rights, extortion, and money laundering. Another federal case in 2006 resulted in guilty pleas from the prison warden and his associate for conspiring to deprive an inmate of rights, by assaulting him and causing him bodily injury. In December 2004, we found that American Samoa’s failure to complete single audits, federal agencies’ slow reactions to this failure, and instances of theft and fraud limited accountability for 12 key federal grants supporting essential services in American Samoa. We recommended, among other things, that the Secretary of the Interior coordinate with other federal agencies to designate the American Samoa government as a high-risk grantee until it completed all delinquent single audits. In June 2005, DOI designated the American Samoa government as a high-risk grantee. The American Samoa government subsequently completed all overdue audits and made efforts to comply with single audit act requirements. Later, in December 2006, we reported that insular area governments, including American Samoa, face serious economic, fiscal, and financial accountability challenges and that their abilities to strengthen their economies were constrained by their lack of diversification in industries, scarce natural resources, small domestic markets, limited infrastructure, and shortages of skilled labor. Again, we cited the long-standing financial accountability problems in American Samoa, including the late submission of the reports required by the Single Audit Act, the inability to achieve unqualified (“clean”) audit opinions on financial statements, and numerous material weaknesses in internal controls over financial reporting and compliance with laws and regulations governing federal grant awards. We made several recommendations to the Secretary of the Interior, including increasing coordination activities with officials from other federal grant-making agencies on issues such as late single audit reports, high-risk designations, and deficiencies in financial management systems and practices. DOI agreed with our recommendations, but we have not yet assessed its progress toward implementing them. 
In addition to these GAO reviews, the FBI and various inspector general agents have conducted a broad investigation into federal grant-related corruption in American Samoa, which yielded guilty pleas in October 2005 from four former American Samoa government officials, including the Director of Procurement, the Director of the Department of Education, the Director of the Department of Health and Social Services, and the Director of the School Lunch Program. Additionally, recent audits and investigations by the Inspector General offices of the Departments of Homeland Security, Education, and the Interior indicate that the American Samoa government has inadequate controls and oversight over federal funds, that federal competitive bidding practices have been circumvented, and that American Samoan officials have abused federal funds for personal benefit. For example, in September 2007, officials from the U.S. Department of Education designated the American Samoa government as a high-risk grantee due to serious internal control issues raised in previous single audits, and cited a number of underlying fiscal and management problems. Due to the department's concerns about the American Samoa government's ability to properly administer and provide services with its funds, the department imposed several special conditions, including restrictions on the drawdown of grant funds. Also, the American Samoa legislature, or Fono, has been assisting federal agencies in their efforts to investigate public corruption and other crimes. Specifically, in early 2007, the Fono established a Senate Select Investigative Committee to review and investigate any unlawful, improper, wasteful, or fraudulent operations involving local and federal funds or any other misconduct involving government operations within all departments, boards, commissions, committees, and agencies of the American Samoa government. An official stated that the committee reviews and investigates complaints, holds Senate hearings with relevant witnesses, and can refer cases to either the American Samoa Attorney General or the FBI for investigation and prosecution. As was the case in the 1990s, and as was repeated in the interviews we conducted and the e-mail comments we received, the reasons offered for changing the American Samoa judicial system stem principally from the challenges associated with adjudicating matters of federal law arising in American Samoa and from the desire to provide American Samoans with greater access to justice. Federal law enforcement officials have identified a number of issues that limit their ability to pursue matters of federal law arising in American Samoa. These include logistical challenges related to American Samoa's remote location. Proponents of changing the judicial system of American Samoa also cite reasons such as providing American Samoans more direct access to justice as in other insular areas, serving as a possible deterrent to crime, and providing a means to alleviate the shame, embarrassment, and costs associated with being taken away to be tried more than 2,000 miles from American Samoa. While the main areas of concern in the mid-1990s and in our discussions were related to criminal matters arising in American Samoa, there were also concerns regarding civil matters, such as federal debt collection, although these were not addressed in much detail.
Without a federal court in American Samoa, investigators and federal prosecutors whom we interviewed said they were limited in their ability to conduct investigations and prosecute cases due to logistical obstacles related to working in such a remote location. In addition to high travel costs and infrequent flights into and out of American Samoa, DOJ officials said they face difficulties involving effective witness preparation and difficulties communicating with agents during a small window of time each day (due to the 7-hour time difference between Washington, D.C. and American Samoa). In some cases, search warrants or wiretaps were not used by the prosecutors to the extent that they would have been if American Samoa were in closer proximity to Washington, D.C. or Honolulu, Hawaii. Federal prosecutors told us that far fewer witnesses have been called to testify in front of the grand jury, given the burden of high travel costs from American Samoa. Federal prosecutors also told us that they must rely on witness observations and summaries from federal agents stationed in American Samoa rather than meet key witnesses face to face before bringing charges or issuing subpoenas, as they would typically do. Further, according to DOJ officials, the cost related to managing these cases has limited the number of cases they are able to pursue. Federal law enforcement agents told us that a federal court located in American Samoa could bring additional investigative and prosecutorial resources so that they would be able to pursue more cases. Although some have suggested that judicial and prosecutorial resources from the judicial districts of CNMI and Guam be deployed to American Samoa, the high travel costs and logistical obstacles would not be any less, given that there are no direct flights between American Samoa and Guam or between American Samoa and CNMI. Another key reason offered for changing the system for adjudicating matters of federal law in American Samoa is that a federal court would provide residents with more direct access to justice and the ability to more easily pursue cases in the federal court system. Currently, federal cases can be adjudicated only in very limited instances through the High Court; at a significant cost of time and money, by traveling to the U.S. district courts in Hawaii or Washington, D.C.; or not at all, in the case of some civil matters and bankruptcy. Proponents state that the establishment of a federal court would provide American Samoa parity with other insular areas, such as CNMI, Guam, and USVI, which have federal courts. Further, a legal expert said that a federal court in American Samoa would provide the community with an opportunity to see firsthand how parties can come together to resolve their differences with regard to federal matters. For example, some have asserted that if public corruption trials were held in American Samoa, they would act as a deterrent to others contemplating fraudulent behavior; increase accountability with regard to government spending; and provide satisfaction in witnessing wrongdoers brought to justice. Some stated in the February 2006 public hearing held by the Fono and in e-mail comments we received that they have felt shame and embarrassment when defendants are taken to distant courts, and in our group discussions it was stated that American Samoa is perceived by others as unable to render justice to its own residents.
Further, some officials have noted the significant costs that defendants' families must bear in traveling great distances to provide support during trials. This burden is exacerbated by the comparatively low family incomes in American Samoa, which, as stated earlier, are less than half of the U.S. median household income, according to 2000 Census Bureau data. Finally, some people we met with stated that the current system of holding federal criminal trials outside of American Samoa subjects defendants to possible prejudices by jurors in other locations. They cited the relative unfamiliarity of judges and jurors in Washington, D.C., or Honolulu, Hawaii, with American Samoan cultural and political issues and suggested that American Samoans would receive a fairer trial in American Samoa than in these locations. This issue had also been discussed in the mid-1990s. For example, in his testimony during August 1995 congressional hearings, the then-Governor of American Samoa stated that the people of American Samoa have the ability to deliver just verdicts based on the evidence presented. He noted that the trial division of the High Court had successfully conducted six-person jury trials for almost 20 years, citing this as evidence that American Samoan customs and family loyalties had not prevented effective law enforcement. Views in support of changing the current system were also reflected in some comments made during the group discussions we held in American Samoa and in some of the e-mail responses we received. Some members of the public expressed discontent over the significant costs associated with American Samoan defendants and their families having to travel to Hawaii or Washington, D.C. for court matters, and they emphasized the importance of having a jury of their peers deciding their cases. Other members of the public and a local community group expressed their belief that a federal court in American Samoa may act as a deterrent to the abuse of federal funds and public corruption, and provide opportunities for American Samoans to pursue federal legal matters, such as bankruptcy. While there was no consensus opinion, certain members of the local bar association mentioned that having a federal court could be beneficial for economic development, by attracting qualified attorneys and court staff to American Samoa. Additionally, one member stated that a federal court may lighten the workload and reduce the backlog of the High Court by taking over its federal maritime and admiralty matters. One of the key reasons offered against changing the current judicial system is the concern that a federal court would impinge upon Samoan culture and traditions. The most frequently raised concerns were related issues—that the system of matai chiefs and the land tenure system could be jeopardized. In raising these issues, some cited the deeds of cession, which specify that the United States would preserve the rights and property of the Samoan people. Further, some law enforcement officials we met with also opposed a change to the current system for prosecuting federal cases arising in American Samoa because they were concerned that, given the close familial ties in American Samoa, it would be difficult to obtain convictions from local jurors.
During the February 2006 Fono hearings, in e-mail comments we received, and in statements by American Samoa government officials we interviewed, concerns were voiced that the establishment of a federal court in American Samoa could jeopardize the matai and land tenure system of American Samoa. As noted above, matai hold positions of authority in the community; for example, only matai may serve as senators in the American Samoa legislature, and matai control the use and development of the communal lands and allocate housing to their extended family members. The land tenure system of American Samoa is such that the majority of the land in American Samoa is communally owned, and the sale or exchange of communally owned land is prohibited without the consent of the Governor. Also prohibited is the sale or exchange of communally owned and individually owned property to people with less than one-half Samoan blood. American Samoa government officials assert that the land tenure system fosters the strong familial and community ties that are the backbone of Samoan culture and that limits on the transfer of land are important to preserve the lands of American Samoa for Samoans and protect the Samoan culture. Currently, cases regarding matai titles and land issues, such as disputes over the rightful successor to a matai or land use or improvements, are heard by the land and titles division of the High Court of American Samoa. This division is composed of the Chief Justice and Associate Justice, as well as associate judges, who are appointed based on their knowledge of Samoan culture and tradition. Pursuant to the federalist structure of the U.S. judiciary, if a federal court were established in American Samoa, most cases arising under local law, such as matai and land disputes, would likely continue to be heard by the local court. However, some American Samoa officials stated that they are concerned that if a federal court were established in American Samoa, a federal judge, without the requisite knowledge of Samoan culture and tradition, would hear land and title cases. They stated that they would like to keep matai title and land tenure issues within the jurisdiction of the High Court. Another concern that was raised by government officials and residents of American Samoa is that the presence of a federal court in American Samoa may generate constitutional challenges to the matai and land tenure system. Though such challenges may currently be brought in existing venues, some voiced concerns that the establishment of a federal court in American Samoa may make such challenges less costly and, perhaps, more likely. One such view was expressed as follows: "To this day, our native land tenure system remains at the very core of our existence: our culture, our heritage and our way of life. Without our native land tenure system, our matai or chieftain system will fade over time—along with our language, our customs and our culture…. We, as a people, have an overriding desire to keep the fabric of our society (i.e., our Samoan culture) intact. No other U.S. state or territory enjoys the total and complete preservation of its people's culture as American Samoa. I fear that the imposition of a federal court system in American Samoa may have a destructive impact on our culture." Some have raised concerns regarding the establishment of a federal jury system, given the potentially small pool of U.S. citizens in American Samoa and the extended family ties among American Samoans. Federal law provides that federal jurors must be U.S. citizens.
As discussed earlier, American Samoans are U.S. nationals, not U.S. citizens, although they may apply to become U.S. citizens. Neither the U.S. Census Bureau nor the American Samoa Department of Commerce provides data on the number of U.S. citizens in American Samoa. Thus, the proportion of the American Samoa adult population who are U.S. citizens is unknown. If the number of U.S. citizens is fairly small, then, absent a statutory change, the pool from which federal jurors could be selected would also be small. In addition, law enforcement officials have speculated that extended family ties in American Samoa may limit the government's ability to successfully prosecute cases. Specifically, they raised the issue of jury nullification—the rendering of a not guilty verdict even though the jury believes that the defendant committed the offense—as a potential problem that may occur if jury trials were held in American Samoa, due to the influence of familial ties or other societal pressures on jurors. Federal law enforcement officials we met with added that some witnesses who testified against others in previous federal criminal cases have relocated outside of American Samoa and have lost their jobs and housing as a result of their participation in those cases. These officials stated that they believe that similar societal pressures would be imposed on jurors if trials were held in American Samoa. These officials concluded that the current system of federal criminal trials taking place away from American Samoa is the best way to get unbiased juries. Views expressing opposition to changing the current system were also reflected in some comments we received from the group discussions we held in American Samoa and from e-mail responses. Some members of the public expressed concerns over an increased federal presence in American Samoa and the potential legal challenges that could be brought regarding the land tenure system and matai title traditions. Further, some expressed concerns about non-Samoans filing discrimination lawsuits over their inability to own land. Some stated that the current system operates well and that they did not see a need for change. Others expressed opposition to a federal court in American Samoa due to their concerns about juror impartiality. They stated that if a federal court were established in American Samoa, jurors may not be able to be impartial because of the close relations through family, culture, church, government, or business. Finally, others expressed concerns about the U.S. government pushing and imposing its will on American Samoa, and their belief that changes to the current system should come not from the federal government but from American Samoans themselves. Based on our review of legislative proposals considered during the mid-1990s, testimonies, and reports, and through discussions with legal experts and American Samoa and federal government officials, we identified three potential proposals, or scenarios, that could be pursued if a change to the judicial system of American Samoa were to be made. These scenarios are (1) establishing an Article IV district court in American Samoa, (2) establishing a district court in American Samoa that would be a division of the District of Hawaii, or (3) expanding the federal jurisdiction of the High Court of American Samoa. Each scenario would require a statutory change and would present unique operational issues to be addressed. To the extent possible, we cited written documents and knowledgeable sources in the discussion of these issues.
See appendix I for detailed information on our scope and methodology. Based on our review of past legislative proposals, testimonies, and reports, and through discussions with legal experts and American Samoa and federal government officials, we identified three potential scenarios for establishing a federal court in American Samoa or expanding the federal jurisdiction of the High Court of American Samoa: 1. establishing an Article IV district court in American Samoa, 2. establishing a district court in American Samoa that would be a division of the District of Hawaii, or 3. expanding the federal jurisdiction of the High Court of American Samoa. These scenarios are similar to those discussed in the 1990s, and are described in more detail in attachment I. Each scenario would require a statutory change and each presents unique operational issues that would need to be resolved prior to implementation. Some issues to be resolved include determining: what jurisdiction would be granted to the court; what type of courthouse facility and detention arrangements would be needed and to what standards, including security standards; and what jury eligibility requirements would apply. The original structure of the first scenario, establishing an Article IV district court in American Samoa, came from draft legislation submitted by DOJ to the Speaker of the U.S. House of Representatives and the President of the U.S. Senate in October 1996, which proposed the creation of a new federal court in American Samoa. The legislation specified that the court would have limited jurisdiction that would exclude matters pertaining to matai title and land tenure issues. Under this scenario, federal law would authorize a federal court structure that would most closely resemble the federal courts in CNMI, Guam, and USVI. It would include an Article IV district court with a district judge, court clerk, and support staff. Below is a description of the key issues under this scenario. Jurisdiction: The statute creating the Article IV district court would specify the court's jurisdiction. It could be limited to criminal cases only, or it could also include bankruptcy, federal question, and diversity jurisdiction. American Samoa officials and others whom we interviewed were divided on whether the law establishing a district court in American Samoa should explicitly exclude matai and land tenure issues from the court's jurisdiction. Another possibility is that, as in other insular area federal courts, the federal jurisdiction of the court could grow over time. For example, while the District Court of Guam began with jurisdiction over cases arising under federal law in 1950, subsequent federal laws expanded its jurisdiction to include that of a district court of the United States, including diversity jurisdiction, and that of a bankruptcy court. Appeals process: The process for appealing decisions would be the same as in other Article IV district courts. Appeals would first go to the U.S. Court of Appeals for the Ninth Circuit and then to the U.S. Supreme Court. Judges: The judge would be appointed in the same manner as federal judges for the other insular areas, who are appointed by the President, with the advice and consent of the Senate, for 10-year terms. Associated Executive and Judicial Branch staff: Probation and Pretrial Services staff, U.S. Attorney and staff, and U.S. Marshals staff would establish stand-alone offices.
Defender services could be provided, at least initially, through the Federal Public Defender Organization personnel based in the District of Hawaii and/or Criminal Justice Act (CJA) panel attorneys. CJA panel attorneys are designated or approved by the court to furnish legal representation for those defendants who are financially unable to obtain counsel. Physical facilities: Under this scenario, a new courthouse facility would need to be built to provide the courtroom, judge's chambers, office space for federal court staff, and a holding area for detaining defendants during trials. It is not clear whether a detention facility for holding defendants pretrial and presentencing would need to be built or whether a portion of the existing local prison could be upgraded to meet federal standards. According to the U.S. Marshals Service, the current local prison in American Samoa does not meet federal detention standards. Operational issues: Several judicial officials and experts we met with stated that this scenario is the most straightforward option because it would be modeled after the federal courts in other insular areas, which would place residents of American Samoa in a position equitable with that of residents of the other insular areas. Other judicial officials we met with stated, however, that this is potentially the most costly scenario of the three, given the relatively small caseload expected. By contrast, the Pacific Islands Committee stated in its 1995 Supplemental Report that new federal courts historically have drawn business as soon as they open their doors and that growth in the court caseload would likely result. The second scenario would create a new division of American Samoa within the District of Hawaii. Several arrangements could potentially be devised to handle court matters. Since the U.S. District Court of Hawaii is an Article III court, a judge assigned to a Division of American Samoa would also presumably be an Article III judge, which would differ from the Article IV courts in CNMI, Guam, and USVI. Another possibility would be to assign an Article IV judge to American Samoa. Regardless of the arrangement, a clerk of the court and support staff would be needed in American Samoa to handle the work of the court. Jurisdiction: As with scenario 1, the statute creating the division in the District of Hawaii would specify the court's jurisdiction. It could be limited to criminal cases only, or it could also include bankruptcy, federal question, and diversity jurisdiction. Appeals process: The process for appealing decisions would be the same as in the District of Hawaii: to the U.S. Court of Appeals for the Ninth Circuit and then to the U.S. Supreme Court. Judges: An Article III or Article IV judge would be appointed by the President, with the advice and consent of the Senate, and would serve either a life term during good behavior (Article III) or a 10-year term (Article IV), as is true in Guam, CNMI, and USVI. Associated Executive and Judicial Branch staff: Probation and Pretrial Services, U.S. Attorney, and U.S. Marshals staff could provide the minimum staff required in American Samoa and share support functions with their offices in the District of Hawaii. Defender services could be provided, at least initially, through Federal Public Defender Organization personnel based in the U.S. District Court of Hawaii and/or CJA panel attorneys.
Physical facilities: As with scenario 1, a new courthouse facility would need to be built to provide the courtroom, judge's chambers, office space for federal court staff, and a holding area for detaining defendants during trials. Also, similar to scenario 1, it is unclear whether a new detention facility would need to be built or whether a portion of the existing local prison could be upgraded to meet federal standards. Operational issues: Some federal and judicial officials we interviewed told us that this scenario may be less costly than scenario 1 because, as a division of the District of Hawaii, the court could share some administrative functions and resources with Hawaii. Other federal and judicial officials told us that the costs of staff travel between American Samoa and Hawaii, and of any additional supervisory staff that may be needed in Hawaii, could make scenario 2 just as costly as, or possibly more costly than, scenario 1. Although this scenario would allow for trials to be held in American Samoa, there may be issues to be resolved concerning the status of any judges that would serve in the court and the degree to which resources could or would be shared with the U.S. District Court of Hawaii. For example, some judicial officials have raised questions of equity about the possibility of Article IV judges being assigned to federal courts in CNMI, Guam, and USVI while an Article III judge would be assigned to the federal court in American Samoa. The third scenario would expand the federal jurisdiction of the High Court of American Samoa rather than establish a new federal court. This would be a unique structure, as local courts typically do not exercise federal criminal jurisdiction. As a result, a number of issues would have to be resolved should this scenario be pursued. Jurisdiction: The jurisdiction of the High Court would be expanded to include additional federal matters, such as federal criminal jurisdiction. While there is a history of federal courts in insular areas exercising jurisdiction over local offenses, there has never been the reverse—a local court with jurisdiction over both local and federal offenses. Appeals process: The appellate process for federal matters under such a scenario is unclear. The current process for the limited federal cases handled by the High Court has five levels of appellate review: (1) to the Appellate Division of the High Court, (2) to the Secretary of the Interior, (3) to the U.S. District Court for the District of Columbia, (4) to the U.S. Court of Appeals for the District of Columbia Circuit, and (5) to the U.S. Supreme Court. Whether the appeals process would be amended to match that of the federal courts in CNMI, Guam, and USVI would have to be determined. Judges: The Chief Justice of the High Court stated that the High Court may need an additional judge to handle the increased caseload. Alternatively, in our discussions, Pacific Islands Committee members with whom we met suggested that the Secretary of the Interior or the Chief Judge of the Ninth Circuit could designate active and senior district judges within the Ninth Circuit to handle any court workload in American Samoa. They pointed out that judges from the Ninth Circuit were designated to the District of Guam for over 2 years, when there was an extended judge vacancy. Further, the Ninth Circuit has designated local judges to handle federal matters when necessary.
For example, the judges from the Districts of CNMI and Guam routinely use local Superior Court or Supreme Court judges to handle federal court matters and trials when they must recuse themselves from a court matter or during a planned or emergency absence. However, Pacific Island Committee members with whom we met stated that presumably federal judges would only handle federal court matters. It was unclear whether High Court justices would handle both federal and local court matters and what implications might arise from such a structure. Associated Executive and Judicial Branch staff: It is unclear whether Probation and Pretrial Services, U.S. Attorney, and U.S. Marshals offices would be established, since these staff are provided only to a district court. Similarly, the authority under the CJA to authorize a federal defender organization to provide representation or to compensate panel attorneys is vested in the district court. The Department of Justice would need to determine whether it would establish a federal prosecutor position in American Samoa to prosecute certain federal cases in the High Court. There are local Public Defender and Attorney General Offices in American Samoa, and the extent to which they could assist with cases is unknown. According to the Chief Justice of the High Court, it is unlikely that the existing probation and pretrial or court security staff would be able to handle an increased workload. Currently the High Court has three probation officers who work part-time as translators for the court, and two marshals, one for each of the High Court's two courtrooms. Physical facilities: The extent to which federal detention and courtroom security requirements would apply is uncertain. Until this issue is resolved, activities could possibly continue in existing courthouse and detention facilities. However, the High Court justices and clerk said that current courtroom facilities are already used to capacity without the added caseload that federal jurisdiction could bring. Operational issues: This scenario is potentially the lowest-cost option of the three because some of the existing court facilities and staff may be used, and it may alleviate concerns about the threat to the matai and land tenure systems. Some leaders within the American Samoa government believe this is the best option, and supporters of this scenario note that the High Court has a history of respecting American Samoa traditions, so they have fewer concerns that matai titles and land tenure would be in jeopardy. At the same time, as it is unprecedented to give federal criminal jurisdiction to a local court, this scenario could face the most challenges of the three, according to federal judges and other judicial officials. Legal experts with whom we met told us that, because this is a unique arrangement, the High Court and U.S. judiciary may be faced with having to solve unique problems and develop solutions on a regular basis. For example, judicial officials stated that the High Court justices would have to be cognizant of their roles and responsibilities when shifting from the duties of a local High Court justice to the duties of a federal judge. A judicial official also noted that the High Court justices may have to become familiar with federal sentencing guidelines, which would require a considerable amount of training.
In the August 1995 hearing, the DOJ Deputy Assistant Attorney General stated that vesting federal jurisdiction in the High Court runs counter to well-established legislative policy that district courts should have exclusive jurisdiction over certain types of proceedings to which the United States is a party. For example, federal law states that U.S. district courts have exclusive jurisdiction over all offenses against the criminal laws of the United States and, with respect to the collection of debts owed to the United States, provides for an exclusive debt collection procedure in the courts created by Congress. Similarly, federal regulatory statutes often provide for enforcement and judicial review in the federal courts. Another issue to be resolved is the appointment process for justices of the High Court. While none of the judicial officials with whom we met had concerns about the independence of the current justices, some expressed concerns about the differences in the way judges are appointed—while federal judges are generally appointed by the President, the justices in American Samoa are appointed by the Secretary of the Interior. As such, they suggested that the justices in American Samoa may not be subject to the same vetting process and protected by the same constitutional and statutory provisions—such as salary guarantees—as are district judges. The potential cost elements for establishing a federal court in American Samoa include agency rental costs, personnel costs, and operational costs, most of which would be funded by congressional appropriations. We collected likely cost elements, to the extent possible, for scenarios 1 and 2 from the various federal agencies that would be involved in establishing a federal court in American Samoa. We did not collect cost data for scenario 3 because of its unique judicial arrangement and because there was no comparable existing federal court structure upon which to estimate costs. For scenarios 1 and 2, AOUSC officials told us that a new courthouse would need to be built. GSA officials told us that court construction and agency rental costs would be comparatively high—about $80 to $90 per square foot for a new courthouse, compared to typical federal government rental charges for office space in American Samoa of around $45 to $50 per square foot in 2007. Funding for the judiciary and DOJ derives primarily from direct congressional appropriations, and a federal courthouse in American Samoa would likely be funded similarly. We found the data for scenarios 1 and 2 sufficiently reliable to provide rough estimates of the possible future costs for these scenarios for establishing a federal court in American Samoa, with limitations as noted. Due to limitations on existing buildings and potential land restrictions—about 90 percent of American Samoan land is communally owned—GSA officials told us that a new courthouse in American Samoa would likely use a build-to-suit lease construction arrangement rather than government-owned construction and that construction and consequent rental costs would be comparatively high. GSA provided initial construction and rental costs for the hypothetical courthouse in American Samoa, based on a floor plan submitted for a proposed new one-judge courthouse in CNMI. According to GSA officials, there are no buildings in American Samoa suitable for use as a federal courthouse.
Further, officials from the High Court of American Samoa told us that its two-courtroom High Court building and its one-courtroom local district court building are frequently used to capacity. Under a build-to-suit lease construction arrangement, the government contracts with a private developer to build the courthouse and, in this case, GSA leases the completed building based on the amortization of a 20-year construction loan. GSA would then rent portions of the building to the tenant federal agencies, such as AOUSC, EOUSA, and USMS. GSA officials gave very preliminary rent estimates of $80 to $90 per square foot, based on requirements similar to an existing build-to-suit lease prospectus for a new courthouse in CNMI. Further, GSA officials told us that federal agencies would be responsible for up-front payments for particular courthouse governmental features, such as holding cells and blast protection for security. GSA officials indicated that the initial American Samoa courthouse construction cost estimate may vary by as much as -20 to +80 percent, thereby influencing rental costs. The GSA Assistant Regional Administrator for Region IX Pacific Rim stated that there are many factors that could affect construction costs and, therefore, the tenant agencies' rental costs. For example, any cost increases associated with the condition of an unknown site or escalation in construction costs beyond what has been anticipated will have a direct and proportional impact on the rental costs, as well as the up-front costs that agencies may be required to pay. Preliminary rental costs of $80 to $90 per square foot for a new courthouse with specialized building requirements would exceed typical federal government rental charges for offices in American Samoa at the prevailing market rates of $45 to $50 per rentable square foot in 2007. District court costs: For yearly district court costs under scenario 1, AOUSC provided us with cost estimates of about $1.5 million for personnel, including the costs of one district court judge; the full-time equivalent salaries of 2 law clerks, 1 secretary, 11 district clerk's office staff, 1 pro se law clerk, and 1 court reporter; and recruitment and training costs. Operational costs were estimated at $0.1 million, which includes judge's law books, stationery, forms, new case assignment and jury management systems, travel, postage and delivery charges, and consumables for both the first year and recurring years. Information technology and other equipment costs were estimated at $0.1 million. Space and facilities costs ranged from $2.6 million to $2.9 million and include necessary alterations and renovations, signage, furnishings, furniture, and estimated GSA rental costs. Probation and pretrial services costs: For the yearly cost of probation and pretrial services, AOUSC provided us with personnel and benefits costs estimated at $0.3 million, which includes the full-time equivalent salaries of one Chief Probation Officer, one probation officer, and one administrative support staff member. Operational costs were estimated at $0.1 million, including travel, training, transportation, postage, printing, maintenance, drug dependent offender testing and aftercare, pretrial drug testing, mental health treatment services, monitoring services, DNA testing, notices/advertising, contractual services, supplies, awards, firearms, and protective equipment.
Information technology and other equipment costs were estimated at about $16,000 (i.e., equipment, maintenance, purchase of copy equipment, computer training, phone communications, supplies, computers, phones, data communications equipment, printers, scanner, and computer software). Space and facilities costs were estimated at $0.4 million to $0.5 million, which includes furniture and fixture purchases, as well as GSA rental costs. Federal Defender costs: AOUSC officials did not estimate costs for a Federal Defender’s office, since it is unlikely that the hypothetical court in American Samoa would, at least initially, reach the minimum 200 appointments per year required to authorize a Federal Defender Organization or the number of cases that would warrant the creation of a Federal Public Defender Organization headquartered in the District of Hawaii. The court in American Samoa, as an adjacent district, might be able to share the Federal Public Defender Organization staff based in Hawaii, or the court could rely solely on a CJA panel of attorneys. The costs to the Federal Public Defender Organization in Hawaii and the costs of reimbursing CJA attorneys would vary based on the caseload of the court. District Court costs: According to AOUSC, the estimated district court costs for scenario 2 could be similar to the estimated costs for scenario 1. An AOUSC official indicated that there may not be a need for a clerk, financial/procurement officer, jury clerk, or information technology specialist in American Samoa under scenario 2, as those functions may be handled out of the District of Hawaii office, leading to some possible reductions in personnel salaries. However, some judicial officials stated that any decrease in staff costs for this scenario may be offset by increased costs for travel between Hawaii and American Samoa. GSA rental costs would be comparable to scenario 1. Probation and pretrial services costs: Probation and Pretrial Services officials did not provide any cost differences between scenarios 1 and 2. Federal Defender costs: Either the Office of the Federal Public Defender in Hawaii or a CJA panel may provide defender services in American Samoa under both situations, thereby also not leading to any significant change in cost estimates between scenarios 1 and 2. For the Department of Justice, an EOUSA official provided U.S. Attorney’s Office cost estimates and a USMS official provided security cost estimates for both scenario 1 and scenario 2. Scenario 1 costs: EOUSA officials calculated the cost of a U.S. Attorney’s office based on a partial first year and a complete second year. Modular personnel costs are $0.6 million for the first year and $1.0 million for the second year, which includes one U.S. Attorney, three attorneys, and two support staff. Operational costs ranged from $0.5 million to $0.9 million, including travel and transportation, utilities, advisory and assistance services, printing and reproduction, and supplies and materials. Information technology costs were estimated at $0.1 million for equipment and the operation and maintenance of equipment. Space and facilities costs range between $1.3 million and $1.4 million and include the operation and maintenance of facilities and rent to GSA and others. Scenario 2 costs: EOUSA officials calculated U.S. Attorney’s office personnel costs for a partial first year and a complete second year. 
Modular personnel costs were estimated to rise from $0.6 million in the first year to $1.0 million in the second year, which includes four attorneys and two support staff. Operational costs were estimated to remain consistent at $0.2 million for both the first and second years, reflecting travel and transportation, litigation costs, supplies, and other miscellaneous costs. Information technology and equipment costs were estimated to be approximately $0.1 million for both years. Yearly rental rates may also be comparable in the initial years. Personnel and operations costs for scenario 2 were estimated to be less than for scenario 1 because scenario 2 does not include a separate U.S. Attorney for American Samoa. Rather, the costs for scenario 2 are based on the estimated costs and personnel the U.S. Attorney for the District of Hawaii would need to support cases that arise in American Samoa. Scenario 1 costs: USMS officials estimated that personnel costs were $0.8 million, based on fiscal year 2008 salaries, benefits, and law enforcement availability pay for all supervisory (one U.S. Marshal, one Chief Deputy, one Judicial Security Inspector) and nonsupervisory (two Deputy Marshals and one administrative) personnel that would be needed. Operational costs were estimated to be $0.8 million based on fiscal year 2008 standard, nonpersonnel costs for district operational and administrative positions (including vehicles, weapons, protective gear, communications equipment, and operational travel costs), and $0.7 million for defendant transport (including guard wages, airfare, per diem meals, and lodging). Information technology and equipment costs were estimated at $0.6 million for the installation of a computer network and telephone system to all USMS offices, and $0.2 million for yearly service on the wide-area network to American Samoa. Space and facilities costs were estimated between $1.1 million and $1.3 million for rent, plus variable defendant detention facility housing costs. Scenario 2 costs: With regard to scenario 2, USMS officials estimated that yearly personnel costs would be $0.5 million. Since a U.S. Marshal, Chief Deputy, and Judicial Security Officer would be shared with the USMS in Hawaii and not be located in American Samoa, personnel costs for this scenario are estimated to be approximately $0.4 million less than for scenario 1. Operational costs (reflecting the standard, nonpersonnel costs for operational and administrative positions) under scenario 2 were estimated to be $0.5 million, or about $0.3 million less than for scenario 1. The operational cost differential between the two scenarios with respect to prisoner transport is unclear. While the USMS did not specifically address information technology and other equipment costs with respect to scenario 2, the same types of costs as in scenario 1 would be involved if a computer network and telephone system needed to be established. With respect to space and facilities, if the USMS were housed in the same court building as used for scenario 1, rental costs should be comparable (between $1.1 million and $1.3 million). If, however, under scenario 2, the USMS were housed in an office building rather than a courthouse, then the resulting cost may be lower than under scenario 1. Additionally, to the extent that defendants are detained in the same facilities as in scenario 1 (e.g., the Bureau of Prisons detention facility in Hawaii), detention facility costs should be comparable.
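Because rent under a build-to-suit lease reflects the amortized cost of construction, the -20 to +80 percent construction cost variance that GSA cited would flow roughly proportionally into the rates charged to tenant agencies. The sketch below, in Python, is a hypothetical illustration of that arithmetic only; the square footage, construction cost per square foot, and financing rate are assumptions and are not GSA, AOUSC, or DOJ figures.

# Hypothetical illustration only: the square footage, financing rate, and base
# construction cost below are assumptions, not figures from GSA or AOUSC.
# The -20/+80 percent variance comes from the GSA estimate discussed above.

def annual_lease_payment(principal, annual_rate, years=20):
    """Annual payment that amortizes a construction loan over `years`."""
    if annual_rate == 0:
        return principal / years
    factor = annual_rate / (1 - (1 + annual_rate) ** -years)
    return principal * factor

square_feet = 30_000          # assumed courthouse size
base_cost_per_sq_ft = 600     # assumed construction cost, dollars per square foot
annual_rate = 0.06            # assumed financing rate on the 20-year loan

for label, variance in [("low (-20%)", -0.20), ("base", 0.0), ("high (+80%)", 0.80)]:
    construction_cost = square_feet * base_cost_per_sq_ft * (1 + variance)
    rent_per_sq_ft = annual_lease_payment(construction_cost, annual_rate) / square_feet
    print(f"{label:12} construction ${construction_cost/1e6:5.1f}M "
          f"-> implied rent about ${rent_per_sq_ft:3.0f} per square foot per year")

The mechanics are the main point: with fixed-rate amortization, a given percentage change in construction cost produces the same percentage change in the implied annual rent per square foot.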
Funding for the federal judiciary and DOJ agencies derives primarily from direct congressional appropriations to each agency, and a federal court in American Samoa would likely be funded similarly. In fiscal year 2006, about 94 percent of the total court salary and expense obligations were obtained through direct judiciary funding. The remaining 6 percent was obtained through offsetting collections, such as fees. In that same year, about 95 percent of the total Probation and Pretrial Services obligations were obtained through direct congressional appropriations. With regard to DOJ, in fiscal year 2006, 96 percent of the U.S. Attorneys' obligations to support district court activities were obtained through direct congressional appropriations and the remaining 4 percent were obtained through other sources, such as asset forfeitures. In fiscal year 2008, USMS used direct congressional appropriations to cover the expenses for staff hiring, payroll, relocation, personnel infrastructure, rent, and utilities. The Office of the Federal Detention Trustee funds 100 percent of prisoner detention, meals, medical care, and transportation. AOUSC funds 100 percent of the court security officers, magnetometers, and security measures at courthouse entrances. We are not making recommendations regarding whether the current system and structure for adjudicating matters of federal law in American Samoa should be changed. Also, given the multiple limitations on available cost data, we are not making any determinations as to whether the current system is more or less costly than the different scenarios for change presented in this report. Rather, our purpose has been to provide decision makers with information regarding the issues associated with potential scenarios for change. While the cost data are very limited, in the end, the controversy surrounding whether and how to create a venue for adjudicating matters of federal law emanating from American Samoa is not principally focused on costs, but on other factors, such as equity, justice, and cultural preservation. Thus, policy considerations, other than an analysis of cost effectiveness, are more likely to be the basis for deciding whether and how to establish a court with federal jurisdiction in American Samoa. Madam Chairwoman, this completes my prepared statement. I would be happy to respond to any questions you or other Members of the Subcommittee may have at this time. For further information about this statement, please contact William O. Jenkins, Jr. at (202) 512-7777 or jenkinswo@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Staff making key contributions to this statement were Christopher Conrad, Assistant Director, Nancy Kawahara, and Tracey King. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

American Samoa is the only populated U.S. insular area that does not have a federal court. Congress has granted the local High Court federal jurisdiction for certain federal matters, such as specific areas of maritime law.
GAO was asked to conduct a study of American Samoa's system for addressing matters of federal law. This testimony discusses: (1) the current system for adjudicating matters of federal law in American Samoa and how it compares to those in the Commonwealth of the Northern Mariana Islands (CNMI), Guam, and the U.S. Virgin Islands (USVI); (2) the reasons offered for or against changing the current system for adjudicating matters of federal law in American Samoa; (3) potential scenarios and issues associated with establishing a federal court in American Samoa or expanding the federal jurisdiction of the local court; and (4) the potential cost elements and funding sources associated with implementing those different scenarios. This testimony is based on GAO work performed from April 2007 to June 2008. Because American Samoa does not have a federal court like the CNMI, Guam, or USVI, matters of federal law arising in American Samoa have generally been adjudicated in U.S. district courts in Hawaii or the District of Columbia. Reasons offered for changing the existing system focus primarily on the difficulties of adjudicating matters of federal law arising in American Samoa, principally based on American Samoa's remote location, and the desire to provide American Samoans more direct access to justice. Reasons offered against any changes focus primarily on concerns about the effects of an increased federal presence on Samoan culture and traditions and concerns about juries' impartiality given close family ties. During the mid-1990s, several proposals were studied and many of the issues discussed then, such as the protection of local culture, were also raised during the GAO study. Based on previous studies and information gathered for its June 2008 report, GAO identified three potential scenarios, if changes were to be made: (1) establish a federal court in American Samoa under Article IV of the U.S. Constitution, (2) establish a district court in American Samoa as a division of the District of Hawaii, or (3) expand the federal jurisdiction of the High Court of American Samoa. Each scenario would present unique issues to be addressed, such as what jurisdiction to grant the court. The potential cost elements for establishing a federal court in American Samoa include agency rental costs, personnel costs, and operational costs, most of which would be funded by congressional appropriations. Exact details of the costs to be incurred would have to be determined when, and if, any of the scenarios were adopted. The controversy surrounding whether and how to create a venue for adjudicating matters of federal law in American Samoa is not principally focused on an analysis of cost effectiveness, but other policy considerations, such as equity, justice, and cultural preservation. |
Zika virus is a member of the flavivirus family related to dengue virus, yellow fever virus, and West Nile virus, and its primary mode of transmission is via mosquito, most notably by the Aedes aegypti mosquito. Aedes albopictus mosquitoes are a potential vector for the Zika virus. The disease was first identified in the Zika Forest in Africa in the 1940s, from which it subsequently moved eastward over the following decades, through the Pacific Islands, until it reached Brazil, where the population had no indigenous immunity against the virus. According to CDC officials with whom we spoke, the United States also has no indigenous immunity to the Zika virus. A major factor contributing to the declaration of Zika virus disease as a Public Health Emergency of International Concern by the World Health Organization (WHO) is a possible link between Zika virus and microcephaly as well as Guillain-Barré syndrome. Microcephaly—an abnormally small head due to failure of brain growth—is a concern because children with microcephaly can experience impaired cognitive development, delayed motor function and speech, seizures, and reduced life expectancy. To better understand the spread of this disease worldwide, epidemiological studies are required. Epidemiology is concerned with the patterns of disease occurrence in human populations and the factors that influence these patterns. The goals of epidemiologic study, and more specifically outbreak investigations, are to determine the extent and distribution of the disease in the population, the causes and factors associated with the disease and its modes of transmission, the natural history of disease, and the basis for developing preventive strategies or interventions (see figure 1). The United States has played a significant role in improving global disease surveillance and response capacity. In the mid-1990s, recognizing the threat posed by previously unknown infectious diseases, the United States and other countries initiated a broader effort to ensure that countries can detect disease outbreaks that may constitute a Public Health Emergency of International Concern. The United States has participated in the WHO's efforts to develop and implement the International Health Regulations, currently an agreement among 196 countries, to develop and maintain global capabilities to detect and respond to disease and public health threats. The CDC has helped to define and establish the International Health Regulations and has been designated by the WHO as a key partner in helping to implement the critical capacities for detecting and responding to emerging infectious disease outbreaks. The recent Ebola outbreak in West Africa has highlighted the importance of further improving the U.S. government's global disease surveillance efforts. To help ensure that such threats are addressed early and at their source, the "National Health Security Strategy and Implementation Plan 2015-2018" released by the U.S. Department of Health and Human Services prioritizes efforts to strengthen national capacities and capabilities globally to detect disease in a timely manner, prevent the global spread of public health threats and diseases, and respond to public health emergencies. For example, CDC's Global Disease Detection and Field Epidemiology Training programs aim to strengthen laboratory systems for the rapid detection and control of emerging infectious diseases and train epidemiologists to effectively detect, investigate, and respond to health threats.
While several countries have reported outbreaks of Zika virus disease, unanswered questions remain regarding the epidemiology and the transmission of the disease. Many factors, including a large number of asymptomatic patients, mild symptoms, a lack of a consistent international case definition of Zika virus disease, as well as of microcephaly, and a lack of validated diagnostic tests complicate our understanding of the virus and may hinder our response to the current outbreak. Questions also remain regarding the strength of the association between Zika virus infection and microcephaly or Guillain-Barré syndrome. Since the 1960s, the Zika virus has been known to occur within a narrow equatorial belt from Africa to Asia. In 2007, the virus was detected on Yap Island, the first report of the virus spreading outside of Africa and Indonesia to the Pacific Islands. In 2014, the virus spread east across the Pacific Ocean to French Polynesia, then to Easter Island. According to the WHO, the virus has continued to spread to the Americas, with the outbreak in Brazil that began in May 2015 and is ongoing. Zika has spread to Mexico, Central America, the Caribbean, and South America, where the outbreak has reached epidemic levels (see figure 2). Recent outbreaks have also been reported in Puerto Rico, as well as the Cape Verde Islands. According to CDC documentation, Zika virus disease is now a nationally notifiable disease. As of February 24, 2016, 107 cases of continental U.S. travel-associated Zika virus disease have been reported, according to CDC. Although CDC documentation states that there has not yet been local mosquito-borne spread of Zika virus in the continental United States, some states have mosquito species potentially capable of transmitting the virus. Instances of mosquito-borne transmission have been reported in the Commonwealth of Puerto Rico, the U.S. Virgin Islands, and American Samoa. The first locally-acquired case of Zika virus disease in Puerto Rico was reported in December of 2015. Through late January 2016, about 30 additional laboratory-confirmed cases were identified in Puerto Rico, including one pregnant woman. In January 2016, the CDC issued guidance for travel to affected countries, including the use of enhanced precautions for all travelers, as well as the recommendation that pregnant women postpone travel to affected areas. According to CDC documentation, there are a few known routes of transmission of the Zika virus to and among people. These include mosquito bites, mother-to-child transmission, sexual contact, and blood transfusion. The Zika virus is transmitted to people primarily through the bite of infected Aedes species mosquitoes (primarily Aedes aegypti and possibly Aedes albopictus). These are the same mosquitoes that spread dengue and chikungunya viruses. These mosquitoes typically lay eggs in and near standing water in containers like buckets, bowls, animal dishes, flower pots, and vases. They prefer to bite people and live both indoors and outdoors. Mosquitoes that spread dengue, chikungunya, and Zika are aggressive daytime biters, but also bite at night. Mosquitoes can become infected when they feed on a person already infected with the virus. According to the CDC, the Zika virus is rarely transmitted from an infected mother to her child, and there have been no reports of infants contracting the Zika virus through breastfeeding. It is possible but rare that an infected mother would pass the virus to a newborn at delivery.
However, an infected mother can pass the Zika virus to her fetus during pregnancy. According to the CDC, it is possible for the Zika virus to be spread by a man to his sexual partners. A few recent cases of Zika virus transmission through sexual contact have been reported. In December 2013, during a Zika virus outbreak in French Polynesia, Zika was isolated from the semen of a patient. In one known case of likely sexual transmission, the virus was spread before symptoms developed. The virus appears to be present in semen longer than in blood. Sexual transmission of the disease—acquired outside of the United States—has been reported in the United States. As of February 23, 2016, the CDC and state public health departments are investigating 14 additional reports of possible sexual transmission of the virus, including several involving pregnant women. Zika virus can also be transmitted through blood transfusion, according to U.S. Food and Drug Administration (FDA) documents. While there have been no reports to date of Zika virus entering the U.S. blood supply, the risk of blood transmission is considered high based on the most current scientific research of how Zika virus and similar viruses (i.e., flaviviruses) are spread, as well as recent reports of transfusion-associated infection outside of the United States, according to the FDA. According to CDC, there have been reports of possible blood transfusion transmission cases in Brazil. During the French Polynesian outbreak in 2013, 2.8 percent of blood donors tested positive for Zika. The maximum time the virus remains in the bloodstream is unknown, but scientists estimate that it is less than 28 days. On February 16, 2016, as a safety measure against the emerging Zika virus outbreak, the FDA issued new guidance recommending that blood donors be deferred for four weeks if they have been to areas with active Zika virus transmission, potentially have been exposed to the virus, or have had a confirmed Zika virus infection. While scientific studies have identified Zika viral components in saliva and urine, they did not report disease transmission from those bodily fluids. It is not currently known if infection with the Zika virus causes, facilitates, or is otherwise associated with the development of certain neurologic and auto-immune conditions. A report suggests that the causal relation between in utero exposure to Zika and microcephaly, although strongly suspected, has yet to be established. Scientific literature has identified several possible linkages, including the presence of Zika virus in fetal brain tissue, as well as evidence of the virus crossing the placental barrier, suggesting a causal effect is plausible, but not yet proven. For example, a fetal autopsy identified an abnormally small brain, as well as physical markers of developmental delays, along with the Zika virus in the brain. The mother in the case reported an illness with a fever and rash at the end of the first trimester of pregnancy while she was living in Brazil. A retrospective analysis of a previous outbreak of Zika virus disease in French Polynesia in 2013-2014, reported to the WHO in 2015-2016, also established elevated numbers of neurological disorders for that outbreak. The potential association of Guillain-Barré syndrome and Zika virus disease was suspected prior to the recent Brazilian Zika disease outbreak.
According to the European Center for Disease Control, the 2013 to 2014 French Polynesian outbreak of Zika virus disease was reportedly the largest documented outbreak at that time. According to the European Center for Disease Control, over 8,000 suspected cases of Zika Virus infection had been reported by February 2014. Notably, there were nearly 40 cases of Guillain-Barré syndrome reported, with all cases following disease episodes compatible with Zika virus infection. Historically, there had been 10 or fewer cases of Guillain-Barré syndrome in Polynesia annually. The European Center for Disease Control stated that further investigations could be conducted to establish the relationship between neurological and auto-immune complications and Zika virus infection. Researchers have reported that an estimated 80 percent of the individuals infected with the Zika virus are asymptomatic, that is, they have the virus but do not manifest clinical symptoms. Since diagnosis of suspected Zika virus disease is often based on clinical symptoms, and in light of the fact that clinical symptoms are usually non-existent or mild, experts told us that many individuals who are infected with the Zika virus may not seek medical care, and thus are not counted as a case, resulting in significant underestimation of the true incidence of infection. An accurate count of the number of Zika virus disease cases requires a consistent case definition, or set of uniform criteria to define the disease for public health surveillance and to determine who is included in the count and who is excluded. However, establishing a definition is problematic for Zika virus disease. If Zika cases are diagnosed based on serology data, then the incidence count may include people who have been infected with the virus, but do not show clinical symptoms. On the other hand, if cases are defined by clinical symptoms only, with no serology testing, then the incidence could be higher or lower than that count obtained from serology testing only, because people who present with clinical symptoms may or may not actually test positive for Zika virus. According to the WHO Zika Response Strategy, there is currently a need to establish a uniform case definition for Zika virus disease, as well as historical rates, or baselines, for associated conditions. The Council of State and Territorial Epidemiologists currently has a case definition for Zika virus disease—under arboviral diseases—with two tiers: a “probable case” definition based on clinical signs and symptoms as well as the presence of certain anti-Zika antibodies, and a “confirmed case” definition for laboratory-confirmed cases based on laboratory analysis. However, because other countries may be using different testing protocols, it is unclear whether their results would be consistent with the CDC case definitions, complicating epidemiological analysis. Engaging international cooperation to establish uniform case definitions and baselines for diagnosing Zika virus disease and microcephaly can facilitate discovery of modes of transmission and causal links between Zika virus disease and microcephaly or Guillain-Barré syndrome. When the WHO declared a Public Health Emergency of International Concern on February 1, 2016, it acknowledged that there was no international standard surveillance case definition for microcephaly. 
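A short calculation illustrates how the high share of asymptomatic infections and incomplete care seeking noted above can lead to undercounting. The figures in this Python sketch are hypothetical; only the 80 percent asymptomatic estimate comes from the discussion above, and the reported case count and care-seeking rate are assumptions chosen to show the arithmetic, not to estimate actual incidence.

# Hypothetical illustration of undercounting when most infections are asymptomatic.
# The 80 percent asymptomatic share is taken from the estimate cited above;
# the reported case count and care-seeking rate are assumptions.

reported_cases = 1_000        # assumed number of counted (reported) cases
asymptomatic_share = 0.80     # estimated share of infections with no symptoms
care_seeking_rate = 0.50      # assumed share of symptomatic people who seek care and are counted

symptomatic_infections = reported_cases / care_seeking_rate
total_infections = symptomatic_infections / (1 - asymptomatic_share)

print(f"Reported cases:            {reported_cases:,}")
print(f"Implied symptomatic cases: {symptomatic_infections:,.0f}")
print(f"Implied total infections:  {total_infections:,.0f}")

Under these assumptions, every reported case corresponds to roughly 10 infections, which is why surveillance counts based on clinical presentation alone can substantially understate the true incidence.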
Problems with changing case definitions, lack of sufficient information on underlying causes and brain pathology, and lack of baseline data make it difficult to accurately determine the level of increase of microcephaly in Brazil, and how much is due to the Zika virus. Some researchers offered several possible explanations for the observed increase in microcephaly cases, other than an actual increase in cases as a result of Zika virus infection. First, because of the recent attention, newborn babies with visible cranial deformities are likely to be fast-tracked for in-depth examination. This temporal increase in suspected cases of microcephaly could also be distorted by both raised awareness, with more children than usual being measured and reported, and the changing definition of microcephaly over time. Although there is evidence of an increased number of cases of microcephaly in Brazil, these researchers demonstrated that the count of suspected cases relied on a screening test that had very low specificity and therefore overestimated the actual number of cases. According to the CDC, there are currently two Zika diagnostic tests available in the United States: reverse transcription polymerase chain reaction (RT-PCR) and Immunoglobulin M (IgM) testing followed by the Plaque Reduction Neutralization Test (PRNT). The current RT-PCR test can detect infection only during the period of illness when the virus is present. According to an NIH official, the PRNT diagnostic test is the most specific for antibody detection, but is cumbersome and not suitable for screening a large number of individuals. When detecting antibodies, diagnosing cases of Zika virus disease and differentiating it from diseases caused by other flaviviruses, such as dengue or yellow fever, is difficult if someone has been infected by another flavivirus. Some tests for Zika virus antibodies suffer from cross-reactivity with antibodies to similar viruses, such as from dengue virus disease, meaning that tests using these antibodies for detection are not specific for the Zika virus. For example, a person previously infected with another flavivirus such as dengue could be falsely identified as also having been exposed to the Zika virus (and vice-versa). In addition, new outbreaks like Zika may not have known patterns or trends, making effective surveillance challenging. For example, after the onset of illness, Zika virus remains in the blood for about 5-7 days, according to the CDC. After this period of viremia, diagnosis of Zika virus disease relies on detection of antibodies against the Zika virus. Since antibodies in the blood may persist longer than the virus, a positive result for antibodies against the Zika virus indicates only that the patient was previously exposed to the Zika virus. Thus, the window for detecting the actual virus is small. It is not clear whether an antibody test could determine how long ago the patient was exposed. According to the CDC, while there are no commercially available diagnostic tests for Zika, an antibody-based test for the Zika virus (Zika MAC-ELISA) was recently authorized for Emergency Use by the FDA. One of the main limitations of this test is its inability to differentiate between infection with Zika and other closely related flaviviruses such as dengue. This test in its current form may confuse practitioners because of its lack of specificity.
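The effect of low specificity on case counts, raised above for both the Brazilian microcephaly screening and for cross-reactive antibody tests, can be made concrete with a standard positive-predictive-value calculation. The sensitivity, specificity, and prevalence values in this Python sketch are hypothetical and do not describe any actual Zika assay.

# Hypothetical illustration of how low specificity inflates apparent case counts.
# Sensitivity, specificity, and true prevalence below are assumptions chosen
# only to show the arithmetic, not characteristics of any actual Zika test.

population = 100_000
true_prevalence = 0.02        # assumed: 2 percent of those screened are truly infected
sensitivity = 0.95            # assumed: share of true cases the test detects
specificity = 0.80            # assumed: share of non-cases the test correctly clears

true_cases = population * true_prevalence
non_cases = population - true_cases

true_positives = true_cases * sensitivity
false_positives = non_cases * (1 - specificity)
apparent_cases = true_positives + false_positives
positive_predictive_value = true_positives / apparent_cases

print(f"True cases:                     {true_cases:,.0f}")
print(f"Apparent (test-positive) cases: {apparent_cases:,.0f}")
print(f"Positive predictive value:      {positive_predictive_value:.0%}")

With these assumed values, the number of test-positive results is more than 10 times the true number of cases, which is the mechanism by which a low-specificity screen can overstate incidence.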
Since closely related flaviviruses such as dengue may also be present in Zika outbreak countries, the utilization of this assay could wrongly identify non-Zika-virus-associated infections, thus putting an extra burden on the laboratory and health care systems and distorting the epidemiological analyses. Adding to the limitations of these diagnostic systems are limited numbers of facilities able to perform definitive confirmatory testing, particularly in the developing world. The WHO is undertaking an analysis of diagnostics under development, developing target product profiles, facilitating the preparation and characterization of reference reagents, and setting up an Emergency Use Assessment and Listing mechanism for priority Zika diagnostics. Because Zika virus disease cannot yet be prevented by drugs or vaccines, vector (mosquito) control remains a critical factor in mitigating risks associated with this disease. The Aedes aegypti and Aedes albopictus mosquitoes are present around the world, as well as in the United States. Figure 3 shows a predicted distribution and intensity of the Aedes aegypti and Aedes albopictus mosquitoes and indicates that the southeastern United States, particularly the Gulf Coast states, could be at risk of exposure in the near future. Figure 4 shows the approximate distribution of Aedes aegypti and Aedes albopictus mosquitoes in the United States in more detail. There are both large-scale and personal methods for mosquito control. We provide a brief preliminary overview of some population-scale control methods identified in the literature, agency documents, and interviews with industry officials and academicians, which include three potentially overlapping categories: (1) standing water treatment, (2) insecticides, and (3) emerging technologies. The effectiveness of some of these emerging techniques remains to be demonstrated, but they have the potential to be additional tools in mosquito control. Personal methods, such as use of repellents, nets, or long-sleeved clothing and staying indoors in air-conditioned locations, are outside the scope of this report. Standing water treatment for mosquito control can be achieved by the physical reduction of bodies of water or by treating the water with chemicals that kill mosquito larvae or interfere with their development. According to CDC documentation, these treatments include use of certain bacteria or insecticides that mimic mosquito hormones, which prevent mosquitoes from maturing or kill them as larvae. Another method involves coating water with a thin film of oil to suffocate immature larvae. According to CDC documentation, insecticide dispersal relies on various techniques such as space spraying by trucks or aircraft, or residual spraying, which entails coating surfaces with insecticide. Spraying is one method currently used in the United States. CDC documentation and scientific literature have established that in the long term, the effectiveness of spraying may be diminished due to insecticide resistance, concerns over environmental exposure, and the questionable efficacy of externally delivered wide-area fogging or spraying. Additionally, the WHO notes that reactive space spraying during emergencies has a low impact unless integrated with other control strategies. Researchers are developing new chemicals that are more targeted toward mosquitoes and are attempting to alleviate human toxicity issues.
The use of insecticides as control methods effectively reduced mosquito-borne diseases, including malaria and yellow fever, in most of the world in the 1940s-1960s. The WHO determined that maintaining vector control after a disease subsides is complicated by dwindling resources. Indeed, by the 1980s-1990s, many dangerous vector-borne diseases re-emerged or spread to new regions. Additionally, the spread of some diseases, such as dengue virus disease, can be attributed to a combination of mosquito, viral, and human factors. To address the resulting complexities of such disease transmissions, the WHO uses "Integrated Vector Management," which leverages multiple control methods based on surveillance and evaluation of involved insects and disease epidemiology. Emerging technologies include (1) use of biological control methods, (2) genetically-modified mosquitoes, and (3) auto-dissemination traps. Based on scientific literature, these technologies show some promise in studies to overcome issues associated with use of insecticides, such as insecticide resistance. Many of these methods have been tested in smaller-scale controlled field trials internationally. We have not done an independent, comprehensive evaluation of these technologies, due to time limitations. Biological control methods include introducing natural predators of mosquitoes or their eggs and using bacteria to prevent disease transmission to humans. For example, certain small crustaceans and certain fish eat mosquito larvae. The suitability of this approach for the Zika virus is uncertain because the primary mosquito species identified for Zika transmission are the Aedes aegypti and Aedes albopictus, which can breed in very small volumes of water, such as those found in tin cans or in plates under potted plants. Scientific literature has identified another approach—using bacteria to reduce disease transmission from mosquitoes to humans. These bacteria, called Wolbachia, can be transferred from mosquitoes to their eggs, thereby propagating this effect to future generations. This tactic has been demonstrated in laboratory environments and is undergoing field trials internationally, particularly in areas affected by the dengue virus. Control of the disease-transmitting mosquito population using genetically modified mosquitoes can potentially be achieved in different ways. Some genetically-modified mosquitoes are engineered with a "lethal" gene that constantly makes a protein that kills the mosquito larvae. According to one company creating these mosquitoes, the use of genetically modified mosquitoes allows for population control by introducing male genetically-modified mosquitoes that transfer this lethal gene to the female mosquito's eggs. As a result, mosquito larvae with the inherited lethal gene die. The company claims it has achieved a 90 percent reduction in mosquito populations using its method of releasing the modified male mosquitoes 1-3 times weekly over a period of months. The modified mosquitoes do not persist in the environment, as released mosquitoes generally die out on their own. Given the estimated 200-meter range of a male mosquito during its lifetime, scalability may be challenging. Another genetically-modified mosquito method incorporates viral resistance into mosquitoes, with the goal of replacing existing populations of mosquitoes with one less capable of disease transmission, rather than reducing the number of mosquitoes.
Scientific literature indicates there may be public opposition to release of genetically-modified mosquitoes for either procedure due to uncertainties about their effect on people and the environment, or other unknown consequences. Mosquito control by auto-dissemination traps functions similarly to insecticides but relies on containers coated with similar chemicals. After a mosquito lands on one of these containers, it is contaminated with the chemicals and subsequently transfers them to the locations where it lays eggs, which may result in larval death. In the case of the Aedes aegypti mosquito, WHO documentation indicates these locations tend to be small containers, so auto-dissemination is particularly suited to these mosquitoes. Auto-dissemination traps have shown 42-98 percent decreases in Aedes aegypti mosquito populations in field trials. NIH and CDC have identified areas of high priority for research related to the Zika virus. In addition, the Administration's $1.9 billion emergency funding request to combat the spread of the Zika virus is intended to provide resources to both NIH and CDC to this end. If approved, the NIH would receive $130 million to support vaccine development and other work related to Zika and other mosquito-borne diseases such as chikungunya. The CDC would receive $828 million, including $225 million for grants and technical assistance, in part to expand mosquito or vector control. The FDA would receive $10 million, in part to support the approval of new diagnostic tests and evaluation of treatment efficacy. NIH has identified areas of high priority, including: basic research to understand viral replication, pathogenesis, and transmission, as well as the biology of the mosquito vectors; potential interactions with co-infections such as dengue and yellow fever viruses; animal models of Zika virus infection; and novel vector control methods; and pursuing Zika virus research to develop sensitive, specific, and rapid clinical diagnostic tests; drug treatments for Zika virus as well as broad-spectrum therapeutics that treat multiple flaviviral infections; and effective vaccines and vaccination strategies. On February 5, 2016, several NIH institutes issued a notice to researchers indicating NIH's interest in supporting research to understand transmission of the Zika virus, optimal screening and management in pregnancy, and the mechanisms by which the Zika virus affects the developing nervous system, including potential links to microcephaly. This notice was followed by a Funding Opportunity Announcement issued on February 19, 2016, to create an expedited mechanism for funding exploratory and developmental research projects on these topics. CDC has identified priority areas of research focus on Zika, including: determining the link between Zika virus infections and the birth defect microcephaly and measuring changes in incidence rates of the birth defect over time; improving diagnostics for the Zika virus, including advanced methods to refine tests and support advanced developments for vector control; and enhancing international capacity for virus surveillance, expanding laboratory testing, and health care provider testing in countries at highest risk of Zika virus outbreaks. These research activities are intended to supplement other activities for response, readiness, and surveillance.
The priority research areas identified by NIH and CDC are ambitious, and the agencies may face some challenges in implementing this agenda, including the following: Given that there are few known cases in the United States, NIH and CDC may have to rely on the cooperation of other countries with a sufficient number of cases in order to carry out the proposed research. However, data from other countries may be different due to different definitions of Zika virus disease and microcephaly. Demonstrating the link between the Zika virus and microcephaly may depend not only on the presence of the virus, but also on environmental and nutritional factors. In addition, shifting case definitions and a lack of baseline data make it difficult to determine the increase, if any, in microcephaly and how much can be attributed to the Zika virus. The presence of a high percentage of asymptomatic cases makes it difficult to conduct epidemiological studies, including identifying exposed and unexposed individuals for case-control studies. Prior infection or co-infection with another virus such as dengue may complicate any analyses. NIH officials told us that prior work on similar viruses has allowed them to make rapid progress on both a DNA-based vaccine (based on prior work on West Nile Virus) and a live attenuated virus vaccine modeled after the dengue virus vaccine that is currently in phase 3 clinical trials in Brazil. NIH officials told us that prior "platform" work on similar viruses has expedited the response to the Zika outbreak and the development of diagnostic tests and vaccines. With regard to the dengue vaccine, NIH provided us with the following timeline: it has taken more than 16 years, and trials are not yet complete. NIH officials told us that, given their past experience with the development of a vaccine for dengue fever, a vaccine for Zika could be ready for use in an emergency situation in three to four years, in the best-case scenario. However, when we asked NIH about this estimate, NIH stated that the National Institute of Allergy and Infectious Diseases plans to begin a Phase I clinical trial within this calendar year. If a candidate vaccine shows promise in Phase I testing, additional clinical testing could begin by 2017 in countries where the disease is found, if the outbreak is still ongoing. The progress of these additional tests, and whether they can contribute to successful licensure, depends on a number of factors, including scientific and technical progress, as well as the size of any ongoing Zika outbreaks during clinical testing. For this reason, it is difficult to provide an exact estimate for the time it will take to develop a Zika vaccine from preclinical studies through clinical testing and licensure. Zika virus disease poses new challenges to vaccine development and testing. This disease has specific and important implications for pregnant women. There are substantial knowledge gaps in current understanding of Zika, irrespective of the affected population. Since Zika virus disease is associated with, and may cause, adverse fetal outcomes, pregnant women are at particular risk and may benefit from measures such as vaccines. There are several current scientific and structural barriers to developing and testing vaccines for pregnant women. Overcoming these barriers may extend timeframes for vaccine testing and approval. The information we have from NIH and our prior work suggests that development of a Zika virus vaccine may take longer than anticipated by NIH.
Biosurveillance: Ongoing Challenges and Future Considerations for DHS Biosurveillance Efforts. GAO-16-413T. Washington, D.C.: February 11, 2016. Air Travel and Communicable Diseases: Comprehensive Federal Plan Needed for U.S. Aviation System's Preparedness. GAO-16-127. Washington, D.C.: December 16, 2015. Emerging Animal Diseases: Actions Needed to Better Position USDA to Address Future Risks. GAO-16-132. Washington, D.C.: December 15, 2015. Climate Change: HHS Could Take Further Steps to Enhance Understanding of Public Health Risks. GAO-16-122. Washington, D.C.: October 5, 2015. Biosurveillance: Challenges and Options for the National Biosurveillance Integration Center. GAO-15-793. Washington, D.C.: September 24, 2015. Biosurveillance: Additional Planning, Oversight, and Coordination Needed to Enhance National Capability. GAO-15-664T. Washington, D.C.: July 8, 2015. Federal Veterinarians: Efforts Needed to Improve Workforce Planning. GAO-15-495. Washington, D.C.: May 26, 2015. Biological Defense: DOD Has Strengthened Coordination on Medical Countermeasures but Can Improve Its Process for Threat Prioritization. GAO-14-442. Washington, D.C.: May 15, 2014. National Preparedness: HHS Has Funded Flexible Manufacturing Activities for Medical Countermeasures, but It Is Too Soon to Assess Their Effect. GAO-14-329. Washington, D.C.: March 31, 2014. National Preparedness: HHS Is Monitoring the Progress of Its Medical Countermeasure Efforts but Has Not Provided Previously Recommended Spending Estimates. GAO-14-90. Washington, D.C.: December 27, 2013. Homeland Security: An Overall Strategy Is Needed to Strengthen Disease Surveillance in Livestock and Poultry. GAO-13-424. Washington, D.C.: May 21, 2013. National Preparedness: Efforts to Address the Medical Needs of Children in a Chemical, Biological, Radiological, or Nuclear Incident. GAO-13-438. Washington, D.C.: April 30, 2013. Influenza: Progress Made in Responding to Seasonal and Pandemic Outbreaks. GAO-13-374T. Washington, D.C.: February 13, 2013. Homeland Security: Agriculture Inspection Program Has Made Some Improvements, but Management Challenges Persist. GAO-12-885. Washington, D.C.: September 27, 2012. Biosurveillance: Nonfederal Capabilities Should Be Considered in Creating a National Biosurveillance Strategy. GAO-12-55. Washington, D.C.: October 31, 2011. National Preparedness: Improvements Needed for Acquiring Medical Countermeasures to Threats from Terrorism and Other Sources. GAO-12-121. Washington, D.C.: October 26, 2011. Homeland Security: Challenges for the Food and Agriculture Sector in Responding to Potential Terrorist Attacks and Natural Disasters. GAO-11-946T. Washington, D.C.: September 13, 2011. Homeland Security: Actions Needed to Improve Response to Potential Terrorist Attacks and Natural Disasters Affecting Food and Agriculture. GAO-11-652. Washington, D.C.: August 19, 2011. Influenza Vaccine: Federal Investments in Alternative Technologies and Challenges to Development and Licensure. GAO-11-435. Washington, D.C.: June 27, 2011. Influenza Pandemic: Lessons from the H1N1 Pandemic Should Be Incorporated into Future Planning. GAO-11-632. Washington, D.C.: June 27, 2011. Live Animal Imports: Agencies Need Better Collaboration to Reduce the Risk of Animal-Related Diseases. GAO-11-9. Washington, D.C.: November 8, 2010. Biosurveillance: Efforts to Develop a National Biosurveillance Capability Need a National Strategy and a Designated Leader. GAO-10-645. Washington, D.C.: June 30, 2010.
Veterinarian Workforce: The Federal Government Lacks a Comprehensive Understanding of Its Capacity to Protect Animal and Public Health. GAO-09-424T. Washington, D.C.: February 26, 2009.
Influenza Pandemic: Sustaining Focus on the Nation's Planning and Preparedness Efforts. GAO-09-334. Washington, D.C.: February 26, 2009.
Veterinarian Workforce: Actions Are Needed to Ensure Sufficient Capacity for Protecting Public and Animal Health. GAO-09-178. Washington, D.C.: February 4, 2009.
Influenza Pandemic: HHS Needs to Continue Its Actions and Finalize Guidance for Pharmaceutical Interventions. GAO-08-671. Washington, D.C.: September 30, 2008.
Emergency Preparedness: States Are Planning for Medical Surge, but Could Benefit from Shared Guidance for Allocating Scarce Medical Resources. GAO-08-668. Washington, D.C.: June 13, 2008.
Influenza Pandemic: Efforts Underway to Address Constraints on Using Antivirals and Vaccines to Forestall a Pandemic. GAO-08-92. Washington, D.C.: December 21, 2007.
Influenza Vaccine: Issues Related to Production, Distribution, and Public Health Messages. GAO-08-27. Washington, D.C.: October 31, 2007.
Global Health: U.S. Agencies Support Programs to Build Overseas Capacity for Infectious Disease Surveillance. GAO-08-138T. Washington, D.C.: October 4, 2007.
Agricultural Quarantine Inspection Program: Management Problems May Increase Vulnerability of U.S. Agriculture to Foreign Pests and Diseases. GAO-08-96T. Washington, D.C.: October 3, 2007.
Global Health: U.S. Agencies Support Programs to Build Overseas Capacity for Infectious Disease Surveillance. GAO-07-1186. Washington, D.C.: September 27, 2007.
Influenza Pandemic: DOD Combatant Commands' Preparedness Efforts Could Benefit from More Clearly Defined Roles, Resources, and Risk Mitigation. GAO-07-696. Washington, D.C.: June 20, 2007.
Influenza Pandemic: Efforts to Forestall Onset Are Under Way; Identifying Countries at Greatest Risk Entails Challenges. GAO-07-604. Washington, D.C.: June 20, 2007.
Avian Influenza: USDA Has Taken Important Steps to Prepare for Outbreaks, but Better Planning Could Improve Response. GAO-07-652. Washington, D.C.: June 11, 2007.
Influenza Pandemic: DOD Has Taken Important Actions to Prepare, but Accountability, Funding, and Communications Need to be Clearer and Focused Departmentwide. GAO-06-1042. Washington, D.C.: September 21, 2006.
Homeland Security: Management and Coordination Problems Increase the Vulnerability of U.S. Agriculture to Foreign Pests and Disease. GAO-06-644. Washington, D.C.: May 19, 2006.
Influenza Vaccine: Shortages in 2004-05 Season Underscore Need for Better Preparation. GAO-05-984. Washington, D.C.: September 30, 2005.
Emerging infectious diseases constitute a clear and persistent threat to the health and well-being of people and animals around the world. The Zika virus, which at present appears to be primarily transmitted to humans by infected mosquitos, can cause symptoms including fever, rash, and joint pain. A large outbreak that started in May 2015 is ongoing in Brazil. As of February 24, 2016, over 100 U.S. 
travel-associated Zika virus disease cases have been reported. Due to concerns about its potential impact, you asked GAO to present preliminary observations on the Zika virus. This statement addresses (1) the epidemiology and transmission of Zika virus disease, including reporting on the incidence of disease and what is known about its link to microcephaly; (2) detection and testing methods; (3) methods for mosquito control; and (4) the proposed federal research agenda as it relates to the Zika virus and Zika virus disease. To report on these questions, GAO reviewed relevant peer-reviewed scientific literature, epidemiological alerts, agency documents, and prior GAO work from 2003-2016 on related topics; consulted experts in the fields of virology, infectious diseases, and vector control, including industry representatives; and interviewed officials of the Centers for Disease Control and Prevention (CDC) and the National Institutes of Health (NIH). While several countries have reported outbreaks of Zika virus disease—which appears to be primarily transmitted to humans by mosquitos—unanswered questions remain regarding the epidemiology and transmission of the disease. Many factors—including a large number of asymptomatic patients and patients with mild symptoms, and the lack of a consistent international case definition of Zika virus disease—complicate understanding of the virus and may hinder responses to the current outbreak. For example, an estimated 80 percent of individuals infected with the Zika virus may not manifest clinical symptoms. As a result, the incidence of infection may be underestimated. Questions also remain regarding the strength of the association between Zika virus infection and two other conditions: microcephaly and Guillain-Barré syndrome. A lack of validated diagnostic tests, consistent international case definitions, and trend information may also contribute to difficulty in estimating the prevalence of the virus. The United States uses two diagnostic tests for Zika. According to CDC, while there are no commercially available diagnostic tests for Zika, an antibody-based test for the Zika virus was recently authorized for emergency use by the U.S. Food and Drug Administration. Diagnosing Zika virus infection is also complicated because Zika is difficult to differentiate from similar diseases, such as dengue or yellow fever. For example, a person previously infected with dengue could be falsely identified as also having been exposed to the Zika virus (and vice versa). Moreover, the World Health Organization has acknowledged the need for a consistent case definition—that is, a set of uniform criteria to define the disease for public health surveillance and to determine who is included in the count and who is excluded. Additionally, a lack of pattern and trend data has made surveillance challenging. Because Zika virus disease cannot yet be prevented by drugs or vaccines, vector (mosquito) control remains a critical factor in preventing and mitigating the occurrence of this disease. There are three methods for mosquito control: (1) standing water treatment, (2) insecticides, and (3) emerging technologies. Mosquito control has been achieved in some locations by methods such as reducing or chemically treating water sources where mosquitoes breed or mature, or by insecticide dispersal. 
Emerging technologies, including biological control methods (such as infecting mosquitoes with bacteria), genetically modified mosquitoes, and auto-dissemination traps, show some promise but are still in development and testing phases. NIH and CDC have identified several high-priority areas of research. Research priorities include basic research to understand viral replication, pathogenesis, and transmission, as well as the biology of the mosquito vectors; potential interactions with co-infections such as dengue and yellow fever viruses; linkages between Zika and the birth defect microcephaly; improving diagnostic tests; vaccine development; and novel vector control methods. These efforts are ambitious, and agencies may face challenges in implementing this agenda. GAO is not making recommendations at this time.
Proliferation networks use commercial and business practices to obtain materials, technology, and knowledge to further nuclear, chemical, biological, and radiological programs. Nuclear proliferation networks seek to circumvent national and international restrictions against procuring the technologies necessary for developing nuclear weapons programs. These networks exploit weak export control systems, procure dual-use goods with both nuclear and common industrial uses, and employ deceptive tactics such as front companies and falsified documents, according to the Department of Energy. The A.Q. Khan network, established by the former head of Pakistan’s nuclear weapons program, supplied Pakistan with nuclear technology for its national weapons program. However, it became a network that provided nuclear technology to any state for profit. The development of this network illustrates how determined proliferators can effectively circumvent existing export controls to acquire sensitive nuclear-related and dual-use technologies. According to Energy, the A.Q. Khan case illustrates the scope and magnitude of the threat of nuclear networks— how both weak export control systems and system gaps allowed a network to procure sensitive materials from states worldwide. The network also highlighted the role that companies in several countries, such as Malaysia, played in unwittingly facilitating sales as suppliers of technology or points of transit. According to open-source reporting, countries where A.Q. Khan proliferation network activities occurred included Germany, Japan, Malaysia, the Netherlands, Pakistan, Republic of Korea, Singapore, South Africa, Turkey, United Arab Emirates (UAE), and United Kingdom. The multilateral nonproliferation regime, which, among other purposes, attempts to counter nuclear networks, consists of the Non-Proliferation Treaty (NPT), International Atomic Energy Agency (IAEA) inspection regime, United Nations (UN) Security Council Resolution 1540, Nuclear Suppliers Group (NSG), and the Proliferation Security Initiative (PSI). The regime also includes multilateral and national assistance programs and national export controls and laws. Entered into force on March 5, 1970, NPT obligates nuclear weapon states not to transfer nuclear weapons or other nuclear explosive devices to any recipient, and not to assist, encourage, or induce any nonnuclear weapon state to manufacture or otherwise acquire nuclear weapons or other nuclear explosive devices. Under the treaty, each nonnuclear weapon state pledges not to receive, manufacture, or otherwise acquire nuclear weapons or other nuclear explosive devices, and not to seek or receive assistance in their manufacture. NPT also obliges each nonnuclear weapon state to accept comprehensive international safeguards, including inspection, through agreements negotiated with IAEA. The intent of these safeguards is to deter and detect the diversion of nuclear material for nuclear explosive purposes. Relevant U.S. assistance programs on export and border controls include EXBS and INECP. State’s EXBS program assists foreign governments in strengthening their export controls by improving their legal and regulatory frameworks, licensing processes, border control and other enforcement capabilities, outreach to industry, and interagency coordination. The mission of Energy’s INECP is to prevent the proliferation of WMD and WMD-related material, equipment, and technology by helping other countries develop effective national export control systems. 
Total EXBS funding for fiscal years 2003 through 2006 was about $175 million, and total INECP funding was about $30 million. Since the terrorist attacks of September 11, 2001, and the exposure of the A.Q. Khan nuclear proliferation network, the President and U.S. government agencies involved in national enforcement activities have emphasized the importance of preventing WMD proliferation, including nuclear proliferation. On a national level, the United States endeavors to counter nuclear proliferation by enforcing laws that control the export of materials—including dual-use items—that could be used to make a nuclear weapon and by applying criminal or administrative penalties to proliferators. The Departments of Commerce, Homeland Security, Justice, State, and Treasury have responsibilities for enforcing various laws that relate to nuclear proliferation. The U.S. government's control over the export of defense items and nuclear-related dual-use items is divided primarily between two departments—State and Commerce, respectively. Support for enforcement activities comes primarily from Commerce, through its Bureau of Industry and Security's Office of Export Enforcement; DHS, through its Customs and Border Protection (CBP) and Immigration and Customs Enforcement (ICE); and Justice, through the Federal Bureau of Investigation (FBI) and the United States Attorneys' Offices. Export enforcement involves inspecting items to be shipped, investigating potential violations of export control laws, and punishing export control violators. The United States has initiated a range of multilateral efforts and proposals to counter nuclear proliferation networks. Although multilateral organizations have adopted some U.S. proposals that would help address illicit nuclear proliferation networks, they have not adopted others. First, the United States negotiated the passage of UN Security Council Resolution 1540, which obligated all member states to adopt laws and regulations prohibiting the proliferation of WMD. Second, the U.S. government led NSG to conduct several activities aimed at combating proliferation networks, including development of watch lists; however, two U.S. proposals to NSG have not been adopted. Third, with U.S. support, IAEA has taken several actions to address proliferation networks, such as establishing a unit intended to analyze covert nuclear trade activities. However, IAEA has not yet adopted a recommendation drafted in June 2005 that calls on member states to provide IAEA with information on their exports to improve the agency's ability to detect possible clandestine nuclear activities. Finally, the U.S. government has led efforts to establish PSI. The United States negotiated the passage of UN Security Council Resolution 1540, which the Security Council adopted in April 2004, obligating all member states to adopt laws and regulations prohibiting the proliferation of WMD and to maintain and enforce adequate export controls. Under UN Security Council Resolution 1540, all states have three primary obligations relating to nuclear, chemical, and biological weapons, and their delivery systems. 
They are to (1) refrain from providing support to nonstate actors seeking such items; (2) prohibit nonstate actors from acquiring, using, and attempting to acquire and use such items, and prohibit nonstate actors from participating in, assisting, or financing such activities; and (3) put in place and enforce effective measures to control these items and related material to prevent their proliferation. Member states have begun implementing its provisions by submitting required reports on their export control laws to a committee designated as the 1540 committee. The committee also has been tasked with identifying the assistance needs of countries and coordinating their requests for assistance with offers from other countries. The U.S. government led NSG in several activities to combat proliferation networks, including the development of watch lists. However, NSG has not adopted two U.S. proposals that would commit members to refrain from exporting certain technologies to states that do not already have the capability to use them and to countries that have not agreed to allow IAEA additional rights to inspect any facilities suspected of covert nuclear activities. NSG, established in 1975, is a multilateral export control regime with 45 participating governments. The purpose of NSG is to prevent the proliferation of nuclear weapons through export controls of nuclear and nuclear-related material, equipment, and technology, without hindering international cooperation on peaceful uses of nuclear energy. NSG periodically updates and strengthens its guidelines on how member states should control and license sensitive technologies and maintain lists of the technologies to be controlled. However, NSG, like other multilateral export control regimes, is a consensus-based organization and depends on the like-mindedness or cohesion of its members to be effective. NSG has undertaken several activities to help shut down proliferation networks. For example, in May 2004, NSG noted its concern over the discovery of a covert international proliferation trafficking network, through which sensitive nuclear-related equipment had found its way to Libya. To address this concern, the United States developed national procurement watch lists for all supplier states as a means to help block further procurement of nuclear-relevant items that are not formally controlled by placement on export control lists. To slow down North Korea's and Iran's work on their nuclear programs, the watch lists focus on items of interest to those countries, according to Energy. The lists include items that could be used to enrich uranium, reprocess spent nuclear reactor fuel, and fabricate fuel for nuclear reactors. Both NSG members and nonmembers use the lists. Through U.S. leadership, NSG also has conducted outreach to non-NSG members, creating awareness of issues related to the supply of sensitive technology, and pressing for adherence to NSG guidelines. For example, NSG worked with existing international organizations, such as IAEA and the UN Security Council Resolution 1540 committee, and with nonmembers to help close gaps in the nonproliferation regime that proliferation networks seek to exploit. NSG has not adopted two U.S. proposals announced by the President in 2004. The first proposal would commit members not to export certain nuclear technology to states that do not have the capability to develop material for nuclear fuel or nuclear weapons. 
Also, NSG has not adopted a second proposal under which NSG members would refrain from providing nuclear-related technologies to countries that have not agreed to allow IAEA additional rights to inspect any facilities suspected of covert nuclear activities. With regard to the first proposal, the President announced that NSG members should refuse to sell enrichment and reprocessing equipment and technologies to any state that does not already possess full-scale, functioning enrichment and reprocessing plants. This step, according to the President, would prevent new states from developing the means to produce fissile material for nuclear bombs. State and Energy officials stated that the first proposal has not yet been adopted within NSG because it favors states that already have enrichment and reprocessing capability over those that do not. According to State officials, states in the European Union (EU) are opposed to this proposal because it violates EU internal free trade policies. However, we could not independently determine why NSG has not adopted these proposals because State did not facilitate our travel to meet with representatives of NSG members in Vienna, Austria. NSG also has not yet adopted the second U.S. proposal announced in 2004, which would restrict exports of nuclear-related technology to countries that have not adopted IAEA's more stringent safeguards inspection agreements. In 2004, the President proposed that, by the next year, only states that have signed the Additional Protocol would be allowed to import equipment for their civilian nuclear programs. However, other countries have been hesitant to implement the Additional Protocol for various reasons, including an unwillingness to submit to intrusive inspections. The U.S. government supported IAEA's establishment of several activities over the past several years to help combat nuclear proliferation trafficking and network activities. However, IAEA has not yet adopted a recommendation that calls for member states to provide it with export data that would allow the agency to better detect covert nuclear activities. IAEA is responsible for inspecting civilian nuclear facilities worldwide to ensure they are used exclusively for peaceful purposes. In 1997, IAEA adopted a new arrangement, called the Additional Protocol, for existing safeguards agreements under NPT that is designed to give IAEA a stronger role and more effective tools for conducting worldwide inspections. IAEA established several activities supported by the United States to help combat nuclear proliferation trafficking and network activities. These included the following:
Nuclear Trade and Technology Analysis Unit. Following the revelations about extensive covert networks procuring and supplying sensitive nuclear technology, IAEA established a new unit in November 2004. It was intended to help analyze patterns and trends in nuclear trade to identify covert nuclear trade activities.
Illicit Trafficking Database. IAEA established its Illicit Trafficking Database in 1995 to facilitate the exchange of authoritative information among states on incidents of illicit trafficking and other related unauthorized activities involving nuclear and other radioactive materials. The database contains information about such incidents that has been confirmed by the states involved.
Nuclear Security Fund. 
IAEA established a fund in March 2002 to support its expanded nuclear security program, including developing international standards and providing training and assistance to combat nuclear smuggling. Through 2006, pledges from IAEA members totaled nearly $74 million, with about $34 million from the United States. IAEA has not yet implemented a draft recommendation that member states provide it with relevant information on their exports so IAEA can improve its ability to detect possible undeclared nuclear activities. Under this recommendation, members would provide information on their exports of specified equipment and nonnuclear material, procurement inquiries, export denials, and relevant information from commercial suppliers, according to State officials. However, according to these officials, there is currently no mandate for members to do so. The United States established and gained support for PSI, a U.S.-led effort to work with other countries to interrupt the transfers of sensitive items to proliferators. PSI is a global effort to stop trafficking of WMD, their delivery systems, and related materials to and from states and nonstate actors of proliferation concern worldwide. Launched by the President on May 31, 2003, PSI is a set of voluntary activities, not a formal treaty-based organization, to stop proliferation-related shipments of WMD technologies. PSI interdiction training exercises and other operational efforts are intended to help participating states work together in a coordinated and effective manner to stop, search, and seize shipments. In September 2003, the countries participating in PSI at that time agreed to its statement of interdiction principles. The statement identifies specific steps participants can take to effectively interdict WMD-related trafficking and prevent proliferation. As of July 2007, PSI participants had conducted 28 exercises (maritime, air, land, or combined) to practice interdictions, held 15 operational experts group meetings to discuss proliferation concerns and plan future exercises, and hosted 4 workshops to acquaint industries with PSI goals and principles. State lists several countries as PSI participants that open-source reporting also names as locations of nuclear proliferation network activity. Listed PSI participants are Germany, Japan, Singapore, Turkey, UAE, and the United Kingdom. PSI nonparticipants are Malaysia, Pakistan, Republic of Korea, and South Africa. (See our September 2006 classified report on PSI.) The U.S. government has focused on bilateral export control assistance to foreign countries to combat the sale of illicit nuclear-related technology through proliferation networks. Three programs, operated by State, Energy, and Defense, provide this assistance. However, the impact of this assistance is difficult to determine because State did not evaluate either the proliferation risk for all of the countries in which network activities are alleged to have occurred or the results of its assistance efforts. In contrast, Energy performed risk analyses and program assessments for all of its 45 participating countries. Although there were limitations in the assessments of the programs, officials from Energy and State said that some positive changes occurred as a result of U.S. export and border control assistance. To combat nuclear networks, State officials said they focused on addressing export control problems in other countries. 
State’s EXBS assists foreign governments in strengthening their export controls by improving their legal and regulatory frameworks, licensing processes, border control and other enforcement capabilities, outreach to industry, and interagency coordination. EXBS partners with a number of U.S. agencies and the private sector to provide capacity-building training, technical exchanges and workshops, regional conferences and seminars, and inspection and interdiction equipment. For example, EXBS completed an advanced workshop on regulations in July 2006 with Pakistani officials and sponsored a forum on technical aspects of regulations in September 2006 through a private contractor. In Malaysia, EXBS sponsored a workshop on legal aspects of regulations in August 2005 and another workshop with Malaysian officials in Washington, D.C., on export licensing in February 2007. Commerce conducted these workshops. In addition, DHS stated that ICE is the primary law enforcement partner to EXBS for training its counterpart agencies to investigate, conduct surveillance and undercover operations, detect, and interdict unauthorized transfers of WMD-related items. During 2007 and 2008, according to DHS, ICE conducted or planned to conduct training in several countries where A.Q. Khan network activities reportedly occurred, including Malaysia, Pakistan, Singapore, Republic of Korea, Turkey, and UAE. Energy’s INECP provides bilateral assistance to governments to prevent the proliferation of WMD and WMD-related material, equipment, and technology by working with governments worldwide to develop effective national export control systems. INECP receives funding from and collaborates with the EXBS and Homeland Security’s CBP and also works with other agencies such as the Coast Guard. For example, in Turkey, INECP conducted training to help customs inspectors identify nuclear- related commodities in March 2004 and September 2005. INECP has conducted similar training in Pakistan, Singapore, and Republic of Korea. In addition, DOD’s International Counterproliferation Program (ICP) offers equipment, training, and advice to help countries prevent and counter WMD proliferation, including border control assistance. The majority of ICP’s programs have been in countries in the former Soviet Union, the Balkans, and the Baltics, with total funding of about $29 million for fiscal years 2003 through 2006. ICP provided about $86,000 for training in Singapore in fiscal year 2006. Overall, the U.S. provided about $234 million dollars in export control assistance to 66 countries between fiscal years 2003 and 2006 through these three programs, with EXBS as the largest contributor to U.S. export control assistance (see fig. 1). From fiscal years 2003 through 2006, the U.S. government provided about $9 million, or 4 percent of the overall total, to seven countries in which A.Q. Khan network activities reportedly occurred: Malaysia, Pakistan, Republic of Korea, Singapore, South Africa, Turkey, and the UAE. From fiscal years 2003 to 2006, EXBS provided about $7 million to six of these countries, while INECP provided nearly $2 million to the seven countries in our study. Turkey was the largest recipient of assistance among the countries in our study, and Pakistan was the second largest (see fig. 2). Despite U.S. 
government efforts to provide bilateral assistance to countries to help them improve their export control systems, it is difficult to determine the impact of these programs because State did not consistently conduct or document risk analyses as a basis for countries to receive assistance and has not assessed program performance. Although Energy and State officials said they are unable to systematically establish that their assistance has effected positive change in countries that received U.S. assistance, they said some positive change occurred during the period in which assistance was provided. While both State's and Energy's assistance programs conduct risk analyses on a country-by-country basis to prioritize assistance efforts, State did not conduct one such analysis for each country in its program and did not document the ones it conducted. The EXBS strategic plan indicates EXBS prioritizes assistance in accordance with five proliferation threat categories for which most, but not all, EXBS countries are assessed (see table 1). The EXBS strategic plan, which provides guidance for EXBS, provided a risk analysis summary for five of the six countries in our study to which EXBS provided assistance, but did not provide a risk assessment for one country. The strategic plan indicated that two of the countries in our study are at risk in all five categories, and a third country is at risk in all but category 1. A fourth country is at risk in categories 2, 4, and 5, and a fifth country is at risk in categories 3 and 5. State did not respond to our request for a risk assessment for the sixth country. Overall, the EXBS strategic plan did not provide a risk analysis for 11 of the 56 countries to which EXBS provided assistance between fiscal years 2003 and 2006. Furthermore, EXBS officials could not provide us with documentation showing the basis on which they determined the risk categories for the countries that appear in the strategic plan and said the risk analyses are not updated annually. INECP assesses country risk by measuring proliferation threat based on the capacity of the recipient country to supply or be a conduit for WMD-related goods. The assessment also takes into consideration the vulnerability of the recipient country's export control system to illicit procurement. INECP places the countries receiving assistance into one of four categories based on that country's production capacity and export control system (see table 2). All of the countries in our study to which INECP provided assistance fell into category 2: having potentially weak export control systems and high commodity production capacity. While we did not evaluate the methodology that EXBS and INECP use to perform risk assessments or prioritize their assistance, we observed that each INECP risk analysis we reviewed was more thoroughly documented than the EXBS risk analyses. For example, INECP provided us with country plans for each of the countries in our scope, which document and identify the sources of information used to determine the status of the country's export control system and its potential to supply or be a conduit for nuclear-related materials. In addition, an INECP official noted that one of the purposes of the country plans is to document the data that inform their risk analyses. Despite U.S. 
government efforts to provide bilateral assistance to countries to help them improve their export control systems, it is difficult to determine the impact of these programs because State has not assessed their performance. Specifically, State's EXBS has not performed annual program assessments for all countries receiving EXBS assistance, as required by program guidance, and has not received required data for some assessments that were conducted. INECP also requires annual program assessments, which it conducted for all of its 45 assistance recipients for fiscal years 2003 through 2006. EXBS program assessments characterize features of a country's export control system but do not evaluate the impact of U.S. training on the country. EXBS guidance specifies that recipient countries should be assessed using a revised assessment tool, which contains questions intended to determine whether the country is committed to developing an effective export control system and identify the weaknesses in the country's current system. Categories in the EXBS assessment tool, which was implemented by contractors, include an examination of various aspects of the recipient country's dual-use and munitions licensing, the country's ability to enforce its regulations, and a review of industry-government relations. In contrast, federal guidance for evaluating human capital training calls for assessing the extent to which training and development efforts contribute to improved performance and results. State contractors performed assessments in 2004 for only two of the six countries in the scope of our review that received EXBS funding, Turkey and UAE. According to a State official, these assessments were not useful for State's purposes because the contractor provided the results of the evaluations but not the data that EXBS officials said would be necessary to measure the progress of these countries in improving their export control systems. The official said the data were omitted because State did not require them in the contract. Therefore, EXBS did not receive the information it needed to construct a baseline against which to evaluate the progress of these countries. State has contracted for future assessments to be used as a baseline for determining countries' future progress. Overall, State received assessments for 34 countries—about 60 percent of the countries that received EXBS funding between 2003 and 2006—though none of these contained baseline data, according to State officials. In commenting on a draft of this report, State said that EXBS program planning takes into account other information, including open-source information, diplomatic reporting from posts, intelligence community products, and assessments and information from other U.S. government agencies. As State commented, however, these and other information sources are intended to substitute for the assessment tool only when State determines it is infeasible or impractical to use it. INECP also produces country plans that serve as program assessments for all of the 47 countries to which it provided assistance in this period. An INECP official said that the country plans are updated on an annual basis in order to track the history of assistance with each partner country and to enforce a standard process for tracking and reviewing the combined results of assistance efforts and of countries' independent efforts to implement system reforms. 
INECP officials provided us with updated annual assessments for all seven countries, which contain an analysis of each country's export control system and proposals for future assistance. While we did not evaluate the quality of Energy's assessments, INECP has updated assessments for all of its program participants, and the assessments contain the baseline data necessary for measuring future progress and are updated on an annual basis. In addition, we noted that the INECP country plans we reviewed assess the country's progress in improving its export control systems and contain recommendations for future activities. Energy and State officials said they are unable to systematically establish that their assistance has effected positive change in countries to which they provided assistance, because actions such as changing laws and implementing new regulations are undertaken by sovereign governments and are not always directly attributable to assistance efforts. However, officials from both programs said some positive change occurred during this period. For example, officials from both EXBS and INECP cited some improvements in assistance recipients' export controls that occurred after training or other types of assistance were provided. In 2006, after exchanges and consultations regarding licensing and regulations with EXBS program officers, Pakistan strengthened its export controls by further expanding its control lists, according to State officials. In addition, officials reported that Malaysia, UAE, and Pakistan drafted export control legislation during the period of EXBS engagement in each of these countries. Pakistan passed its export control law in 2004. Furthermore, INECP officials reported that their engagement with Singapore has led its government to amend its control list to adhere to all the multilateral control lists, and INECP also helped Pakistan complete adoption of the European Union control list. In addition, they said that the Republic of Korea has reported that INECP training led to several high-level investigations of illegal transfers and greater industry awareness of dual-use items. U.S. agencies engaged in export control enforcement activities are impaired from judging their progress in preventing nuclear proliferation networks because they cannot readily identify basic information on the number, nature, or details of all their enforcement activities involving nuclear proliferation. While facing this limitation, the U.S. government since 2003 has made several changes to its policies and procedures related to national enforcement activities that may strengthen its ability to prevent nuclear proliferation networks. Because most of these agencies do not collect or store their data in a manner that would allow them to reliably identify which of their enforcement actions involved nuclear proliferation, it is difficult for agencies to determine the level of resources expended in countering nuclear proliferation networks, as well as the results obtained from these efforts. Since 2005, Commerce and ICE have taken steps to facilitate more reliable identification of their enforcement activities involving nuclear proliferation. 
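The record-keeping limitation summarized above, and described in more detail below, can be illustrated with a minimal sketch. The case records, field names, and codes in the sketch are hypothetical and are not drawn from any actual agency system; the example simply contrasts a keyword or key-code compilation of the kind the agencies described with a count based on a designated, mandatory proliferation category of the kind this report recommends.

# Illustrative sketch only: the case records, field names, and codes below are
# hypothetical and do not come from any actual agency database.

records = [
    # "item_code" and "description" are optional, free-form entries of the kind
    # described in this report; "category" is a designated, mandatory field.
    {"case": 1, "item_code": "06", "description": "dual-use triggered spark gaps", "category": "nuclear"},
    {"case": 2, "item_code": None, "description": "maraging steel shipment", "category": "nuclear"},
    {"case": 3, "item_code": "07", "description": "chemical agent precursors", "category": "chemical"},
    {"case": 4, "item_code": "06", "description": "", "category": "nuclear"},
]

def keyword_or_code_count(rows, keywords=("nuclear",), codes=("06",)):
    """Compile a count the way the agencies described: a record is found only
    if the right code or key word happens to have been entered."""
    count = 0
    for row in rows:
        code_hit = row["item_code"] in codes
        word_hit = any(k in (row["description"] or "").lower() for k in keywords)
        if code_hit or word_hit:
            count += 1
    return count

def category_count(rows, category="nuclear"):
    """Compile a count from a designated, mandatory category field: the result
    does not depend on optional codes or on how the action was described."""
    return sum(1 for row in rows if row["category"] == category)

print(keyword_or_code_count(records))  # 2 -- misses case 2 (no code, no key word)
print(category_count(records))         # 3 -- counts every nuclear-related case

In the sketch, the keyword and key-code search misses a record whose optional fields were left blank or worded differently, while the mandatory category field yields a complete count; the same logic underlies the data collection changes discussed in this report.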
Most of the agencies engaged in export control enforcement activities—DHS, Justice, and Treasury—could not readily produce reliable data representing their respective agency's enforcement actions related to nuclear proliferation. Enforcement data, such as data collected on inspections, seizures, investigations, arrests, indictments, and penalties applied, were often stored according to the law that had been violated or by a category or code describing the item corresponding to the enforcement action, such as the type of good seized. Consequently, agencies compiling enforcement data related to nuclear proliferation often depended on conducting searches of agency databases using key words (e.g., "nuclear") or key codes (e.g., the ICE code for dual-use items is "06"). An accurate compilation of such data depends on several factors, including (1) selecting appropriate key words or key codes for searching the database, (2) using appropriate words or codes to describe the nature of the enforcement action when agency officials record it in the database, and (3) requiring completion of the data fields that would identify the enforcement action as being related to nuclear proliferation. For example, we asked agencies engaged in export control enforcement activities for data on their activities related to nuclear proliferation, with the following results:
CBP compiled data on enforcement activities (seizures) related to nuclear proliferation by engaging in keyword searches of its database. However, a CBP official noted there is not a specific category for dual-use seizures, so these seizures would not be included in the statistics. Moreover, the official stated that one would need to look beyond seizures, for example to inspections, to get a complete picture of CBP activities conducted to combat nuclear proliferation. However, CBP does not have data on inspections conducted for nuclear or WMD proliferation purposes unless the inspection led to a seizure of goods or involved nuclear material, according to DHS officials.
ICE performed a key-code search of its database to produce statistics on closed investigations involving nuclear proliferation. An ICE official said the statistics that ICE compiled likely undercounted the number of investigations involving nuclear proliferation because there is not one single code agents can use to represent nuclear proliferation cases. Rather, there are multiple codes that represent nuclear proliferation, but agents are not required to enter all of them. The ICE official concluded that it would be difficult to correctly identify all nuclear proliferation-related ICE investigations.
In response to our request for enforcement statistics, FBI produced two conflicting sets of statistics on open investigations related to nuclear proliferation. One Bureau official noted that identifying enforcement actions related to nuclear proliferation is not straightforward; rather, it requires Bureau analysts to interpret information about the enforcement action to judge whether it involves nuclear proliferation. In technical comments on a draft of this report, Justice stated that FBI has a classification that defines proliferation investigative activities. This classification can be used to search the FBI's automated case system to determine the exact number of investigative activities and obtain a report on the nature and details of these activities, according to Justice. 
However, two FBI officials told us that it is not possible to search the database to identify all cases related to nuclear proliferation. Compiling data such as the number of cases involving nuclear proliferation and deciding whether cases are related to WMD or nuclear proliferation require an interpretation of the data. Finally, Justice (Executive Office for United States Attorneys) stated its case management database could not sort cases according to nuclear proliferation networks, nuclear proliferation, or WMD proliferation, due to the way the data are stored, but could sort export enforcement data. Furthermore, some agencies that maintain lists of individuals and companies that have violated export control laws or engaged in WMD proliferation could not identify which parties were placed on the lists for nuclear proliferation reasons. For example, Treasury, which maintains a specially designated nationals list containing the individuals and entities that have been designated under its Office of Foreign Assets Control's (OFAC) various sanctions programs, reported it cannot identify all entities that have been placed on the list for nuclear proliferation reasons. Treasury officials said that they maintain records on the rationale for placing an entity on the list, but do not necessarily denote the type of WMD proliferation the entities are engaged in or support. In addition, Treasury confirmed that none of the entities publicly identified in relation to the A.Q. Khan nuclear proliferation network appears on the specially designated nationals list or in the Annex to Executive Order 13382. Commerce stated that it does not maintain readily available information that would allow it to identify individuals or entities placed on its denied persons list for nuclear proliferation reasons. This list includes individuals and entities that have been denied export privileges. In contrast, State reported periodically to Congress that, between 2003 and 2006, it had sanctioned foreign persons for engaging in nuclear proliferation activities with Iran or Syria. Several agencies stated they use their enforcement data to make resource allocation decisions. However, without enforcement data that accurately reflect actions taken to prevent nuclear proliferation, agencies would not be able to make informed resource decisions. Without the ability to reliably identify their enforcement activities involving nuclear proliferation, it is difficult for agencies to accurately track the amount of time and resources expended in countering nuclear proliferation networks, as well as the results obtained from these efforts. Most of these agencies lack performance metrics for assessing the results obtained from their efforts to prevent nuclear proliferation. In contrast, federal standards for internal control state that management should have procedures in place to create performance indicators, monitor results, track achievements in relation to agency plans, and ensure adequate communications with external stakeholders that may significantly impact achieving the agency's goals. Since 2005, two agencies have taken steps to facilitate more reliable identification of their enforcement activities involving nuclear proliferation. In fiscal year 2005, Commerce began classifying enforcement data to identify enforcement actions involving nuclear proliferation. In June 2007, an ICE official proposed modifying ICE's case data collection process to more precisely identify investigations involving nuclear proliferation. 
Thus, the official stated, if implemented, this proposal would allow ICE to better track its performance in combating nuclear proliferation, as well as respond to congressional requests for information. Since 2003, the U.S. government has made several changes to the policies and procedures governing national enforcement activities that may strengthen agencies' ability to combat nuclear proliferation networks. On a national level, the United States endeavors to counter nuclear proliferation by enforcing laws that control the export of materials that could be used to make a nuclear weapon, including dual-use items, and applying criminal or administrative penalties to proliferators. Commerce, DHS, Justice, State, and Treasury carry out these enforcement activities, often in collaboration. Two changes to policies and procedures governing national enforcement activities created new penalties and increased existing penalties for export control violations. In addition, draft legislation developed by the executive branch is intended to further increase penalties and provide some new authorities for one enforcement organization. First, Executive Order 13382, announced in 2005, created an additional nonproliferation sanction program that allows Treasury and State to target the assets of proliferators and those who assist them. Under the executive order, Treasury and State designate individuals or entities that are WMD proliferators, deny them access to the U.S. financial system, and have all their property or interests in property blocked. Initially, the sanction program applied to eight organizations in Iran, North Korea, and Syria. As additional WMD proliferators are designated, they are added to Treasury's specially designated nationals list, which contains the names of individuals and entities that have been sanctioned under OFAC's various sanctions programs. U.S. persons and entities are prohibited from providing support to these proliferators and can be punished with criminal or civil penalties if they are found to be in violation of this prohibition. The executive order is designed to cut off support to proliferators from front companies, financiers, logistical supporters, and suppliers. As of June 15, 2007, 43 persons or entities were on Treasury's specially designated nationals list pursuant to the executive order. Second, the USA Patriot Improvement and Reauthorization Act of 2005 increased the maximum penalties that can be imposed for certain export control violations from $10,000 to $50,000 per violation. Maximum prison sentences increased from 10 years to 20 years. However, according to Commerce statements, these increased penalties are not high enough to deter violators or to provide incentives for violators to cooperate with law enforcement. The Assistant Secretary of Commerce for Export Enforcement recently noted that significantly increased penalty provisions are needed. Third, Congress enacted a law that increased penalties, and the executive branch drafted a legislative proposal intended to further increase penalties and provide some new authorities for one enforcement organization. The International Emergency Economic Powers Enhancement Act was enacted into law on October 16, 2007, and increased the civil and criminal penalties applicable to the violation of OFAC sanctions. 
In addition, the executive branch drafted a legislative proposal, the Export Enforcement Act of 2007, to revise and enhance the Export Administration Act (EAA) and be in effect for 5 years after the date of its enactment. The legislative proposal would increase penalties for export control violations while enhancing Commerce's law enforcement authorities to combat illicit exports of dual-use items. For example, criminal penalty amounts in the proposal would be increased to $1,000,000 per violation or a fine and imprisonment for not more than 10 years, for each violation by an individual, and $5,000,000 or up to 10 times the value of the exports involved, whichever is greater, per violation by a person other than an individual. The civil penalty amounts would be increased to $500,000 for each violation of EAA or any regulation, license, or order issued under that act. According to Commerce, the increased penalty amounts would provide an enhanced deterrent effect. The proposal also would provide Commerce's special agents with statutory overseas investigative authority and expanded undercover authorities and expand the list of criminal violations upon which a denial of export privileges may be based. In 2006, the FBI created a WMD directorate to support and consolidate FBI's WMD components. The directorate was designed to prevent and disrupt efforts by foreign nations or individuals to obtain WMD capabilities and technologies and use them against the United States, according to FBI documents. In addition, FBI officials reported launching several initiatives designed to prevent WMD proliferation. These initiatives include a program focused on dual-use nuclear technology, as well as country-specific WMD counterproliferation efforts in national labs and other U.S. entities. However, FBI did not provide information on the impact of these activities on FBI's ability to counter WMD and nuclear proliferation. In technical comments on a draft of this report, Justice stated that FBI has information to provide but was not given the opportunity to do so. FBI's WMD Directorate can provide information on this impact by providing limited information on accomplishments and statistics on a number of proliferation investigations and operations, according to Justice. However, on June 15, 2007, we asked FBI officials about the impact of either the establishment of the WMD directorate or the WMD initiatives on FBI's ability to counter WMD and nuclear proliferation, but they provided no answer, nor would they meet with us to discuss related issues. In late June, FBI provided us with a written response that included no specific information that answered our request. To respond to the threat of nuclear proliferation, Justice is preparing a national export enforcement initiative that department officials stated is intended to improve the investigation and prosecution of persons and corporations violating U.S. export control laws. The initiative follows the 2006 creation of the National Security Division within Justice to strengthen the effectiveness of its national security efforts and, according to a Justice official, to respond to the threat of WMD proliferation. As we have previously reported, U.S. Attorneys' Offices have many competing priorities, including prosecuting cases involving terrorism, counterterrorism, and government contractor fraud, and the level of interest in and knowledge of export control laws varies among assistant U.S. Attorneys. According to the U.S. 
Attorney General, one of the key elements of the initiative will be to provide federal prosecutors with the assistance, training, and expertise they need to undertake export control prosecutions. For example, Justice held a national export control conference in May 2007. The following month, Justice appointed its first National Export Control Coordinator, who will be responsible for coordinating with other U.S. agencies the enforcement of export controls and development of training materials for prosecutors in an effort to enhance their capacity and expertise. The impact of the export enforcement initiative on Justice’s ability to prosecute export control cases is yet to be demonstrated as the initiative has just begun. Although the U.S. government has announced that countering nuclear proliferation and nuclear networks is a high priority, it lacks the necessary information to assess the impact of its multiple efforts to do so. While U.S. assistance to foreign governments to help them strengthen their laws and regulations against nuclear proliferation networks has the potential for positive impact, U.S. agencies are not sufficiently monitoring aid recipients’ actions to assess what U.S. assistance is accomplishing. State’s assistance program is not completing and documenting risk analyses or program assessments, as required by program guidance. In addition, U.S. government agencies that engage in enforcement activities to counter nuclear proliferation networks are impaired from judging their progress in this effort because they cannot readily identify basic information on the number, nature, or details of their enforcement activities involving nuclear proliferation. Without such information, agencies cannot identify what their efforts are, assess how their efforts are working, or determine what resources are necessary to improve their effectiveness. Developing such information would be a necessary first step for U.S. agencies in beginning to assess how well their efforts to combat nuclear proliferation networks are working. As of October 2007, these agencies may not know whether their capabilities for addressing the problem of nuclear proliferation networks have improved. To help assess the impact of the U.S. response to the threat of nuclear proliferation networks, we recommend that the Secretary of State take the following two actions: (1) comply with its guidance to conduct periodic assessments of proliferation risk and the export control system for each country receiving EXBS funding and (2) document each risk analysis conducted to evaluate the progress made in alleviating those risks. To help assess how U.S. government agencies that engage in export control enforcement activities are accomplishing their stated goal of combating nuclear proliferation, we recommend that the Secretaries of Commerce, Homeland Security, and Treasury, and the U.S. Attorney General individually direct that their respective agency’s data collection processes be modified to support the collection and analysis of data that clearly identify when enforcement activities involve nuclear proliferation. For example, each agency could consider designating appropriate categories or codes for nuclear proliferation for staff to use when recording information in the databases and mandating completion of relevant data fields that would identify an enforcement action as related to nuclear proliferation. We provided copies of this report to Commerce, Defense, DHS, Energy, Justice, State, and Treasury. 
Commerce, DHS, State, and Treasury provided written comments. Justice provided us with technical comments that we incorporated in the report, as appropriate. Defense and Energy did not comment on the draft. In its comments on a draft of this report, Commerce stated, first, that the report did not identify what it means by enforcement activities involving nuclear proliferation. Second, Commerce stated that the report should present the President’s 2004 nonproliferation proposals to NSG exactly as stated. Finally, Commerce stated that the recommendation to modify relevant databases to support the collection and analysis of data that clearly identify when enforcement activities involve nuclear proliferation should not be directed to it because the report recognizes that it already has this capability. Moreover, it said that Commerce officials could take names from its denied persons list, which does not indicate the reason for listing the name, and query the relevant database to identify whether the name was listed for nuclear proliferation reasons. First, we did identify what is meant by enforcement activities on page 8 of this report to include inspecting items to be shipped, investigating potential violations of export control laws, and punishing export control violators. We asked Commerce officials to identify when such activities involved nuclear proliferation but they indicated certain actions for which they could not. Second, we shortened the description of the President’s 2004 proposals for brevity and clarity. Moreover, Commerce’s description of the proposals does not match the text of the proposals as originally presented in the President’s speech. Finally, while our report recognized that Commerce had developed the capability that we recommend for its database, we included Commerce in the recommendation because its various lists, such as the denied persons list, cannot identify names included for nuclear proliferation reasons. Commerce indicated to us that because the database and denied persons list were not linked, providing such information would have been difficult and require a case-by-case analysis. As a result, Commerce did not provide us with this requested data. In its comments, DHS agreed with the substance of the report and concurred with the overall recommendations. DHS described specific actions that it took in September 2007 to identify seizures in the relevant database that involve nuclear proliferation. It also described modifications that it intends to make by the end of 2007 to identify examinations of cargo involving nuclear proliferation issues. In commenting on a draft of this report, State partially concurred with our recommendation that it should (1) comply with its guidance to conduct periodic assessments of proliferation risk and the export control system for each country receiving EXBS funding and (2) document each risk analysis conducted to evaluate the progress made in alleviating those risks. State commented that it recognizes the value of taking a more standardized approach to assessing program countries on a regular basis as a means of refining assistance efforts and evaluating progress. Therefore, State said that it will set clear guidelines for when assessments and reassessments should occur. State also said that it recognizes the value in documenting in one place all risk analyses and the process by which they are reached and will do so in a revised publication of its EXBS program strategic plan. 
State disagreed with our finding that it did not conduct program assessments for about 60 percent of its participating countries, asserting that it conducted program assessments for all six of the countries in the scope of our review that received EXBS funding. State said that it used various means to assess its program other than its revised assessment tool designed for this purpose. We reiterate our finding that State did not conduct program assessments using its designated tool for two of the six countries in our study that received EXBS assistance. More importantly, these assessments do not evaluate the impact of U.S. training on the country, as recommended by federal guidance for evaluating human capital training. This guidance calls for assessing the extent to which training and development efforts contribute to improved performance and results. State also disagreed with our finding that it did not perform risk analyses for 11 of the 56 countries in its program for fiscal years 2003 through 2006. It stated that the country risk assessment summary in its program strategic plan included only those countries for which funds were requested at the time the plan was prepared and the summary was never intended as a comprehensive source of all risk analyses. However, the State official responsible for EXBS did not provide this explanation and said the risk summary does not change unless there is new information. Furthermore, we found that this explanation of the risk assessment summary is not consistent. At least one country was included in the summary even though it received no EXBS funding throughout this period and at least four other countries were not listed although they did receive EXBS funding. Treasury did not comment on our recommendations. However, Treasury stated that it can and does identify which entities have been designated for nuclear proliferation reasons at the time of designation. However, this statement misses our point. As our report stated, U.S. government agencies that engage in enforcement activities to counter nuclear proliferation networks are impaired from judging their progress in this effort because they cannot readily identify basic information on the number, nature, or details of their enforcement activities involving nuclear proliferation. If Treasury cannot readily retrieve this information, then the information is not useful for assessing the impact of its sanctions specifically on nuclear proliferators. Despite its assertion, Treasury did not provide us with a list of all listed entities designated for nuclear proliferation reasons, as we had requested. In commenting on our finding that Treasury did not designate any entities publicly identified with the A.Q. Khan network, Treasury stated that its designation decisions involve an interagency process that identifies, assesses, and prioritizes targets. Therefore, it appears that Treasury did not designate any A.Q. Khan network entities because an interagency process did not identify and assess them as priority targets. We are sending copies of this report to interested congressional committees and the Secretaries of Commerce, Defense, Energy, Homeland Security, Justice, State, and Treasury. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-8979 or at christoffj@gao.gov. 
Staff acknowledgments are listed in appendix VI. To meet our objectives, we reviewed program documentation and interviewed knowledgeable officials from key U.S. agencies: the Departments of Commerce, Defense (DOD), Energy, Homeland Security (DHS), Justice, State, and Treasury. To identify the status of U.S. efforts to strengthen multilateral controls to counter nuclear proliferation networks, we reviewed program documentation and interviewed knowledgeable officials from key U.S. agencies: DOD, Energy, and State. We also met with acknowledged nonproliferation experts to discuss U.S. proposals announced in 2004 and their applicability to addressing nuclear proliferation networks. The experts included two former Assistant Secretaries of State for Nonproliferation and experts from the following institutions: Center for Contemporary Conflict, National Security Affairs Department, Naval Postgraduate School in Monterey, California; Center for International Trade and Security at the University of Georgia, Athens, Georgia; Center for Nonproliferation Studies at The Monterey Institute of International Studies, Washington, D.C.; Center for Strategic and International Studies, Washington, D.C.; Georgetown University, Edmund A. Walsh School of Foreign Service, Washington, D.C.; Heritage Foundation, Washington, D.C.; Nuclear Threat Initiative, Washington, D.C.; and Wisconsin Project on Nuclear Arms Control, Washington, D.C. We tried to visit the U.S. Mission to the International Atomic Energy Agency, officials of the International Atomic Energy Agency, and foreign government representatives to the Nuclear Suppliers Group, all in Vienna, Austria, to discuss various U.S. proposals and other efforts to strengthen activities to combat nuclear proliferation networks. While State agreed after months of negotiation to facilitate our proposed travel to Vienna, it did not do so within an acceptable time frame. Furthermore, citing diplomatic sensitivities, State proposed restrictions on which U.S. and foreign officials we could meet and on what subjects we could discuss, thus causing considerable delays in completing our work. To assess the impact of U.S. bilateral assistance to help other countries improve their legal and regulatory controls against nuclear proliferation networks, we reviewed program documentation and interviewed knowledgeable officials from key U.S. agencies: DOD, Energy, and State. To evaluate the amount of assistance provided overall and to the seven countries associated with nuclear networks in our study (Malaysia, Pakistan, Republic of Korea, Singapore, South Africa, Turkey, and United Arab Emirates), we obtained and reviewed financial data from DOD, Energy, and State, and interviewed agency officials about these data. We determined that these data were sufficiently reliable for the purposes of this report. We also reviewed program assessment documentation to the extent that it was available in Washington, D.C. We interviewed knowledgeable DOD, Energy, and State officials about the impact and outcomes of these programs. We also contacted the embassies in Washington, D.C., of the governments of Malaysia, Pakistan, Republic of Korea, Singapore, South Africa, Turkey, and United Arab Emirates to obtain their perspectives on U.S. assistance. However, only the government of Singapore responded to our request for information. To assess the impact of U.S. 
efforts to strengthen its national enforcement activities to combat nuclear proliferation networks, we reviewed documentation and met with officials of the Departments of Commerce, DHS, Justice, State, and Treasury in Washington, D.C. We also spoke by phone with DHS/Immigration and Customs Enforcement attachés stationed in Bern, Switzerland, and Vienna, Austria, regarding their roles in enforcing U.S. export control laws for cases related to nuclear proliferation. Also, we reviewed statistical data and descriptions of enforcement cases from Commerce, DHS, and Justice, when available, to try to determine how many cases involved nuclear proliferation and how such information was used to assess agencies’ activities. We also reviewed data on Commerce, State, and Treasury sanctions against identified WMD proliferators. The information on foreign law in this report does not reflect our independent legal analysis, but is based on interviews and secondary sources. We focused our review on countries that, according to open-source reporting, were involved in the A.Q. Khan network. These include Malaysia, Pakistan, Republic of Korea, Singapore, South Africa, Turkey, and Dubai in the United Arab Emirates. We did not travel to these countries because State cited foreign policy sensitivities of ongoing diplomatic discussions in these countries. It is important to note that the level of cooperation State provided on this review was erratic and resulted in a delay of several months in completing our work. Nonetheless, with information available from other sources, we were able to address the review’s objectives. For the purposes of this report, we reviewed U.S. programs and activities that involved export controls and their enforcement, as nuclear networks typically engage in acts that violate or circumvent national and international export controls. We conducted our review from September 2006 through August 2007 in accordance with generally accepted government auditing standards. The following are GAO’s comments on the Department of Commerce’s letter dated October 15, 2007. 1. We disagree with Commerce’s statement that the draft report did not identify what it means by “enforcement activities involving nuclear proliferation.” First, we did identify what is meant by enforcement activities on page 8 of this report to include inspecting items to be shipped, investigating potential violations of export control laws, and punishing export control violators. We asked Commerce officials to identify when such activities involved nuclear proliferation, but they indicated that there were certain actions for which they could not do so. 2. We disagree with Commerce’s comment that our description of the President’s proposal to the NSG was not clear. We had simplified and shortened the proposals to make them clear and free from jargon. 3. We disagree with Commerce’s comment that our draft statement—that Commerce does not maintain readily available information that would allow it to identify individuals or entities placed on its denied parties list for nuclear proliferation reasons—is true but misleading. Commerce said the purpose of this list is to readily identify persons who are denied export privileges, and it further explained that its agents can query names from the list to determine the reason individuals were denied export privileges. 
However, when we requested that Commerce provide such a list, Commerce indicated that it had not previously conducted such a review, did not maintain readily available information, and could not readily create a list of individuals who have been denied export privileges for nuclear proliferation reasons. 4. In comments on a draft of this report, Commerce stated that the recommendation to modify its data collection processes to clearly identify when enforcement activities involve nuclear proliferation should not be directed to it. Commerce stated that the report recognized that it already has appropriate categories or codes for nuclear proliferation for staff to use when recording information in the databases and already mandates completion of relevant data fields that would identify an enforcement action as related to nuclear proliferation. However, we directed the recommendation to Commerce because its various lists, including the denied persons list, cannot identify when names are listed for nuclear proliferation purposes. Commerce acknowledged this deficiency when it was unable to provide this type of information when we requested it. The following are GAO’s comments on the Department of State’s letter dated October 17, 2007. 1. We disagree with State’s comment explaining why it did not conduct program assessments for about 60 percent of its participating countries. State said that it also relies on an interagency assessment of a country at the early stages of engagement with the program and on a variety of open source information, studies by nongovernmental research organizations, and information from other U.S. agencies. State did not indicate in its comments what percentage of contractor program assessments had been completed, and it produced no documentation of these other assessments. Moreover, in earlier documents State explicitly informed us that the contractor assessment tool is the current survey tool EXBS uses to provide a formal and full assessment. 2. We disagree with State’s comments that it assesses program progress despite the absence of a contractor assessment. State’s EXBS strategic plan, written responses to our questions, and discussions with the key EXBS official whom State designated to meet with us emphasized the contractor program assessments as the tool to be used for a full assessment of a country’s progress, as well as for planning purposes and establishing a baseline of a country’s capabilities and needs. The strategic plan describes the contractor’s assessment tool as compiling data and analysis from all sources to assist State in measuring performance broadly by evaluating progress made between assessments. State’s written response to us stated that EXBS tracks the performance of the foreign government in its development of strategic trade controls using the assessment tool. 3. We agree with State’s comment that its program planning takes into account other information, including open source information, diplomatic reporting from posts, intelligence community products, and assessments and information from other U.S. government agencies. We have added language to the report to reflect this. 4. State commented that EXBS officials have access to and factor into their planning process assessments by other U.S. agencies, such as Energy’s INECP, which receives some EXBS funding. 
While we commend such interagency collaboration, we note that any Energy program assessments are relevant only to its training and courses provided in support of EXBS, not to the EXBS program as a whole. Furthermore, the evidence that State provided in its meetings with us, its written response to our questions, and its EXBS strategic plan discusses interagency coordination in planning, but not in assessing the contributions made by the EXBS program to particular countries. 5. We disagree with State’s comment that risk analyses have been conducted and documented for each country that received or is receiving assistance under EXBS and that we based our findings solely on the EXBS strategic plan. In addition to the strategic plan, we relied on State’s written response to questions we posed on the subject and meetings with State EXBS officials. As we stated in our report, the EXBS strategic plan did not identify a risk level for 11 of the 56 countries to which it provided assistance between fiscal years 2003 and 2006. 6. We disagree with State’s comment that our report was inconsistent because it included information on EXBS assistance to six of the seven countries where A.Q. Khan network activity was reported to have occurred as well as other countries receiving EXBS assistance. We included statistical information on the total number of EXBS program assessments to place the data on the seven countries into an overall perspective. 7. We partially agree with State’s comment that it would be more clear to say that the EXBS country risk assessment summary did not include two of the countries in which network activities are alleged to have occurred. We cannot confirm State’s assertion that a risk analysis was done for one of these countries. State provided no documentation to support this point. 8. We disagree with State’s comment that the absence of a country from the risk summary table in the EXBS strategic plan does not mean a risk analysis was not done. State provided no evidence that it had conducted a risk analysis for this country, and the State official designated to speak for the program said there was no documentation for the analyses. 9. We disagree with State’s comment that State subsequently requested and received missing program assessment data in December 2006 that the contractor had not initially provided to support assessment results. State provided no evidence to support this comment and it directly contradicts information provided to us by the cognizant State official. 10. We disagree with State’s comment that its EXBS program assessments generally highlight the relationship between assistance efforts and progress in specific countries. In a written response to our questions in February 2007, State highlighted the difficulties in doing so. Also, during the course of our review, State said that EXBS does not systematically track information on changes to a country’s laws for the purpose of showing the effectiveness of the EXBS program because it is difficult analytically to create a good design for doing so. Nonetheless, State said in its comments on a draft of this report that formal reassessments of countries are needed to more accurately and regularly measure progress. 11. We disagree that State made a sincere and good faith effort to cooperate with our review of nuclear proliferation networks. The level of cooperation State provided on this review was erratic and resulted in a delay of several months in completing our work. 
While State agreed after months of negotiation to facilitate our proposed travel to Vienna, it did not do so within an acceptable time frame and delayed providing some requested documents for several months. Nonetheless, with information available from other sources, we were able to address the review’s objectives. 12. These findings were not directed to State. The agencies to which they were directed did not raise a concern about access to classified information, and none of these agencies disagreed with our recommendation. 13. State commented that our draft should note that Pakistan passed its export control law in 2004. We have added this language to the report. 14. We disagree with State’s comment that referring to PSI as a multilateral body ascribes a formality to the PSI that does not exist and that the U.S. has never sought to create. Given our previous classified report on PSI, we would not ascribe any more formality to PSI than appropriate. We recognized that this lack of formality contributed to management deficiencies in U.S. PSI activities, and Congress required in Public Law 110-53 that corrective action be taken. 15. We agree with State’s statement that certain states or their governments were not involved in proliferation network activities; only private entities in these countries were alleged in open-source reporting to have been involved in proliferation network activities. We included clarifying language, accordingly. 16. We disagree with State’s comment that we should report that more than 80 countries are PSI participants. As we reported in an unclassified section of our report on PSI, State did not provide us with documentation to demonstrate any precise number of countries that expressed support for PSI. The following are GAO’s comments on the Department of the Treasury’s letter. 1. We disagree with Treasury’s statement that because the scope of our study covered countries where A.Q. Khan operated, it likely skewed the results. The request for our review directly asked us to assess the U.S. government response to the A.Q. Khan network. Therefore, it was methodologically appropriate to focus on countries where such network activities reportedly occurred, and it would have been fruitless to focus a review of the U.S. response to nuclear networks on countries where such activity has not occurred. 2. We disagree with Treasury’s assertion that it is able to identify which of its designations are related to nuclear proliferation and could similarly identify any civil penalties imposed based on the violation of OFAC sanctions. Treasury officials stated to us that they could not conduct a keyword search to identify entities that had been designated for nuclear proliferation reasons. One official emphasized that Treasury lacks the ability to definitively identify whether a given entity was designated for nuclear proliferation reasons. Treasury officials noted that they keep records on the rationale for an entity’s designation, but they do not necessarily record what type of WMD proliferation the entity is involved in, if any. Despite its assertion, Treasury could not readily retrieve this information when we requested it and did not provide us with a complete list of entities designated for nuclear proliferation reasons. 3. Treasury’s statement that it can and does identify which entities have been designated for nuclear proliferation reasons at the time of designation misses our point. 
It stated that entities or individuals designated under Executive Order 13382 are listed on OFAC’s web site and specially designated nationals’ list with the specific identification of “NPWMD.” During our review, Treasury could not readily retrieve this information specifically for nuclear proliferation designations. 4. In commenting on our finding that Treasury did not designate entities publicly identified with the A.Q. Khan network, Treasury stated that its designation decisions involve an interagency process that identifies, assesses, and prioritizes targets. Given the absence of these names, Treasury’s statement suggests that the interagency process did not identify and assess entities of the A.Q. Khan network as priority targets. 5. We agree with Treasury’s comment on the footnote on OFAC’s sanctions programs and have added clarifying language, accordingly. 6. We have modified the language in the draft to reflect Treasury’s comment. 7. We have changed this language, accordingly. 8. We believe that the language of our draft accurately reflects the meaning of Treasury’s proposed rewording in a more concise fashion. Thus, we have not modified the language of our report. 9. We have modified language in the report to reflect Treasury’s updated information on enactment of the International Emergency Economic Powers Enhancement Act. Muriel J. Forster, Assistant Director; Jeffrey D. Phillips; Leah DeWolf; Jennifer L. Young; Lynn Cothern; Mark B. Dowling; Mark C. Speight; and Martin De Alteriis made key contributions to this report. | For decades, the United States has tried to impede nuclear proliferation networks that provide equipment to nuclear weapons development programs in countries such as Pakistan and Iran. GAO was asked to examine U.S. efforts to counter nuclear proliferation networks, specifically the (1) status of U.S. efforts to strengthen multilateral controls, (2) impact of U.S. assistance to help other countries improve their legal and regulatory controls, and (3) impact of U.S. efforts to strengthen its enforcement activities. GAO's findings focused on seven countries where network activities reportedly occurred. The United States has advocated several multilateral actions to counter nuclear proliferation networks. Although multilateral bodies have adopted some U.S. proposals, they have not adopted others. For example, the United States negotiated passage of a United Nations Security Council resolution that obligated all member states to adopt laws and regulations prohibiting the proliferation of weapons of mass destruction. It also led the development of watch lists of nuclear technologies that are not formally controlled by states and formation of a multilateral unit intended to analyze covert nuclear trade activities. However, one multilateral body has not adopted two key U.S. proposals made in 2004 to commit its members to add new restrictions on exporting sensitive nuclear technologies. Also, one multilateral organization has not adopted a recommendation for member states to provide it with more export data that would allow it to better detect covert nuclear activities. The impact of U.S. bilateral assistance to strengthen countries' abilities to counter nuclear networks is uncertain because U.S. agencies do not consistently assess the results of this assistance. 
The impact of this assistance is difficult to determine because the Department of State did not evaluate either (1) the proliferation risk for all of the countries in which network activities are alleged to have occurred or (2) the results of its assistance efforts. Between 2003 and 2006, State and the Department of Energy provided about $9 million to improve the export controls of seven countries in which nuclear proliferation network activities reportedly occurred. State did not perform risk analyses for 11 of the 56 countries in its program for those years and did not document the basis for each country's proliferation threat level or explain how the risk analyses were done. Of the six countries in our study to which State provided assistance, State performed risk analyses for five. Also, State did not conduct program assessments for about 60 percent of its participating countries and for two of the six countries in our study that received assistance. Moreover, while State's program assessments characterize a country's export control system and its weaknesses, they do not assess how U.S. training efforts contributed to correcting weaknesses. Relevant U.S. agencies are impaired from judging their progress in preventing nuclear networks because they cannot readily identify basic information on the number, nature, or details of all their enforcement activities involving nuclear proliferation. The U.S. government identified the prevention of nuclear proliferation as a high priority. U.S. agencies collect information, maintain lists of companies and individuals that they sanction, and maintain case files on investigations of suspected violations of U.S. law. However, most of these agencies cannot readily identify which enforcement activities involve nuclear proliferation as they cannot ensure that searching their case file databases for words, such as nuclear, would reveal all relevant cases. |
Some of the TARP performance audit recommendations we made were program-specific, while others addressed crosscutting issues such as staffing and communications. Our program-specific recommendations focused on the following TARP initiatives: Bank investment programs: The Capital Purchase Program (CPP) was designed to provide capital to financially viable financial institutions through the purchase of preferred shares and subordinated debentures. Community Development Capital Initiative provided capital to Community Development Financial Institutions by purchasing preferred stock. Capital Assessment Program (CAP) was created to provide capital to institutions not able to raise it privately to meet Supervisory Capital Assessment Program—or “stress test”—requirements. This program was never used. Credit market programs: Term Asset-backed Securities Loan Facility (TALF) provided liquidity in securitization markets for various asset classes to improve access to credit for consumers and businesses. SBA 7(a) Securities Purchase Program provided liquidity to secondary markets for government-guaranteed small business loans in the Small Business Administration’s (SBA) 7(a) loan program. American International Group (AIG) Investment Program (formerly Systemically Significant Failing Institutions Program) provided support to AIG to avoid disruptions to financial markets from AIG’s possible failure. Automotive Industry Financing Program aimed to prevent a significant disruption of the American automotive industry through government investments in the major automakers. Home Affordable Modification Program (HAMP) divides the cost of reducing monthly payments on first-lien mortgages between Treasury and mortgage holders/investors and provides financial incentives to servicers, borrowers, and mortgage holders/investors for loans modified under the program. Principal Reduction Alternative (PRA) pays incentives to mortgage holders/investors for principal reduction in conjunction with a HAMP loan modification for homeowners with a current loan-to-value ratio exceeding 115 percent. The Second-Lien Modification Program (2MP) provides incentives for second-lien holders to modify or extinguish a second-lien mortgage when a HAMP modification has been initiated on the first-lien mortgage for the same property. Home Affordable Foreclosure Alternatives (HAFA) provides incentives for short sales and deeds-in-lieu of foreclosure as alternatives to foreclosure for borrowers unable or unwilling to complete the HAMP first-lien modification process. Housing Finance Agency Innovation Fund for the Hardest Hit Housing Markets (Hardest Hit Fund or HHF) supports innovative measures developed by state housing finance agencies and approved by Treasury to help borrowers in states hit hardest by the aftermath of the housing crisis. Federal Housing Administration’s (FHA) Short Refinance Program provides underwater borrowers—those with properties that are worth less than the principal remaining on their mortgage—whose loans are current and are not insured by FHA with the opportunity to refinance into an FHA-insured mortgage. As of August 22, 2016, our performance audits of the TARP programs resulted in 74 recommendations to Treasury. Treasury implemented 62, or approximately 84 percent. Three of the implemented recommendations were closed based on actions taken by Treasury since we last reported on the status of our TARP recommendations in September 2015—all three were related to the Making Home Affordable (MHA) housing programs. Five recommendations remain open. 
Treasury partially implemented three of the recommendations (that is, took some steps toward implementation) and had not taken any steps to implement the remaining two recommendations. Each of the five recommendations for which Treasury took some or no implementation steps was directed at MHA housing programs. Seven recommendations have been closed as not implemented because we determined that they were outdated and no longer applicable due to the evolving nature of the programs. Of these recommendations, one has been closed since our September 2015 report and was directed at CPP. Three of these seven recommendations were related to CPP and two to MHA programs. As of August 22, 2016, Treasury implemented six of nine recommendations for CPP. For example, we recommended that Treasury apply lessons learned from CPP implementation to similar programs, such as the Small Business Lending Fund (SBLF)—specifically, by including a process for reviewing regulators’ viability determinations of eligible applicants. Treasury changed the SBLF process to include additional evaluation by a central application review committee for all eligible applicants who had not been approved by their federal regulator. Treasury also took steps to provide information from its evaluation to the regulator when their views differed. These steps should help ensure that applicants will receive consistent treatment across different regulators. Since August 2015, one of our CPP-related recommendations was closed without being implemented by Treasury. In March 2012, we recommended that the Secretary of the Treasury consider analyzing and reporting on remaining and former CPP participants separately. In particular, we noted that remaining CPP institutions tended to be less profitable and held riskier assets than other institutions of similar asset size. We analyzed financial data on 352 remaining CPP institutions and 256 former CPP institutions that exited CPP and found that the remaining CPP institutions had significantly lower returns on average assets and higher percentages of noncurrent loans than former CPP and non-CPP institutions. They also held less regulatory capital and reserves for covering losses. Although our analysis found differences in the financial health of remaining and former CPP institutions, we noted that Treasury’s quarterly financial analysis of CPP institutions did not distinguish between them. Treasury said it is not likely to consider analyzing and reporting on remaining and former CPP participants separately. Treasury believes that providing information about the financial position of institutions in CPP is unnecessary because it is publicly available to interested parties through regulatory filings or other sources. We closed this recommendation as not implemented because circumstances have changed significantly since we made the recommendation. Specifically, 363 institutions were in CPP around the time we made the recommendation, and the analysis we recommended was intended to provide Treasury with useful information about the relative likelihood of remaining institutions repaying their investments and exiting CPP. However, Treasury has been winding down CPP. When we last conducted an analysis of the financial condition of the 16 institutions that remained in CPP as of February 2016, most of them continued to exhibit signs of financial weakness. Treasury officials recognized that the remaining CPP firms generally have weaker capital levels and worse asset quality than firms that exited the program. 
They further noted that this situation was a function of the life cycle of the program, because stronger institutions had greater access to new capital and were able to exit, while the weaker institutions have been unable to raise the capital needed to exit the program. Treasury believes that the remaining institutions likely will not be able to repay their investments in full. Consequently, we determined that the recommendation was no longer applicable. As of August 22, 2016, Treasury implemented 22 of our 29 MHA-related recommendations. Three of the 22 recommendations were implemented after our September 2015 report on the status of TARP recommendations: In February 2014, we recommended that Treasury ensure that its MHA compliance agent assess servicer compliance with Limited English Proficiency (LEP) relationship management guidance, once it was established. Treasury issued clarifying LEP guidance to MHA program servicers in April 2014. In June 2016, Treasury provided us with copies of the final report on the results of the compliance reviews of the larger MHA program servicers’ implementation of LEP guidance. Treasury also provided the specific examination policies and procedures used by the MHA compliance agent in its reviews of program servicers’ implementation of LEP requirements. In October 2014, we recommended that Treasury conduct periodic evaluations to help explain differences among MHA servicers in the reasons for denying applications for trial modifications. Since the issuance of that report, Treasury conducted two denial reason rate reviews in 2015—one looking at 11 MHA servicers with a high concentration of various denial reasons and the other looking at 7 MHA servicers—to understand the prevailing reasons for their use of specific denial reason codes (ineligible mortgage, request incomplete, and offer not accepted/withdrawn). According to Treasury officials, the results of these and other evaluations helped inform Treasury’s decision to implement Streamline HAMP to help address the most common denial reason (i.e., failure to submit required documentation). Treasury also began conducting quarterly compliance reviews at the largest MHA servicers to verify the accuracy of denial reasons reported. In March 2016, we recommended that Treasury review potential unexpended balances by estimating future expenditures of the MHA program, which would impact its lifetime cost estimate for the program. Treasury stated that it historically assumed that all funds obligated for MHA would be spent in furtherance of its mandate under EESA to preserve homeownership and protect home prices. In February 2016, following the enactment of legislation that terminates MHA on December 31, 2016, Treasury lowered the lifetime cost estimate for MHA from $29.8 billion to $25.1 billion, which Treasury said would continue to be reflected in its public reports on TARP. Treasury has taken some actions to address three of the open recommendations directed at the MHA programs: In June 2010, we recommended that Treasury expeditiously report activity under PRA, including the extent to which servicers determined that principal reduction was beneficial to investors but did not offer it, to ensure transparency in the implementation of this program feature across servicers. Starting with the monthly MHA performance report for the period through May 2011, Treasury began reporting summary data on the PRA program. 
Specifically, Treasury provides information on PRA trial modification activity as well as median principal amounts reduced for active permanent modifications. In addition, Treasury’s public MHA loan-level data files include information on the results of analyses of borrowers’ net present value under PRA and indicate whether principal reduction was part of the modification. While this information would allow interested users with the capability to analyze the extent to which principal reduction was beneficial but not offered overall, it puts the burden on others to analyze and report the results publicly. Also, the publicly available data do not identify individual servicers and thus cannot be used to assess the implementation of this program feature across servicers. Our recommendation was intended to ensure transparency in the implementation of this program feature across servicers, which would require that information be reported on an individual servicer basis to allow comparison between servicers and highlight differences in the policies and practices of individual servicers. We maintain that Treasury partially implemented this recommendation and should take action to fully implement it. In October 2014, we recommended that Treasury conduct periodic evaluations using analytical methods, such as econometric modeling, to help explain differences among MHA servicers in redefault rates. Such analyses could help inform compliance reviews, identify areas of weaknesses and best practices, and determine the need for additional program policy changes. Treasury subsequently conducted an analysis to compare redefault rates among servicers and to determine whether servicers’ portfolio of HAMP-modified loans performed at, above, or below expectations for the metrics analyzed. Although they performed these analyses, Treasury officials maintained that such analyses are inherently limited and therefore they did not intend to repeat them. However, by not periodically conducting analyses of differences in servicer redefault rates and capitalizing on the information these methods provide, Treasury risks making policy decisions based on potentially incomplete information and may miss opportunities to identify best practices to assist the greatest number of eligible borrowers. Thus, we continue to maintain that Treasury should take action to fully implement this recommendation. This will continue to be important after December 2016, when the HAMP program is closed to new entrants, since borrowers are eligible for up to 6 years of pay-for-performance incentives if they are able to maintain good standing on their modified loan payments. In March 2016, we recommended that Treasury deobligate funds that its review showed likely would not be expended. Treasury’s most recent estimates identified $4.7 billion in potential excess funds, of which Treasury has deobligated $2 billion as of August 22, 2016. For the additional $2.7 billion in potential excess funds Treasury identified, Treasury stated that once servicers report all final transactions to the MHA system of record in late 2017, it plans to calculate the maximum potential expenditures under MHA and deobligate any estimated excess funds at that time, as appropriate. 
Given the uncertainties in estimating future participation and the associated expenditures—in particular, the effect of Streamline HAMP, which was not fully implemented at the time of Treasury’s last program estimate—it will be important for Treasury to update its cost estimates as additional information becomes available and take timely action to deobligate likely excess funds. Finally, Treasury has not taken action to address two open recommendations directed at MHA: In February 2014, we recommended that Treasury require its MHA compliance agent to take steps to assess the extent to which servicers established internal control programs that effectively monitored compliance with fair lending laws applicable to MHA programs. As we noted in the report, both the MHA Servicer Participation Agreement and MHA Handbook require that servicers have an internal control program to monitor compliance with relevant consumer protection and fair lending laws. In April 2014, Treasury officials stated that they planned to continue efforts to promote fair lending policies. However, they noted that they believed that the federal agencies with supervisory authority remain in the best position to monitor servicer compliance with fair lending laws and that they did not plan to implement this recommendation. Representatives of the federal regulators said that their fair lending reviews have a broader overall focus that may not specifically focus on MHA activities. Moreover, our analysis identified some statistically significant differences among four large MHA program servicers in the number of denials and cancellations of trial modifications and in the potential for redefault of permanent modifications for certain protected groups. Evaluating the extent to which servicers have developed and maintained internal controls to monitor compliance with fair lending laws could give Treasury additional assurances that servicers have implemented MHA programs in compliance with fair lending laws. In July 2015, we recommended that Treasury develop and implement policies and procedures to better ensure that changes to TARP-funded housing programs are based on evaluations that comprehensively and consistently met the key elements of benefit-cost analysis. Treasury agreed that it is important to assess the benefits and costs of proposed program improvements, and that it would continue to consider the costs of program enhancements and balance those considerations with the overall objective of helping struggling homeowners. Treasury also noted that, given the scheduled application deadline for MHA on December 30, 2016, it did not anticipate making significant policy changes to the MHA programs. Although the deadline for new MHA program applicants is December 30, 2016, elements of the MHA program will remain in effect after the application deadline. For example, in the case of a HAMP loan modification, borrowers are eligible for program benefits for up to 6 years. As such, we continue to maintain that Treasury should take action to fully implement the partially implemented and open recommendations. We will continue to assess the status of these recommendations considering program activity and actions taken by Treasury. We provided a draft of this report to Treasury for comment. Treasury had no formal or technical comments on the draft report. We are sending copies of this report to the appropriate congressional committees. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. 
If you or your staff have any questions about this report, please contact me at (202) 512-8678 or garciadiazd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. The following table summarizes the status of our TARP performance audit recommendations as of August 22, 2016. We classify each recommendation as implemented, partially implemented (the agency took steps to implement the recommendation but more action would be required to fully implement it), open (the agency had not taken steps to implement the recommendation), and closed, not implemented (the agency decided not to take action to implement the recommendation and we no longer consider the recommendation relevant). The recommendations are listed by report. In addition to the contact named above, Harry Medina (Assistant Director), Jason Wildhagen (Analyst-in-Charge), Anne Akin, Bethany Benitez, John Karikari, Barbara Roesmann, Mathew J. Scirè, Jena Sinkfield, and Karen Tremba made key contributions to this report. | The Emergency Economic Stabilization Act of 2008 (EESA) authorized the creation of TARP to address the most severe crisis that the financial system had faced in decades. Treasury has been the primary agency responsible for TARP programs. EESA provided GAO with broad oversight authorities for actions taken under TARP and included a provision that GAO report at least every 60 days on TARP activities and performance. This 60-day report describes the status of GAO's prior TARP performance audit recommendations to Treasury as of August 2016. In particular, this report discusses Treasury's implementation of GAO's recommendations focusing on two programs: CPP and MHA. GAO's methodologies included assessing relevant documentation from Treasury, interviewing Treasury officials, and reviewing prior TARP reports issued by GAO. As of August 2016, GAO's performance audits of the Troubled Asset Relief Program (TARP) activities have resulted in 74 recommendations to the Department of the Treasury (Treasury). Treasury has implemented 62 of the 74 recommendations, some of which were aimed at improving the transparency and internal controls of TARP. Five recommendations remain open, all pertaining to the Making Home Affordable (MHA) program, a collection of housing programs designed to help homeowners avoid foreclosure. Of the five: Treasury has partially implemented three open MHA recommendations—that is, it has taken some steps toward implementation but needs to take more actions. For example, in March 2016, GAO recommended that Treasury deobligate funds that its review showed would likely not be expended. Treasury's most recent estimates identified $4.7 billion in potential excess funds, of which Treasury has deobligated $2 billion as of August 2016. Two additional MHA recommendations remain open—that is, Treasury has not taken steps to implement them. GAO recommended that Treasury take steps to assess the extent to which servicers have established internal control programs that monitor compliance with fair lending laws applicable to MHA programs. GAO also recommended that Treasury establish a standard process to better ensure that changes to TARP-funded MHA programs are based on comprehensive cost-benefit analyses. 
Treasury told GAO that it would consider this recommendation but noted that it plans no major program policy changes given the December 30, 2016, application deadline for the MHA program. Seven recommendations have been closed but were not implemented. Five were related to the Capital Purchase Program (CPP) and MHA and two to other TARP activities. Generally, these recommendations were closed because GAO determined that the recommendations were no longer applicable. GAO continues to maintain that Treasury should take action to fully implement the three partially implemented and two open MHA recommendations. GAO will continue to assess the status of these recommendations considering new program activity and any further actions taken by Treasury. |
Depository institutions—banks, thrifts, and credit unions—have attained a unique and central role in U.S. financial markets through their deposit- taking, lending, and other activities. Individuals have traditionally placed a substantial amount of their savings in federally insured depository institutions. In addition, the ability to accept deposits transferable by checks and other means has allowed depository institutions to become principal agents or middlemen in many financial transactions and in the nation’s payment system. Depository institutions typically offer a variety of savings and checking accounts, such as ordinary savings, certificates of deposits, interest-bearing checking, and noninterest-bearing checking accounts. Also, the same institutions may offer credit cards, home equity lines of credit, real estate mortgage loans, mutual funds, and other financial products. In the United States, regulation of depository institutions depends on the type of charter the institution chooses. The various types of charters can be obtained at the state or national level and cover: (1) commercial banks, which originally focused on the banking needs of businesses but over time broadened their services; (2) thrifts, which include savings banks, savings associations, and savings and loans and which were originally created to serve the needs—particularly the mortgage needs—of those not served by commercial banks; and (3) credit unions, which are member-owned cooperatives run by member-elected boards with a historic emphasis on serving people of modest means. All depository institutions have a primary federal regulator if their deposits are federally insured. State regulators participate in the regulation of institutions with state charters. Specifically, the five federal banking regulators charter and oversee the following types of depository institutions: OCC charters and supervises national banks. As of December 30, 2006, there were 1,715 commercial banks with national bank charters. These banks held the dominant share of bank assets, about $6.8 trillion. The Federal Reserve serves as the regulator for state-chartered banks that opt to be members of the Federal Reserve System and the primary federal regulator of bank holding companies, including financial holding companies. As of December 30, 2006, the Federal Reserve supervised 902 state member banks with total assets of $1.4 trillion. FDIC supervises all other state-chartered commercial banks with federally insured deposits, as well as federally insured state savings banks. As of December 30, 2006, there were 4,785 state-chartered banks and 435 state-chartered savings banks with $1.8 trillion and $306 billion in total assets, respectively. In addition, FDIC has backup examination authority for federally insured banks and savings institutions of which it is not the primary regulator. OTS charters and supervises federally chartered savings associations and serves as the primary federal regulator for state-chartered savings associations and their holding companies. As of December 30, 2006, OTS supervised 761 federally chartered and 84 state chartered thrifts with combined assets of $1.4 trillion. NCUA charters, supervises, and insures federally chartered credit unions and is the primary federal regulator for federally insured state chartered credit unions. As of December 30, 2006, NCUA supervised 5,189 federally chartered and insured 3,173 state chartered credit unions with combined assets of $710 billion. 
These federal regulators conduct on-site examinations and off-site monitoring to assess institutions’ financial condition and compliance with federal banking and consumer laws. Additionally, as part of their oversight the regulators issue regulations, take enforcement actions, and close failed institutions. Regulation DD, which implements TISA, became effective with mandatory compliance in June 1993. The purpose of the act and its implementing regulations is to enable consumers to make informed decisions about their accounts at depository institutions through the use of uniform disclosure documents. These disclosure documents are intended to help consumers “comparison shop” by providing information about fees, annual percentage yields, interest rates, and other terms for deposit accounts. The regulation is supplemented by “staff commentary,” which contains official Federal Reserve staff interpretations of Regulation DD. Since the initial implementation date for Regulation DD, several amendments have been made to the regulation and the corresponding staff commentary. For example, the Federal Reserve made changes to Regulation DD, effective July 1, 2006, to address concerns about the uniformity and adequacy of information provided to consumers when they overdraw their deposit accounts. Credit unions are governed by a substantially similar regulation issued by NCUA. Regulation E, which implements the Electronic Fund Transfer Act, became effective in May 1980. The primary objective of the act and Regulation E is the protection of individual consumers engaging in electronic funds transfers (EFT). Regulation E provides a basic framework that establishes the rights, liabilities, and responsibilities of participants in electronic fund transfer systems such as ATM transfers, telephone bill-payment services, point-of-sale terminal transfers in stores, and preauthorized transfers from or to consumer's bank accounts (such as direct deposit and Social Security payments). The term “electronic fund transfer” generally refers to a transaction initiated through an electronic terminal, telephone, computer, or magnetic tape that instructs a financial institution either to credit or to debit a consumer's asset account. Regulation E requires financial institutions to provide consumers with initial disclosures of the terms and conditions of EFT services. The regulation allows financial institutions to combine the disclosure information required by the regulation with that required by other laws such as TISA as long as the information is clear and understandable and is available in a written form that consumers can keep. Paying or honoring customers’ occasional or inadvertent overdrafts of their demand deposit accounts has long been an established practice at depository institutions. As shown in figure 1, depository institutions have four options when a customer attempts to withdraw or access funds from an account that does not have enough money in it to cover the transaction, and fees can be assessed for each of these options. The institution can (1) cover the amount of the overdraft by tapping a linked account (savings, money market, or credit card) established by the customer; (2) charge the overdraft to a linked line of credit; (3) approve the transaction (if electronic) or honor the customer’s check by providing an ad hoc or “courtesy” overdraft; or (4) deny the transaction or decline to honor the customer’s check. 
The first two options require that customers have created and linked to the primary checking account one or more other accounts or a line of credit in order to avoid overdrafts. The depository institution typically waives fees or may charge a small fee for transferring money into the primary account (a transfer fee). Depository institutions typically charge the same amount for a courtesy overdraft (an overdraft fee) as they do for denying a transaction for insufficient funds (an insufficient funds fee). In addition to fees associated with insufficient funds transactions, institutions may charge a number of other fees for checking and savings account services and transactions. As shown in table 1, these fees include periodic service charges associated with these accounts and special service fees assessed on a per-transaction basis. Our analysis of data from private vendors showed that a number of bank fees—notably charges for insufficient funds and overdraft transactions—have generally increased since 2000, while others have decreased. In general, banks and thrifts charged higher fees than credit unions for checking and savings account services, and larger institutions charged more than smaller institutions. During this same period, the portion of depository institutions’ revenues derived from noninterest sources—including, but not limited to, fees on savings and checking accounts—increased somewhat. Changes in both consumer behavior and practices of depository institutions are likely influencing trends in fees, but limited data exist to demonstrate the effect of specific factors. FDIC is currently conducting a special study of overdraft programs that should provide important insights on how these programs operate, as well as information on characteristics of customers who pay overdraft bank fees. Data we obtained from vendors—based on annual surveys of hundreds of banks, thrifts, and credit unions on selected banking fees—indicated that some checking and savings account fee amounts generally increased between 2000 and 2007, while a few fell, notably monthly maintenance fees. For example, as shown in figure 2, average insufficient funds and overdraft fees have increased by about 11 percent, stop payment order fees by 17 percent, and return deposited item fees by 49 percent since 2000. Across all institutions, insufficient funds and overdraft fees were, on average, the highest dollar amounts of the fees reported. For example, the average insufficient funds fee among the institutions surveyed by Moebs $ervices in 2006 was $24.02, while among the institutions surveyed by Informa Research Services it was $26.07. Data from Informa Research Services also indicated that since 2004 a small number of institutions (mainly large banks) have been applying tiered fees to certain transactions, such as overdrafts. For example, an institution may charge one amount for the first three overdrafts in a year (tier 1), a higher amount for the fourth through sixth overdrafts that year (tier 2), and an even higher amount for the seventh and subsequent overdrafts in a single year (tier 3). Of the institutions that applied tiered fees in 2006, the average overdraft fees were $26.74, $32.53, and $34.74 for tiers 1, 2, and 3, respectively. 
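To illustrate how such a tiered schedule translates into charges, the following is a minimal sketch; the tier boundaries and dollar amounts are assumptions for illustration only, taken from the 2006 survey averages cited above, and do not represent any particular institution's actual fee schedule.

```python
# Hypothetical three-tier overdraft fee schedule, for illustration only.
# The tier boundaries (overdrafts 1-3, 4-6, and 7+ in a year) and the fee
# amounts are assumed from the 2006 survey averages cited in this report,
# not from any specific institution's published schedule.

def tiered_overdraft_fee(overdraft_number: int) -> float:
    """Return the assumed fee for the Nth overdraft in a single year."""
    if overdraft_number <= 3:      # tier 1: first three overdrafts in a year
        return 26.74
    elif overdraft_number <= 6:    # tier 2: fourth through sixth overdrafts
        return 32.53
    else:                          # tier 3: seventh and subsequent overdrafts
        return 34.74

# Example: total fees for a customer who overdraws eight times in one year.
total = sum(tiered_overdraft_fee(n) for n in range(1, 9))
print(f"Total overdraft fees for 8 overdrafts: ${total:.2f}")  # $247.29
```

Under these assumed amounts, a customer with eight overdrafts in a single year would incur $247.29 in overdraft fees.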
The data from these vendors also indicate that fee amounts for some transactions or services varied or generally declined during this period. For example: The average ATM surcharge fee (assessed by a depository institution when its ATM is used by a nonaccount holder) among institutions surveyed by Moebs $ervices was $0.95 in 2000, rising to $1.41 in 2003, and declining to $1.34 in 2006. This variability was also evident in the fees charged by institutions surveyed by Informa Research Services. The average foreign ATM fee (assessed by a depository institution when its account holders use another institution's ATM) generally declined, from $0.92 in 2000 to $0.61 in 2006 among institutions surveyed by Moebs $ervices and from $1.83 to $1.14 over the same period among institutions surveyed by Informa Research Services. The average monthly maintenance fee on standard noninterest-bearing checking accounts decreased from $6.81 in 2000 to $5.41 in 2006 among institutions surveyed by Informa Research Services (Moebs $ervices did not provide data on this fee). Additionally, an increasing number of the surveyed institutions offered free checking accounts (with a minimum balance required to open the account) over this period. For example, in 2001 almost 30 percent of the institutions offered free checking accounts, while by 2006 the share had grown to about 60 percent. Finally, some fees declined both in amount and in prevalence. For example, Moebs $ervices reported that the institutions it surveyed charged annual ATM fees, generally for issuing a card to customers for their use strictly at ATMs, that averaged $1.37 in 2000 and $1.14 in 2003. However, Moebs $ervices stopped collecting data on this fee because, according to a Moebs $ervices official, fewer and fewer institutions reported charging it. Similarly, Moebs $ervices reported that the institutions it surveyed charged an annual debit card fee, generally for issuing a card to customers for their use at ATMs, that averaged $0.94 in 2000 and $1.00 in 2003, but it stopped collecting these data as well. (Informa Research Services reported data on these fees through 2006, when they averaged $0.44 and $0.74, respectively.) Appendix III contains further details on the data reported by Moebs $ervices and Informa Research Services, in both nominal and real dollars. A number of factors may explain why some fees increased while others decreased. For example, greater use of automation and the lower cost of technology may explain why certain ATM fees have decreased or been eliminated altogether. Additionally, competition among depository institutions for customers likely has contributed to the decrease in monthly maintenance fees and the increased prevalence of "free checking" accounts. Factors that may be influencing trends in fees overall are discussed subsequently in this report. Using data supplied by the two vendors, we compared the fees for checking and savings accounts by type of institution and found that, on average, banks and thrifts charged more than credit unions for almost all of them (the exception was the fee for returns of deposited items). For example, banks and thrifts charged on average roughly $3.00 more than credit unions for insufficient funds and overdraft fees throughout the period. However, on average credit unions charged almost $6.00 more than banks and thrifts for returns of deposited items. The amounts institutions charged for certain transactions also varied by the institution's size, as measured by assets.
Large institutions—those with more than $1 billion in assets—on average charged more for the majority of fees than midsized or small institutions—those with assets of $100 million to $1 billion and less than $100 million, respectively. Large institutions on average charged between $4.00 and $5.00 more for insufficient funds and overdraft fees than smaller institutions. Further, on average, large banks and thrifts consistently charged the highest insufficient funds and overdraft fees, while small credit unions consistently charged the lowest. Specifically, in 2007 large banks and thrifts charged an average fee of about $28.00 for insufficient funds and overdraft fees, while small credit unions charged an average fee of around $22.00. While large institutions in general had higher fees than other sized institutions, smaller institutions charged considerably more for returns of deposited items. The results of our analysis are consistent with the Federal Reserve’s 2003 report on bank fees, which showed that large institutions charged more than medium- and small-sized institutions (banks and thrifts combined) for most fees. Our analysis of Informa Research Services data also showed that, controlling for both institution type and size, institutions in some regions of the country, on average, charged more for some fees, such as insufficient funds and overdraft fees, than others. For example, in 2006 the average overdraft fee in the southern region was $28.18, compared with a national average of $26.74 and a western region average of $24.94. Between 2000 and 2006, the portion of depository institutions’ income from noninterest sources, including income generated from bank fees, varied but generally increased. As shown in figure 3, banks’ and thrifts’ noninterest income rose from 24 to 27 percent of total income between 2000 and 2006 (peaking at 33 percent in 2004) and credit unions’ noninterest income rose from 11 to 14 percent (peaking at 20 percent in 2004). The percent of noninterest income appeared to have an inverse relationship to changes in the federal funds rate—the interest rate at which depository institutions lend balances at the Federal Reserve to other depository institutions— which is an indicator of interest rate changes during the period. Low interest rates combined with increased competition from other lenders can make it difficult for banking institutions to generate revenues from interest rate “spreads,” or differences between the interest rates that can be charged for loans and the rates paid to depositors and other sources of funds. However, noninterest income includes revenue derived from a number of fee-based banking services, not all of them associated with checking and savings accounts. For example, fees from credit cards, as well as fees from mutual funds sales commissions, are included in noninterest income. Thus, noninterest income cannot be used to specifically identify either the extent of fee revenue being generated, or the portion that is attributable to any specific fee. Among other financial information, banks and thrifts are required to report data on service charges on deposit accounts (SCDA), which includes most of the fees associated with checking and deposit accounts. Specifically, SCDA includes, among other things, account maintenance fees, charges for failing to maintain a minimum balance, some ATM fees, insufficient funds fees, and charges for stop payment orders. 
As figure 4 shows, banks' and thrifts' SCDA, and to a somewhat greater extent credit unions' fee income as a percentage of total income, increased overall during the period, with a slight decline in recent years. However, it should be noted that credit union fee income includes income generated from both deposit accounts and other products that credit unions offer, such as fees for credit cards and noncustomer use of proprietary ATMs; thus, the percentage of fee income they report is not directly comparable to the service charges reported by banks and thrifts. Because institutions do not have to report SCDA by line item, it is difficult to estimate the extent to which specific fees on checking and deposit accounts contributed to institutions' revenues or how these contributions have changed over the years. Further, some fees that banking customers incur may not be covered by SCDA. For example, institutions report monthly account maintenance fee income as SCDA, but not income earned from fees charged to a noncustomer, such as fees for the use of their proprietary ATMs. Similarly, credit unions' reported fee income cannot be used to identify fee revenues from specific checking and savings account fees. Since the mid-1990s, consumers have increasingly used electronic forms of payment such as debit cards for many transactions, from retail purchases to bill payment. By 2006 more than two-thirds of all U.S. noncash payments were made by electronic payments (including credit cards, debit cards, automated clearing house, and electronic benefit transfers), while the number of paper payments (e.g., checks) had decreased, due largely to the rapid growth in the use of debit cards. Generally, these electronic payments are processed more quickly than traditional paper checks. For example, debit card transactions result in funds leaving customers' checking accounts during or shortly after the transaction, as opposed to checks, which may not be debited from a customer's account for a few days (although depository institutions have also begun to process checks faster, in part, as a result of the Check Clearing for the 21st Century Act (Check 21 Act) and implementing regulations, which became effective in late 2004). Despite this overall shortening of time or "float" between the payment transaction and the debiting of funds from a consumer's account, depository institutions can hold certain nonlocal checks deposited by a consumer for up to 11 days. According to consumer groups and bank representatives, this creates the potential for increased incidences of overdrafts if funds are debited from a consumer's account faster than deposits are made available for withdrawal. The shift in consumer payment preferences has occurred rather quickly, and we identified little research on the extent to which the increased use of electronic payments, such as debit cards, has affected the prevalence of specific deposit account fees, such as overdraft or insufficient fund fees. Additionally, some institutions have internal policies for posting deposits to and withdrawals from customer accounts that can affect the incidence of fees. For example, consumer group representatives, bank representatives, and federal regulatory officials told us that many institutions post debit transactions from the largest (highest dollar amount) to the smallest, regardless of the order in which the customer initiated them. This practice can affect the number of overdraft fees charged to a customer, as the sketch and example below illustrate.
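The following minimal sketch (in Python) illustrates how posting order alone can change the number of items that overdraw an account; the amounts match the example discussed below, and the posting rule is a simplified assumption rather than any institution's actual logic.

```python
# Balance and transaction amounts are hypothetical; the posting rule (pay every item,
# counting a courtesy overdraft for each item posted against insufficient funds) is a
# simplification of the practice described in the text.
def count_overdraft_items(balance, transactions, largest_first=True):
    """Count items that overdraw the account under a given posting order."""
    overdrafts = 0
    for amount in sorted(transactions, reverse=largest_first):
        balance -= amount
        if balance < 0:
            overdrafts += 1
    return overdrafts

items = [590, 25, 25, 25]  # one large payment and three small debit transactions
print(count_overdraft_items(600, items, largest_first=True))   # 3 overdraft fees
print(count_overdraft_items(600, items, largest_first=False))  # 1 overdraft fee
```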
For example, if a customer had only $600 available in the account, processing a $590 payment before three transactions of $25 each would result in three overdrafts, whereas processing the payments from smallest to largest would result in only one. Banking officials said that processing transactions from largest to smallest ensures that consumers' larger, and presumably more important, payments, such as mortgage payments, are made. One of the federal banking regulators—OTS—issued guidance in 2005 stating that institutions it regulates should not manipulate transaction clearing steps (including check clearing and batch debit processing) to inflate fees. We were unable to identify comprehensive information regarding the extent to which institutions were using this or other methods (chronological, smallest-to-largest, etc.) of processing payments. Further, some depository institutions have automated the process used to approve overdrafts and have increasingly marketed the availability of overdraft protection programs to their customers. Historically, depository institutions have used their discretion to pay overdrafts for consumers, usually imposing a fee. Over the years, to reduce the costs of reviewing individual items, some institutions have established policies and automated the process for deciding whether to honor overdrafts, but generally institutions are not required to inform customers about internal policies for determining whether an item will be honored or denied. In addition, third-party vendors have developed and sold automated programs to institutions, particularly to smaller institutions, to handle overdrafts. According to the Federal Reserve, what distinguishes the vendor programs from in-house automated processes is the addition of marketing plans that appear designed to (1) promote the generation of fee income by disclosing to account holders the dollar amount by which the consumer typically will be allowed to overdraw the account and (2) encourage consumers to use the service to meet short-term borrowing needs. An FDIC official noted that some vendor contracts tied the vendor's compensation to an increase in the depository institution's fee revenues. We were unable to identify information on the extent to which institutions were using automated overdraft programs developed and sold by third-party vendors or the criteria that these programs used. Representatives from a few large depository institutions told us that they are using software programs developed in-house to determine which account holders would have overdrafts approved. According to consumer groups and federal banking regulators, software vendors appear to be primarily marketing automated overdraft programs to small and midsized institutions. The 2005 interagency guidance on overdraft protection programs encouraged depository institutions to disclose to consumers how transactions would be processed and how fees would be assessed. An FDIC official noted that, while no empirical data are available, institutions' advertising of overdraft protection programs appears to have diminished since publication of the interagency guidance. Because fees for overdrafts and instances of insufficient funds may be more likely to occur in accounts with lower balances, there is some concern that they may be more likely among consumers who traditionally have the least financial means, such as young adults and low- and moderate-income households.
We were not able to analyze the demographic characteristics of customers who incur bank fees because doing so would require transaction-level data for all account holders—data that are not publicly available. We identified only two studies—one by an academic researcher and one by a consumer group—that discussed the characteristics of consumers who pay bank fees. Neither study obtained a sample of customers who overdraw that was representative of the U.S. population. According to the academic researcher's study, which used transaction-level account data for one small Midwest bank, overdrafts were not significantly correlated with consumers' income levels, although younger consumers were more likely to have overdrafts than consumers of other ages. However, the results of this study cannot be generalized to the larger population because the small institution used was not statistically representative of all depository institutions. The consumer group study, which relied on a survey in which individuals with bank accounts were interviewed, found that those bank customers who had had two or more overdrafts in the 6 months before the date of the interview were more often low-income, single, and nonwhite. However, this study also had limitations, including the inherent difficulty of contacting and obtaining cooperation from a representative sample of U.S. households by telephone and its reliance on consumers' recall of and willingness to accurately report past events rather than on actual reviews of their transactions. While we cannot fully assess the quality of results from these two studies, we note them here to illustrate the lack of definitive research in this area. Partly in response to consumer concerns raised by overdraft protection products, FDIC is currently conducting a two-part study on overdraft protection products offered by the institutions it supervises. The results of this study may provide information on the types of consumers who pay bank fees. For both parts, FDIC is collecting data that are not currently available in the call reports or other standard regulatory reports. During the first phase of its study, FDIC collected data from 500 state-chartered nonmember banks about their overdraft products and policies. Data from the first phase will reveal how many FDIC-regulated banks offer overdraft protection programs and the details of these programs, such as how many of them are automated. FDIC expects to complete the data collection effort at the end of 2007. The second phase involves collecting transaction-level data on the depositors who use the overdraft products for 100 of the 500 institutions for a year. As part of this phase, FDIC plans to use income data by U.S. Census Bureau tract as a proxy for account holders' income to try to determine the characteristics of consumers who incur overdraft fees. FDIC expects to complete the analysis at the end of 2008. Federal regulators assess depository institutions' compliance with the disclosure requirements of Regulations DD and E during examinations by reviewing an institution's written policies and procedures, including a sample of disclosure documents. In general, regulators do not review the reasonableness of fees unless there are safety and soundness concerns.
Since 2005, NCUA has included examination procedures specifically addressing institutions’ adherence to the 2005 interagency guidance concerning overdraft protection products and, in September 2007, all of the regulators revised their Regulation DD examination procedures to include reviews of the disclosures associated with such products offered by institutions that advertise them. In general, examinations are risk-based—that is, targeted to address factors that pose risks to the institution—and to help focus their examinations of individual institutions, the regulators review consumer complaints. Our analysis of complaint data from each of the federal regulators showed that while they receive a large number of checking account complaints, a small percentage of these complaints concerned the fees and disclosures associated with either checking or savings accounts. The federal regulators reported identifying a number of violations of the disclosure sections of Regulations DD and E during their examinations but collectively identified only two related formal enforcement actions from 2002 through 2006. Finally, officials from the six state regulators told us that, while they may look at compliance with Regulations DD and E, their primary focus is on safety and soundness issues and compliance with state laws and regulations, and they reported receiving few consumer complaints associated with checking and savings account fees and disclosure issues. Our review of the examination handbooks and examination reports indicated that the five federal regulators used similar procedures to assess compliance with Regulations DD and E (as discussed below, NCUA also includes steps to assess credit unions’ adherence to the 2005 interagency guidance on overdraft protection products, but that is distinct from assessing compliance with regulatory requirements). In general, the Regulation DD and E compliance examination procedures for each of the five federal banking regulators called for examiners to verify that the institution had policies or procedures in place to ensure compliance with all provisions of the regulations; review a sample of account disclosure documents and notices required by the regulation to determine whether contents were accurate and complete; and review a sample of the institution’s advertisements to (1) determine if the advertisements were misleading, inaccurate, or misrepresented the deposit contract and (2) ensure that the advertisements included all required disclosures. Federal regulators’ examination procedures for Regulations DD and E do not require examiners to evaluate the reasonableness of fees associated with checking and savings accounts. According to the Federal Reserve, the statutes administered by the regulators do not specifically address the reasonableness of fees assessed. Additionally, officials of the federal regulators explained that there were no objective industry-wide standards to assess the “reasonableness” of fees. OCC officials told us that an industry-wide standard would not work because, among other things, fees vary among banks that operate in different geographical areas and that competitive conditions in local markets determine fees. According to the federal regulatory officials, each depository institution is responsible for setting the fee for a particular product and service, and regulators look at rates or pricing issues only if there is a safety and soundness concern. 
For example, NCUA officials told us that an examiner’s finding that fee income was excessive could create safety and soundness issues, depending on the way the fees were generated and how the resulting revenues were spent. The regulators stated that while they did not evaluate the reasonableness of fees, the disclosure requirements of Regulations DD and E were intended to provide consumers with information that allow them to compare fees across institutions. Additionally, they told us that market forces should inhibit excessive fees since the financial institution would likely lose business if it decided to charge a fee that was significantly higher than its competitors. On September 13, 2007, the Federal Financial Institutions Examination Council’s Task Force on Consumer Compliance—a formal interagency body composed of representatives of the Federal Reserve, FDIC, NCUA, OCC, and OTS—approved revised interagency compliance examination procedures for Regulation DD. Officials of each of the federal regulators told us that their agencies either had begun or were in the process of implementing the updated examination procedures. Among other changes, the revised examination procedures address the Regulation DD disclosure requirements for institutions that advertise the payment of overdrafts. Specifically, the revised examination procedures ask the examiners to determine whether the institution clearly and conspicuously discloses in its advertisements (1) the fee for the payment of each overdraft, (2) the categories of transactions for which a fee may be imposed for paying an overdraft, (3) the time period by which a consumer must repay or cover any overdraft, and (4) the circumstances under which the institution will not pay an overdraft. These items are among those that were identified as “best practices” by the 2005 interagency guidance. According to the guidance, clear disclosures and explanations to consumers about the operation, costs, and limitations of an overdraft protection program are fundamental to using such protection responsibly. Furthermore, the guidance states that clear disclosures and appropriate management oversight can minimize potential customer confusion and complaints, as well as foster good customer relations. The interagency guidance identifies best practices currently observed in or recommended by the industry on marketing, communications with consumers, and program features and operations. For example, the best practices include marketing the program in a way that does not encourage routine overdrafts, clearly explaining the discretionary nature of the program, and providing the opportunity for consumers to opt out of the program. Prior to the revised Regulation DD examination procedures, NCUA had adopted procedures to assess the extent to which institutions it examines followed the interagency guidance. In December 2005, NCUA adopted “bounce protection” (that is, overdraft protection) examination procedures as part of the agency’s risk-focused examination program. The examination procedures were developed to coincide with the issuance of the 2005 interagency guidance on overdraft protection programs, according to an NCUA official. In an NCUA letter to credit unions, the agency stated that “credit unions should be aware the best practices are minimum expectations for the operation of bounce protection programs.” NCUA’s examination procedures included a review of several key best practices. 
For example, the examination procedures assess whether credit unions provided customers with the opportunity to elect overdraft protection services or, if enrollment in such a program was automatic, to opt out. In addition to other areas of review, the examination procedures include a review of whether the credit union distinguished overdraft protection from "free" account features and whether the credit union clearly disclosed the fees of its overdraft protection program. OTS also had overdraft protection examination procedures in place that addressed its guidance, but these were limited to a review of compliance-related employee training and the materials used to market or educate customers about the institution's overdraft protection programs. Officials from the Federal Reserve, OCC, and FDIC reported that, beyond the recent revisions to Regulation DD examination procedures, their agencies did not have specific examination procedures related to the 2005 interagency guidance because the best practices are not enforceable by law. These officials told us that, while not following a best practice from the interagency guidance did not constitute a violation of related laws or regulations, they encourage institutions to follow the best practices. An FDIC official noted that a deviation from the guidance could serve as a "red flag" for an examiner to look more closely for potential violations. Officials of the federal banking regulators explained that examiners use complaint data to help focus examinations that they are planning or to alter examinations already in progress. For example, according to one regulator, if consumers file complaints because they have not received a disclosure document prior to opening an account, this could signify a violation of Regulation DD, which the examiners would review as part of the examination for this regulation. The officials noted that consumer complaints are often filed with, and resolved by, the financial institution involved, in which case the consumer would not be likely to contact a federal banking regulator. However, if not satisfied with the financial institution's response, the consumer would then likely file a complaint with the federal banking regulator. Consumers may also file a complaint directly with federal regulators without contacting the financial institution about a problem. In either case, regulators are required to monitor the situation until the complaint is resolved. According to the regulators' complaint data, most of the complaints received from 2002 to 2006 involved credit cards, although a significant number of complaints were related to checking accounts and a somewhat smaller number involved savings accounts (fig. 5). In analyzing complaints specifically about checking and savings accounts from 2002 through 2006, we found that, on average, about 10 percent were related to fees, and 3 percent were related to disclosures. (For information on how the Federal Reserve, FDIC, OCC, and OTS resolved complaints, see app. IV.) Collectively, fee and disclosure complaints represented less than 5 percent of all complaints received during this period. Officials of the banking regulators told us that the overwhelming bulk of complaints they received on checking and savings accounts concerned a variety of other issues, including problems opening or closing an account, false advertising, and discrimination.
Among the regulators, OCC included in its complaint data the specific part of the regulation that was the subject of the complaint. Of the consumer complaints about fees that OCC received from 2002 through 2006, 39 percent were for "unfair" fees (concerning the conditions under which fees were applied), 2 percent were for new fees, 6 percent were for "high" fees (the amount of the fees), and 53 percent concerned fees in general. The majority of disclosure-related complaints that OCC received during this period were for the Regulation DD provision that, in part, requires that depository institutions provide account disclosures to a consumer before an account is opened or a service is provided, whichever is earlier, or upon request. OCC's analysis of these complaints serves to identify potential problems—at a particular bank or in a particular segment of the industry—that may warrant further investigation by examination teams, supervisory guidance to address emerging problems, or enforcement action. The federal banking regulators' examination data for the most recent 5 calendar years (2002 through 2006) showed a total of 1,674 instances in which the regulators cited depository institutions for noncompliance with the fee-related disclosure requirements of Regulation DD (1,206 cases) or Regulation E (468 cases). On average, this is about 335 instances annually among the nearly 17,000 depository institutions that these regulators oversee. As shown in table 2, most of the disclosure-related violations were reported by FDIC—83 percent of the Regulation DD disclosure-related violations (998 of 1,206) and 74 percent of the Regulation E disclosure-related violations (348 of 468). According to FDIC officials, one reason for the larger number of fee-related violations identified by FDIC is the large number of institutions for which it is the primary federal regulator (5,220 depository institutions as of December 31, 2006). Also, differences among the regulators may arise because they do not count violations in exactly the same way. According to our analysis of the regulators' data, the most frequent violation associated with the initial disclosure requirements of Regulation DD was noncompliance with the requirement that disclosure documents be written in a clear and conspicuous manner, in a form that customers can keep, and reflect the terms of the legal obligation of the account agreement between the consumer and the depository institution (1,053 cases). Examiners reported violations of two other disclosure provisions of Regulation DD. First, they found violations of the requirement that depository institutions provide account disclosure documents to a consumer before an account is opened or a service is provided, whichever is earlier, or upon request (124 cases). Second, they reported violations of the requirement that disclosure documents state the amount of any fee that may be imposed in connection with the account or an explanation of how the fee will be determined and the conditions under which it may be imposed (29 cases). The most frequent violation associated with the initial disclosure requirements of Regulation E was noncompliance with the requirement that financial institutions make the disclosure documents available at the time a consumer contracts for an EFT or before the first EFT is made involving the consumer's account (321 cases).
Other disclosure provisions from Regulation E for which examiners cited violations included those that required disclosure statements to be in writing, clear and readily understandable, and in a form that customers can keep (5 cases) and to list any fees imposed by the financial institution for EFTs or for the right to make transfers (142 cases). According to officials of the federal banking regulators, examiners are typically successful in getting the financial institutions to take corrective action on violations either during the course of the examination or shortly thereafter, negating the need to take formal enforcement action. FDIC, NCUA, OCC, and Federal Reserve officials reported that from 2002 to 2006 they had not taken any formal enforcement actions solely related to violations of the disclosure requirements from Regulations DD and E, while OTS reported taking two such actions during the period. Officials of all six of the state banking regulators that we contacted told us that the primary focus of their examinations is on safety and soundness issues and compliance with state laws and regulations. Officials of four of the six state banking regulators we contacted told us their examiners also assess compliance with Regulation DD, and three of these four indicated that they assess compliance with Regulation E as well. Representatives of the four state banking regulators also told us that if they identify a violation and no federal regulator is present, they cite the institution and forward this information to the appropriate federal banking regulator. The other two state banking regulators said that they review compliance with federal regulations, including Regulations DD and E, only if the federal banking regulators have identified noncompliance with federal regulations during the prior examination. Officials in four states said that their state laws and regulations contained additional fee and disclosure requirements beyond those contained in Regulations DD and E. For example, according to Massachusetts state banking officials, Massachusetts bank examiners review state-chartered institutions for compliance with a state requirement that caps the fees on returns of deposited items. In another example, an Illinois law restricts institutions from charging an ATM fee on debit transactions made with an electronic benefits card (a card that beneficiaries used to access federal or state benefits, such as food stamp payments), according to Illinois state banking officials. Additionally, these state officials told us that Illinois state law requires all state-chartered institutions to annually disclose their fee schedules for consumer deposit accounts. According to an official at the New York state banking department, their state has a number of statutes and regulations concerning bank fees and their disclosure to consumers and their state examiners review institutions’ compliance with these requirements. The laws and regulations cover, among other things, permissible fees, required disclosure documents, and maximum insufficient fund fees, according to the New York state officials. Two of the states reported that, in conducting examinations jointly with the federal regulators, they had found violations of the Regulation DD and E disclosure provisions from 2002 to 2006 (one state reported 1 violation of Regulation DD, and one state reported 16 violations of Regulation DD and 10 violations of Regulation E). 
Four of the states did not report any violations (in one case, the state agency reported that they did not collect data on violations). Three states also reported that they had not taken any formal enforcement actions against institutions for violations of Regulation DD or E disclosure provisions; two states reported that they did not collect data on enforcement actions for violations of these regulations; one state did not report any data to us on enforcement actions. Regarding consumer complaints, officials in two states said that they did not maintain complaint data concerning fees and disclosures associated with checking and savings accounts, and the other four states reported relatively few complaints associated with fees and disclosures. For example, Massachusetts reported a total of 89 complaints related to fees and disclosures during the period, in comparison to 4,022 total complaints over the period. The results of our requests for information on fees or account terms and conditions at depository institutions we visited, as well as our visits to institutions’ Web sites, suggest that consumers may find it difficult to obtain such information upon request prior to opening a checking or savings account. A number of factors could explain the difficulties we encountered in obtaining comprehensive information on fees and account terms and conditions, including branch staff potentially not being knowledgeable about federal disclosure requirements or their institution’s available disclosure documents. Further, federal banking regulators’ examination processes do not assess whether potential customers can easily obtain information that institutions are required to disclose. Potential customers unable to obtain such information upon request prior to opening an account will not be in a position to make meaningful comparisons among institutions, including the amounts of fees they may face or the conditions under which fees would be charged. As we have seen, TISA requires, among other things, that depository institutions provide consumers with clear and uniform disclosures of the fees that can be assessed against all deposit accounts, including checking and savings accounts, so that consumers may make a meaningful comparison between different institutions. Depository institutions must provide these disclosures to consumers before they open accounts or receive a service from the institution or upon a consumer’s request. Regulation DD and the accompanying staff commentary specify the types of information that should be contained in these disclosures, including minimum balance required to open an account; monthly maintenance fees and the balance required to avoid them; fees charged when a consumer opens or closes an account; fees related to deposits or withdrawals, such as charges for using the institution’s ATMs; and fees for special services—for example, insufficient funds or charges for overdrafts and stop payment order fees on checks that have been written but not cashed. Regulation DD also requires depository institutions to disclose generally the conditions under which a fee may be imposed—that is, account terms and conditions. For example, institutions must specify the categories of transactions for which an overdraft fee may be imposed but do not have to provide an exhaustive list of such transactions. 
While depository institutions are required to provide consumers with clear and uniform disclosures of fees to enable meaningful comparisons among institutions, consumers may consider other factors when shopping among institutions. For example, federal banking regulators and one consumer group told us that convenience factors, such as the locations of branches or ATMs, are typically what consumers consider most, besides costs, when choosing where to open a checking or savings account. Our visits to branches of depository institutions nationwide suggested that some consumers may be unable to obtain, upon request, meaningful information with which to compare an institution's fees and how they are assessed before opening a checking or savings account. We also found that the institutions' Web sites generally did not provide comprehensive information on fees or account terms and conditions. Further, the documents that we did obtain during our visits did not always describe some key features of the institutions' internal policies and procedures that could affect the incidence or amount of overdraft fees assessed by the institution. To assess the ease or difficulty of obtaining a comprehensive list of fees and account terms and conditions associated with checking and savings accounts, GAO staff from 12 cities across the United States visited 185 branches of banks, thrifts, and credit unions. Collectively, these branches represented 154 different depository institutions. Posing as potential customers, we specifically requested a comprehensive list of fees and terms and conditions for checking and savings accounts that would allow us to compare such information across depository institutions. The results are summarized here. Comprehensive list of fees. We were unable to obtain a comprehensive list of fees for checking and savings accounts from 40 (22 percent) of the branches (representing 36 institutions). Instead, we obtained brochures describing only the features of different types of checking and savings accounts. Some of these brochures contained information on monthly maintenance fees and the minimum balance needed to avoid them. But these brochures did not contain information on other fees, such as overdraft or insufficient fund fees. While our success in obtaining a comprehensive list of fees varied slightly among institutions of different sizes, we did note greater variations among banks, credit unions, and thrifts. For example, we were unable to obtain a comprehensive list of fees at 18 percent of the 103 bank branches and 20 percent of the 46 credit union branches we visited (representing 14 banks and 9 credit unions, respectively), while the figure was 36 percent among the 36 thrift branches visited (representing 13 thrift institutions). Account terms and conditions. We were unable to obtain the terms and conditions associated with checking and savings accounts from 61 of the 185 branches (representing 54 depository institutions) that we visited (33 percent). Instead, as described earlier, we were provided with brochures on the different types of checking and savings accounts offered by the institution. We also observed little difference in our ability to obtain account terms and conditions information from institutions of different sizes but again found differences by types of institutions. For example, we were unable to obtain this information at 32 percent of the branches of small or midsized institutions (34 of 108), compared with 35 percent of the branches of large institutions (27 of 77).
With respect to the type of depository institution, we were unable to obtain these documents at 30 percent of the bank branches (31 of 103 branches, representing 25 banks), 35 percent of the credit union branches (16 of 46 branches, representing 16 credit unions), and 39 percent of the thrift branches (14 of 36 branches, representing 13 thrift institutions). For both the comprehensive list of fees and descriptions of account terms and conditions, we observed some differences among branches of a single depository institution. For example, we visited multiple branches of 23 depository institutions (that is, more than one branch of each of the 23). For four of these institutions, we were able to obtain all of the documents we requested from all of the branches. For the other 19 institutions, we encountered inconsistencies among the different branches in our ability to obtain the full set of information we requested. The results of our direct observations are generally consistent with those reported by the U.S. Public Interest Research Group (PIRG). In 2001, PIRG had its staff pose as consumers and visit banks to request fee brochures and reported that, in many cases, its staff members were unable to obtain this information despite repeated requests. Further, our results seem to be in accord with the violations data provided by the regulators; as noted previously, the most frequent violation of the fee-related disclosure provisions of Regulation DD cited by the regulators between 2002 and 2006 was noncompliance with the requirement that disclosure documents be written in a clear and conspicuous manner and in a form that customers can keep. While depository institutions are not required to have the comprehensive list of fees and account terms and conditions on Web sites if these sites are merely advertising and do not allow consumers to open an account online, we visited these Web sites as part of our effort to simulate a consumer trying to obtain information to compare checking and savings accounts across institutions. In visiting the Web sites of all the institutions that we visited in person, we were unable to obtain information on fees and account terms and conditions at more than half of them. For example, we were unable to obtain a comprehensive list of fees from 103 of the 202 Web sites (51 percent). In addition, we were unable to obtain the terms and conditions from 134 of the 202 (66 percent). Figure 6 compares the results of our visits to branches and Web sites of depository institutions. Some of the depository institutions’ Web sites nevertheless contained information on certain fees associated with checking and savings accounts. For example, most of the Web sites had information on monthly maintenance fees and ATM fees associated with checking accounts. Smaller percentages had information on fees for overdrafts and insufficient fund fees. For example, 87 percent provided information on monthly maintenance fees, 62 percent had information on ATM withdrawal fees, 41 percent contained information on overdraft fees, and 37 percent provided information on insufficient fund fees. Among branches at which we were unable to obtain a comprehensive list of fees, branch staff offered explanations suggesting that they may not be knowledgeable about federal disclosure requirements. 
As previously noted, depository institutions are required to provide consumers, upon request, with clear and uniform disclosures of the fees that can be assessed against checking and savings accounts so that consumers may make a meaningful comparison between different institutions. However, during our visits to branches of depository institutions, representatives at 14 branches told us that we had all the information on fees we needed to comparison shop, even though we determined that the documents they provided did not include a comprehensive list of fees that consumers opening accounts there might have to pay; representatives at seven branches told us that no comprehensive fee schedules were available; and representatives at four branches told us that we had to provide personal information or open an account in order to obtain a comprehensive list of fees. In addition, we observed differences in our ability to obtain the comprehensive list of fees and account terms and conditions among branches of 19 of the 23 depository institutions we visited that had multiple branches. This variation among branches of the same institution suggests that staff knowledge of the institution's available disclosure documents may have varied. Further, the examination procedures that federal banking regulators use to assess compliance with Regulation DD do not require examiners to verify whether new or potential customers are actually able to obtain the required disclosure documents before opening an account. (Rather, the examination procedures call for the examiner to review written policies and procedures and disclosure documents to ensure that they contain information required under the regulation.) As a result, examination results would not provide officials of depository institutions with information showing whether potential customers were experiencing difficulty obtaining information at particular branches. Because the results of our visits cannot be generalized to other institutions, and because the federal banking regulators do not assess the extent to which consumers are actually able to obtain disclosure documents, neither we nor the regulators know how widespread this problem may be, nor—to the extent that it does exist among institutions—the reasons for it. However, regardless of the cause, if consumers are unable to obtain key information upon request prior to opening an account, they will be unable to make meaningful distinctions regarding charges and terms of checking and savings accounts. The amounts of some fees associated with checking and savings accounts have grown over the past few years, while others have varied or declined. During the same time period, the portion of depository institutions' income derived from noninterest sources, including fees, has varied somewhat but has risen overall. Changes both in consumer behavior, such as increased use of electronic forms of payment, and in the terms and conditions of accounts offered by depository institutions may be influencing these trends in fees, but available data do not permit determining their exact effects. Similarly, we could find little information on the characteristics of consumers who are most likely to incur fees. However, the general upward trend in fees puts a premium on the effective disclosure of account terms and conditions, including the amounts of individual fees and the conditions under which they will be assessed, to consumers who are shopping for savings and deposit accounts.
While consumers may consider convenience or other factors, as well as costs, when choosing a depository institution, Regulation DD, as well as guidance issued by the federal banking regulators, is intended to ensure that consumers receive information needed to make meaningful comparisons among institutions regarding the savings and deposit accounts they offer. While the federal regulators take consumer complaints into account when determining the scope of their examinations of specific institutions, their examinations of compliance with Regulations DD and E consist of reviewing institutions' written policies, procedures, and disclosure documents. On this basis, the regulators have cited a number of institutions for violating the disclosure requirements. Further, the regulators are in the process of implementing revised examination procedures for Regulation DD compliance that will include assessing the extent to which depository institutions follow requirements governing the advertisement of overdraft protection programs. This will be particularly important given that fees associated with overdrafts were among the highest of the types of fees for which we obtained data. However, even under the revised procedures, the regulators' examinations do not determine whether consumers actually receive required disclosure documents before opening an account. While the results of our visits to 185 branches of depository institutions cannot be generalized to all institutions, they raise some concern that consumers may find it difficult to obtain, upon request, important disclosure documents prior to opening an account. We were unable to obtain detailed information about fees and account terms and conditions at over one-fifth of the branches we visited and, in many cases, we found inconsistencies among branches of the same depository institution. Because the federal banking regulators, in their compliance examinations, do not assess the extent to which consumers actually receive required disclosure documents before opening an account, they are not in a position to know how widespread this problem may be among the institutions they supervise, or the reasons for it. Incorporating into their oversight a means of assessing the extent to which consumers can actually obtain information to make meaningful comparisons among institutions, and taking any needed steps to assure the continued availability of such information, would further this goal of TISA. To help ensure that consumers can make meaningful comparisons between depository institutions, we recommend that the Chairman, Federal Deposit Insurance Corporation; Chairman, Board of Governors of the Federal Reserve System; Chairman, National Credit Union Administration; Comptroller of the Currency, Office of the Comptroller of the Currency; and Director, Office of Thrift Supervision assess the extent to which consumers receive specific disclosure documents on fees and account terms and conditions associated with demand and deposit accounts prior to opening an account, and incorporate steps as needed into their oversight of institutions' compliance with TISA to assure that disclosures continue to be made available. We requested and received written comments on a draft of this report from FDIC, the Federal Reserve, NCUA, OCC, and OTS; the comments are presented in appendixes V through IX. We also received technical comments from FDIC and the Federal Reserve, which we have incorporated in this report as appropriate.
In their written responses, all five banking regulators indicated agreement with our report and stated that they would take action in response to our recommendation. For example, OCC stated that it would incorporate steps, as needed, into its oversight of institutions' compliance with TISA to assure that disclosures continue to be made available. The Federal Reserve and NCUA specifically mentioned the need to revise, improve, or strengthen the current interagency Regulation DD examination procedures. All five agencies indicated that they plan to address this issue on an interagency basis. In addition, FDIC stated that it would provide further instructions to state nonmember banks about their ongoing responsibility to provide accurate disclosures to consumers upon request and would also provide further instructions to its examiners on the importance of this requirement; NCUA stated that it would send a letter to credit unions reiterating the disclosure requirements for fees and account terms; the Federal Reserve stated that it would expand its industry outreach activities to facilitate compliance and promote awareness of Regulation DD disclosure requirements. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Ranking Member, Subcommittee on Financial Institutions and Consumer Credit, Committee on Financial Services, House of Representatives, and other interested congressional committees and the heads of the Federal Reserve, FDIC, NCUA, OCC, and OTS. We also will make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-8678 or woodd@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix X. Our report objectives were to determine (1) the trends in the types and amounts of fees associated with checking and deposit accounts since 2000; (2) how federal and selected state banking regulators address checking and deposit account fees in their oversight of depository institutions; and (3) the extent to which consumers are able to obtain account terms and conditions and disclosures of fees, including information about specific transactions and bank practices that determine when such fees are assessed, upon request prior to opening an account. To provide information on the average amounts of various checking and savings account fees, we purchased data from two market research firms that specialize in the financial services industry: Moebs $ervices and Informa Research Services. Moebs $ervices provided us with an electronic file that contained data from 2000 to 2007 on the following fees: annual automated teller machine (ATM) fees, overdraft transfer fees from a line of credit, overdraft transfer fees from a deposit account, return deposited item fees, stop payment order fees, and debit card annual fees. Moebs $ervices collected its data through telephone surveys with financial service personnel at each sampled institution. In the surveys, callers used a "mystery shopping" approach and requested rates and fees while posing as potential customers.
The surveys were completed in June for each of the years we requested (the 2006 survey was conducted in December), and we obtained data from the following numbers of institutions (table 3). The statistical design of the survey was developed for Moebs $ervices by Professor George Easton of Emory University. The design consisted of a stratified random sample by (1) institution type (banks and thrifts combined, and credit unions), (2) institution size (as shown in table 4), and (3) regions of the country defined by metropolitan statistical area. We took the data we obtained from Moebs $ervices and computed average fees for institutions overall, as well as for institutions by type, size, and region. We interviewed Moebs $ervices representatives to understand their methodology for collecting the data and ensuring their integrity. In addition, we conducted reasonableness checks on the data we received and identified any missing, erroneous, or outlying data. We also worked with Moebs $ervices representatives to ensure our analysis of their data was correct. Finally, for the years 2000 through 2002, we compared the average fee amounts we calculated with averages the Board of Governors of the Federal Reserve System (Federal Reserve) had calculated using Moebs $ervices data for its "Annual Report to the Congress on Retail Fees and Services of Depository Institutions." We found our averages to be comparable to those derived by the Federal Reserve and determined that the Moebs $ervices data were reliable for the purposes of this report. Informa Research Services also provided us with an electronic file that included summary-level fee data from 2000 to 2006. The data included information for the same fees that Moebs $ervices had provided, but also included the following fees: monthly fees for checking and savings accounts; insufficient funds and overdraft tiered fees; check enclosure and imaging fees; foreign ATM balance inquiry fees; and foreign ATM denied transaction fees. In addition to fee data, Informa Research Services also provided us with data on the minimum balances required to open an account, the monthly balances needed to waive fees, and the maximum number of overdrafts or insufficient funds fees that an institution would charge per day. Informa Research Services collected its data by gathering the proprietary fee statements of the financial institutions, as well as making anonymous in-branch, telephone, and Web site inquiries for a variety of bank fees. Informa Research Services also received information directly from its contacts at the financial institutions. The data are not statistically representative of the entire population of depository institutions in the country because the company collects fee data for particular institutions in specific geographical markets so that these institutions can compare their fees against their competitors. That is, surveyed institutions are self-selected into the sample, or are selected at the request of subscribers. To the extent that institutions selected in this manner differ from those that are not, results of the survey would not accurately reflect the industry as a whole. Informa Research Services collects data on over 1,500 institutions, including a mix of banks, thrifts, credit unions, and Internet-only banks. The institutions from which it collects data tend to be large institutions that have a large percentage of the deposits in a particular market.
Additionally, the company has access to individuals and information from the 100 largest commercial banks. Table 5 shows the mix of institutions for which Informa Research Services collected fee type data from 2000–2006. The summary level data Informa Research Services provided us for each data element included the average amount, the standard deviation, the minimum and maximum values, and the number of institutions for which data were available to calculate the averages. Informa Research Services also provided this summary level data by the same categories of institution type and size as the Moebs $ervices data. In addition, Informa Research Services provided us with data for nine specific geographic areas: California, Eastern United States, Florida, Michigan, Midwestern United States, New York, Southern United States, Texas, and Western United States. We interviewed Informa Research Services representatives to gain an understanding of their methodology for collecting the data and the processes they had in place to ensure the integrity of the data. We also conducted reasonableness checks on the data and identified any missing, erroneous, or outlying data and worked with Informa Research Services representatives to correct any mistakes we found. As we did with the Moebs $ervices data, we compared the average fee amounts Informa Research Services had calculated for selected fees for 2000, 2001, and 2002 with the Federal Reserve’s “Annual Report to the Congress on Retail Fees and Services of Depository Institutions.” We found the averages to be comparable to those derived by the Federal Reserve and determined that the Informa Research Services data were sufficiently reliable for this report. To evaluate bank fee trends, for both the Moebs $ervices and Informa Research Services data, we adjusted the numbers for inflation to remove the effect of changes in prices. The inflation adjusted estimates used a base year of 2006 and Consumer Price Index calendar year values as the deflator. To determine the extent to which bank fees are contributing to depository institutions’ revenue, we obtained data from the quarterly financial information (call reports) filed by depository institutions and maintained by the Federal Deposit Insurance Corporation (FDIC). From this data, we analyzed interest income, noninterest income, and service charges on deposit accounts for commercial banks and thrifts from 2000 to 2006. We analyzed the data for all institutions, as well as by institution type (banks versus thrifts) and institution size (assets greater than $1 billion, assets between $100 million and $1 billion, and assets less than $100 million). Similarly, for credit unions, we reviewed the National Credit Union Administration’s (NCUA) “Financial Performance Reports,” which provided quarterly data for interest income, noninterest income, and fee income for all federally insured credit unions from 2000 to 2006. Based on past work, we have found the quarterly financial data maintained by FDIC and NCUA to be sufficiently reliable for the purposes of our reports. To determine the effect, if any, of changing consumer payment preferences and bank processing practices on the types and frequency of account fees incurred by consumers, we reviewed the 2004 and 2007 Federal Reserve payment studies on noncash payment trends in the United States. We also reviewed data on payment trends in debit and credit card transactions from the EFT Data Book. 
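For reference, the short sketch below illustrates the CPI-deflator calculation described earlier in this appendix, restating a nominal fee in constant 2006 dollars; the CPI levels and sample fee shown are placeholders rather than the actual index values or fee amounts used in our analysis.

```python
# Minimal sketch of the CPI-deflator adjustment used to express fees in constant
# 2006 dollars. The CPI levels and the sample fee below are placeholders, not the
# actual index values or fee amounts from the analysis.
CPI = {2000: 172.2, 2003: 184.0, 2006: 201.6}  # hypothetical calendar-year CPI levels
BASE_YEAR = 2006

def to_2006_dollars(nominal_fee, year):
    """Restate a nominal fee in 2006 dollars using the CPI as the deflator."""
    return nominal_fee * CPI[BASE_YEAR] / CPI[year]

# A $20.00 fee charged in 2000 is roughly $23.41 in 2006 dollars under these CPI values.
print(round(to_2006_dollars(20.00, 2000), 2))
```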
In addition, we spoke with multiple industry experts, including bank representatives and consumer group representatives, such as the Consumer Federation of America, the Center for Responsible Lending, and the U.S. Public Interest Research Group, to understand what practices banks employ to process transactions on deposit accounts, how these practices have changed over the past few years, and the potential impact these practices have had on consumers incurring fees, such as overdraft fees. Furthermore, we reviewed studies that analyzed electronic payment preferences and identified one study that used transaction-level data to determine how payment preferences influence overdraft fees. To determine what data are available on the characteristics of consumers who pay bank fees, we reviewed two studies on the topic: one by an academic researcher and another by a consumer group. The academic study used transaction-level account data and regression models to estimate the probability of overdrawing an account. The data included customer information and all transactions with associated balances from May through August 2003 from one small Midwestern bank. The second study used data collected by telephone surveys of 3,310 adults who were 18 years or older, conducted between October 2005 and January 2006. Both studies suffer from limitations that preclude making inferences to the broader populations of banking customers who pay fees, but they represent the only relevant research at this point and are suggestive of the characteristics of these customers. We also reviewed documentation on and interviewed officials at the FDIC about its ongoing study of overdraft protection programs, including the phase of the study in which it will review transaction-level data. Finally, we interviewed two academic researchers and representatives of eight consumer groups; five depository institutions; two software vendors; and four industry trade associations, including the American Bankers Association, Independent Community Bankers of America, America's Community Bankers, and the Credit Union National Association, to determine what research had been done on the topic. To assess the extent to which federal and selected state banking regulators review fees associated with checking and deposit accounts as part of their oversight of depository institutions, we obtained and reviewed examination manuals and guidance used by the five federal banking regulators—Federal Reserve, FDIC, NCUA, the Office of the Comptroller of the Currency (OCC), and the Office of Thrift Supervision (OTS)—and conducted interviews with agency officials. We also obtained and reviewed a sample of 25 compliance examination reports on examinations completed during 2006 to identify how the federal regulators carried out examinations for compliance with Regulations DD and E. We selected five examination reports from each regulator based on an institution's asset size and geographic dispersion, in an attempt to capture a variety of examinations. The asset size of the institutions ranged from $2 million to $1.2 trillion. In addition, we obtained information on the regulatory efforts of six states. We selected the states based on recommendations from the Conference of State Bank Supervisors, the New York State Banking Department, and the Massachusetts Division of Banks, and to achieve geographical dispersion. The selected states were California, Connecticut, Illinois, Maine, Massachusetts, and New York.
We reviewed compliance examination manuals and guidance used by the six state regulators and asked specific questions to each state’s appropriate banking officials. To determine the number of complaints that the regulators received on checking and savings accounts, in addition to complaints about fees and disclosures, we requested complaint data, including data on resolutions, for calendar years 2002 through 2006. For the complaint data, we obtained data on the banking products or services involved, the complaint category and, in some cases, the citation of the regulation. While our estimates of the proportions of complaints related to fees depend on how the banking regulators coded the subjects of the complaint they received, and how we combined those related to fees, we judge any possible variations to be slight. For the complaint resolution data, we obtained information about the resolution (outcomes) of complaints and the banking products or services involved. The data came from five different databases: (1) OCC’s REMEDY database, (2) the Federal Reserve’s Complaint Analysis Evaluation System and Reports (CAESAR), (3) FDIC’s Specialized Tracking and Reporting System (STARS), (4) OTS’ Consumer Complaint System (CCS), and (5) NCUA’s regionally based system on complaints. We obtained data from OCC, the Federal Reserve, FDIC, OTS, and NCUA that covered calendar years 2002 through 2006. For purposes of this report, we used data from the regulators’ consumer complaint databases to describe the number of complaints that each regulator received related to fees and disclosures for checking and savings accounts, as well as complaints received by four major product categories—checking accounts, savings accounts, mortgage loans, and credit cards. With respect to the data on complaint resolutions, we used the regulators’ data to describe the number of cases each regulator handled, what products consumers complained about, and how the regulators resolved the complaints. To assess the reliability of data from the five databases, we reviewed relevant documentation and interviewed agency officials. We also had the agencies produce the queries or data extracts they used to generate the data we requested. Also, we reviewed the related queries, data extracts, and the output for logical consistency. We determined these data to be sufficiently reliable for use in our report. Finally, we obtained data from each of the federal regulators on violations they cited against institutions for noncompliance with Regulation DD and Regulation E provisions. Specifically, we asked for data on the total number of violations that each regulator cited for all examined provisions of Regulations DD and E during 2002 to 2006, as well as for data on violations of selected disclosure provisions. The Regulation DD sections that we requested and obtained data on were: §§ 230.3, 230.4, 230.8, and 230.11. The Regulation E sections that we requested and obtained data on were: §§ 205.4 and 205.7. We compiled the data and summarized the total number of violations found for all of the federal regulators during 2002 to 2006. We also obtained data from 2002 through 2006 on the total number of enforcement actions that each regulator took against institutions for violations of all provisions of Regulations DD and E and the selected disclosure provisions. To assess the reliability of data from the five databases, we reviewed relevant documentation and interviewed agency officials. 
We also had the agencies produce the queries or data extracts they used to generate the data we requested. Also, we reviewed the related queries, data extracts, and the output for logical consistency. We determined these data to be sufficiently reliable for use in our report. Finally, we also requested information from each state regulator on consumer complaint, violation, and enforcement data pertaining to bank fees and disclosures, state-specific bank examination processes, and any additional state laws pertaining to bank fees and disclosures. We did not receive all our requested data because some states' systems did not capture complaint, violation, or enforcement data related to bank fees and disclosures. For those states where information was available, the numbers of complaints and violations were minimal and not consistently reported among states. We therefore attributed the limited information on complaints, violations, and enforcement actions to state officials and did not assess the reliability of these data. To assess the extent to which consumers, upon request prior to opening a checking and savings account, are provided disclosures of fees and the conditions under which these fees are assessed, GAO employees visited 103 bank branches, 36 thrift branches, and 46 credit union branches of 154 depository institutions throughout the nation. We selected these institutions to ensure a mix of institution type (bank, thrift, and credit union) and size; however, the results cannot be generalized to all institutions. We reviewed the federal Truth in Savings Act (TISA) and Regulation DD, which implements TISA, to determine what disclosure documents depository institutions were required to provide to new and potential customers. Using a standardized, prescribed script, GAO employees posed as consumers and specifically requested a comprehensive fee schedule and the terms and conditions associated with checking and savings accounts. The branches were located in the following cities: Atlanta, Georgia; Boston, Massachusetts; Chicago, Illinois; Dallas, Texas; Dayton, Ohio; Denver, Colorado; Huntsville, Alabama; Los Angeles, California; Norfolk, Virginia; San Francisco, California; Seattle, Washington; and Washington, D.C. The GAO employees visiting these branches also reviewed the institutions' Web sites to determine if these sites had comprehensive fee schedules and terms and conditions associated with checking and savings accounts. After both visiting branches and reviewing Web sites, GAO employees used standardized forms and recorded whether or not they were able to obtain the specific documents (examples were provided) and whether or not they were able to locate specific information on each institution's Web site. To obtain information on issues related to providing consumers with real-time account information during debit card transactions at point-of-sale terminals and automated teller machines (see app. II), we reviewed available literature from the Federal Reserve, including a 2004 report on the issues in providing consumers point-of-sale debit card fees during a transaction. We also reviewed other sources that described the payment processing system related to debit card transactions at merchants and ATMs. In addition, we conducted structured interviews with officials from five banks, two card associations, three third-party processors, four bank industry associations, and one merchant trade organization, and summarized our findings.
We conducted this performance audit in Atlanta, Georgia; Boston, Massachusetts; Chicago, Illinois; Dallas, Texas; Dayton, Ohio; Denver, Colorado; Huntsville, Alabama; Los Angeles, California; Norfolk, Virginia; San Francisco, California; Seattle, Washington; and Washington, D.C., from January 2007 to January 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. According to debit card industry representatives we contacted, providing consumers with their "real-time" account balance information during a debit card transaction is technically feasible but presents a number of issues that would need resolution. These issues include the costs associated with upgrading merchant terminals and software to allow for consumers' account balances to be displayed at the terminals; the potential difficulty of determining a consumer's real-time account balance, given the different types of transactions that occur throughout the day; concerns over privacy and security raised by account balances potentially being visible to others besides account holders; and the increased time it would take to complete a transaction at merchant locations. A consumer using a debit card to make a purchase at a merchant's checkout counter (referred to as a point-of-sale debit transaction) has two options for completing the transaction: (1) entering a personal identification number (PIN) or (2) signing for the transaction (similar to a credit card transaction). The consumer is typically prompted at the point-of-sale terminal to choose either "debit" (in which case the transaction is referred to as "PIN-based") or "credit" (in which case the transaction is referred to as "signature-based"). Regardless of which option the consumer chooses, the transaction is a debit card transaction. PIN- and signature-based debit card transactions differ not only with respect to the input required from the consumer but also with respect to the debit networks over which the transactions are carried and the number and timing of steps involved in carrying out the transactions. Similarly, transactions initiated at ATMs can differ in how they are processed. Customers can make withdrawals and deposits not only at ATMs owned by their card-issuing institutions but also at ATMs owned by other depository institutions or entities. An ATM card is typically a dual ATM/debit card that can be used for both ATM and debit card transactions, both PIN-based and signature-based, at participating retailers. PIN-based debit card transactions are referred to as "single message" because the authorization—the approval to complete the transaction—and the settlement—the process of transmitting and reconciling payment orders—take place using a single electronic message. As shown in figure 7, PIN-based debit card transactions involve a number of steps between the merchant's terminal and the consumer's deposit account. Generally, at the locations of large national merchants, after the consumer has swiped the card, a message about the transaction is transmitted directly to the electronic funds transfer (EFT) network.
(For other merchants, the transaction reaches the EFT network via the merchant's processor, also known as the merchant acquirer.) The message identifies the consumer's institution and account, the merchant, and the dollar amount of the purchase. The EFT network routes the transaction to the card issuer (or to the card issuer's processor, which then passes it to the card issuer). The card issuer—usually the consumer's depository institution—receives the message and uses the identifying information to verify that the account is valid, that the card has not been reported lost or stolen, and that either there are sufficient funds available in the account or the account is covered by an overdraft protection program (that is, the issuer covers the transaction even if there are insufficient funds in the account, which is also known as bounce protection). If these conditions are met, the issuer authorizes the debit transaction. Specifically, the issuer then debits the consumer's account and sends an authorization message to the EFT network, which sends it to the merchant's acquirer, which forwards the authorization to the merchant's terminal. The entire sequence typically occurs in a matter of seconds. Signature-based debit card transactions involve two electronic messages: one to authorize the transaction and another to settle the transaction between the merchant and the card issuer, at which time the consumer's account is debited. To conduct a signature-based debit card transaction, the customer typically has a VISA- or MasterCard-branded debit card linked to a deposit account. As shown in figure 8, after the card is swiped, a message about the transaction travels directly (or indirectly, through the merchant's acquirer) to the VISA or MasterCard network, from which the transaction proceeds directly (or indirectly, through the card-issuing institution's processor) to the card-issuing institution. As in a PIN-based debit card transaction, if the issuer verifies the relevant information, it authorizes the transaction and routes it back through the VISA or MasterCard network to the merchant's acquirer with the authorization. The merchant acquirer then forwards the authorization to the merchant's terminal, and the consumer signs the receipt. The settlement of the transaction between the merchant and card issuer (and the actual debiting of the consumer's account) occurs after a second message is sent from the merchant to the issuer, usually at the end of the day. The steps involved in ATM transactions depend upon whether a consumer is using an ATM owned by the issuer of his or her card (typically referred to as a "proprietary" ATM) or an ATM owned by a depository institution or entity other than the card-issuing institution (typically referred to in the industry as a "foreign ATM"). A foreign ATM transaction is processed in essentially the same manner as a PIN-based debit card transaction, with one exception: the ATM operator (or its processor) routes the transaction to the EFT network, which then routes it to the card issuer. The card-issuing institution authorizes the transaction via the EFT or debit card networks. In contrast, when a consumer uses a proprietary ATM, the transaction stays within the issuer's network and does not require the use of an external EFT network (fig. 9).
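To make the sequence of issuer-side checks described above more concrete, the following simplified sketch models the authorization decision for a single-message (PIN-based) transaction; the account structure, field names, and rules are illustrative assumptions, not any institution's actual processing logic.

```python
# Simplified, illustrative sketch of the issuer-side checks described above for a
# PIN-based (single-message) debit transaction. The account structure and rules are
# assumptions for illustration only.
def authorize_pin_debit(account, amount, card_reported_lost=False):
    """Return (approved, balance). In a single-message transaction the account is
    debited at authorization; a signature-based transaction would settle later."""
    if card_reported_lost or not account.get("valid", False):
        return False, account["balance"]
    covered = account["balance"] >= amount or account.get("overdraft_protection", False)
    if not covered:
        return False, account["balance"]
    account["balance"] -= amount  # debit immediately as part of the single message
    return True, account["balance"]

acct = {"valid": True, "balance": 50.00, "overdraft_protection": True}
print(authorize_pin_debit(acct, 65.00))  # approved via overdraft coverage; balance becomes -15.00
```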
Card issuers that are depository institutions—such as banks—may have the capability of providing a notice to their customers at a proprietary ATM that a withdrawal will result in the account being overdrawn and then allowing the customer to decide whether or not to proceed with the transaction. Officials from one of the banks that we spoke with stated that they employed this capability at their proprietary ATMs. As of March 2007, there were over 5 million point-of-sale terminals in the United States. According to industry representatives, most point-of-sale terminals are not currently equipped to display a consumer's checking account balance and, in these cases, merchants would either need to replace the terminal entirely or upgrade the software in the terminal. Industry representatives were hesitant to estimate the costs associated with this because the number of terminals that would need to be replaced versus those that would only need a software upgrade is not currently known. The industry representatives explained that the cost of upgrading the point-of-sale terminals to display account balance information would be primarily borne by merchants. In addition to upgrading point-of-sale terminals, industry representatives identified the following other costs that would be incurred:

Upgrading software used by the EFT networks and depository institutions in order to transmit balance information from the card-issuing institution to the merchant. As described above, currently a debit card transaction is authorized by verifying a consumer's checking account balance and sending back an approval or denial message—which does not include account balance information.

Increasing the communications infrastructure of the EFT networks to allow for additional message traffic, namely consumers' acceptances or declinations of a transaction once they have viewed their account balances. These messages would constitute a second message from the point-of-sale terminal to the card-issuing institution for each transaction. An associated cost with this process would be training employees who work at the terminals how to handle these debit card transactions and the cost of additional time to accomplish transactions, which we discuss below.

With respect to providing account balance information at foreign ATMs, one industry representative explained that this would require all entities involved in ATM transactions (banks, ATM operators, ATM networks, and the Federal Reserve) to agree on a common message format to display balances, as well as a new transaction set for ATMs that would provide consumers with the option not to proceed with the transaction once they saw their balances. Two industry representatives we spoke with said that it could take a number of years for all of the entities involved in ATM transactions to agree on a standard format. Debit card industry representatives explained that the account balance that is used to authorize a debit card transaction—and which would be displayed to the consumer—may not necessarily reflect the true balance in the consumer's checking account at the time of the transaction. One of the reasons for this is that, while a depository institution may attempt to get as close to a real-time balance as possible, it may be unable to capture all of the transactions associated with the account as they occur throughout the day.
For example, one depository institution official told us that the institution updates its customers' account balances throughout each day; it refers to these updated balances as a customer's "available balance." This available balance is updated throughout the day to reflect debit card transactions at point-of-sale terminals and ATMs, as well as other transactions such as those that occur online. This balance, however, might not take into account checks that will be clearing that day, deposits made at a foreign ATM, or some transactions that would come in via the Automated Clearing House (ACH). An example of the latter is a transaction in which a consumer electronically transfers funds from a mutual fund to a checking account. The net result of the inability to provide consumers with a real-time balance is that the consumer may be presented with a balance that is not reflective of all the transactions that will be processed as of that day. Another reason why a depository institution may be unable to provide consumers with a real-time balance is that the institution may not update balances throughout the day. Most institutions "batch process" transactions at night and then post the revised customer account balances. The following day, the institutions update the customer's account balance for debit card authorizations and certain other transactions that occur throughout the day. However, according to a card association, some small banks only post the account balance from the batch process to the customer's account and do not update account balances as transactions occur throughout the day. Finally, if a depository institution uses a third-party processor to authorize debit card transactions, the balance that the third-party processor uses may also not reflect all the transactions that occur throughout the day. For example, transactions involving a bank teller, such as deposits or withdrawals, do not require a third-party processor to authorize transactions; thus, the processor would not be able to update its balance to reflect these transactions. One of the major concerns raised by the debit card industry representatives we spoke with regarding providing consumers with real-time balances at point-of-sale terminals was a concern over privacy. Unlike ATM transactions, which are transactions between a consumer and the machine and in which consumers tend to be cognizant of the need for privacy, point-of-sale terminals are generally more visible to others, according to these representatives. For example, the balance on a point-of-sale terminal could be visible to the cashier and customers in line at a merchant location. In addition, at restaurants, the waiter or other staff could view this information out of sight of the consumer. The industry representatives stated that most consumers would likely be uncomfortable having their account balance information visible to others. Another related concern raised by these representatives was one of security, in that cashiers or possibly other customers might be able to view a consumer's account balance. Thus, the industry representatives stated that providing balances at a point-of-sale terminal could increase the risk of fraud. One industry representative told us that providing a balance at a point-of-sale terminal would be a departure from current privacy and security approaches with point-of-sale transactions.
Industry representatives explained that allowing consumers to accept or decline a transaction once they have viewed their balance would likely increase the time it takes to get customers through a check-out line. According to a retail merchant’s trade association that we contacted, merchants depend on moving customers quickly through check-out lines. The retail merchants’ trade association stated that adding a step in the check-out process would add time, resulting in lower sales volume per unit of time for each cashier, and potentially greater costs associated with adding cashiers to maintain the same volume of transactions. Industry officials also stated that there were some circumstances during a point-of-sale transaction for which providing consumers with real-time balances would not be possible or would be problematic. For example, during “stand-in” situations, such as when a card issuer’s systems are offline for maintenance, EFT networks review and authorize (or deny) transactions in accordance with instructions from the issuer. The networks would not have real-time access to account balance information when the issuer’s system is down. Another example would be merchants, such as fast food outlets, who perform quick swipes of debit cards for low dollar transactions. At the time of the swipe, the merchant has not actually routed the transaction to the card issuer and thus has not yet accessed the consumer’s account balance. In these cases, the merchant has accepted the risk of not being paid if there are insufficient funds in the account in order to move customers through lines more quickly. Finally, one industry representative questioned how the industry would be able to provide consumers with real-time balances if consumers make debit card purchases online or over the telephone. There are other options short of providing real-time account balances at point-of-sale terminals and ATMs that might assist in warning consumers of a potential overdraft, but each of these options has challenges and limitations. For example, one option involves sending a warning with the authorization message instead of a real-time balance. The warning would indicate that the transaction could result in an overdraft. As indicated above, one of the banks we met with currently provides a similar warning on its proprietary ATMs. The consumer would then have the option to accept or deny the transaction. This option would require two messages to complete a debit card transaction rather than one message. Further, under this option, depository institutions would still be unable to base their authorization decisions on a real-time balance because of the various types of transactions that may occur in a day, and thus no warning message would be triggered—yet once the institution reconciles all accounts, a consumer could be faced with an overdraft fee. This option would also likely slow down transactions and raise costs for merchants. However, unlike providing real-time account balance information at a point-of-sale terminal, this option would not present privacy or security concerns because the balance in the consumer’s account would not be transmitted. Another option short of providing consumers with real-time account balance information is printing a consumer’s available balance on a receipt after a transaction has been completed. This is currently possible when consumers use their card issuers’ proprietary ATMs and some foreign ATMs, according to industry representatives. 
Under this option, the consumer would not receive a warning that the transaction could subject them to an overdraft, and they would not have a choice to accept or decline the transaction. Further, under this option, the consumer would not be provided his or her account balance until after the transaction was completed. However, once consumers obtained their balance, they could change their spending behavior to avoid a fee on subsequent transactions. This option would entail certain costs for upgrading terminals or software in order to print the consumer’s real-time balance on the receipt, as well as costs of upgrading software to transmit the real-time balance from the card-issuing institution to the merchant terminal. The option would not address an institution’s ability to provide an actual real-time balance and would introduce privacy and security concerns because if the receipt were inadvertently dropped, others could view the balance. However, this option would not slow down the time it takes to complete a transaction because the consumer would not be given the option of accepting or declining a transaction. Finally, industry representatives noted that consumers currently have a number of ways to check their account balances (e.g., by phone and Internet), which might help them avoid overdraft fees. According to Federal Reserve officials, this would require “near-time” processing and a system that synchronizes the balance information reported through the phone and Internet banking systems with the balance information that is transmitted by the institution to the ATM/EFT network. Three of the four large banks we spoke with stated that their customers currently have the ability to sign up for a feature in which the bank will send a message to the consumers’ e-mail accounts or cell phones—“E-alerts”—when their balances reach a designated “threshold” amount. Under this option, consumers receiving an E-alert could change their spending patterns to avoid incurring an overdraft situation and fees. Table 6 compares the E-alert option with other potential options for warning consumers that they may incur an overdraft fee and the associated issues surrounding the particular option. Using the methodology we noted earlier, we analyzed select bank fee data obtained from two firms, Moebs $ervices and Informa Research Services. Some bank fees have increased since 2000, while a few, such as monthly fees, have decreased. As noted earlier in the report, we analyzed data in aggregate for all depository institutions and also by institution type and size. According to data we obtained, banks and thrifts charged more than credit unions for almost all select fees analyzed, and larger institutions charged higher fees than midsized and smaller institutions. We found slight variations in fees charged by region, with certain regions charging less than the national average for some select bank fees analyzed. For example, California and the Western United States consistently charged less than the national average for almost all select fees analyzed according to the Informa Research Services data. For both the Moebs $ervices and Informa Research Services data, banks and thrifts were combined into one institution type category, with credit unions as the other institution type. 
For both sets of data, the following asset size categories were used: small institutions had assets less than $100 million, midsized institutions had assets between $100 million and $1 billion, and large institutions had assets greater than $1 billion. For the Moebs $ervices data, we computed average amounts ourselves, but statistics were provided to us for the Informa Research Services data. We identified all instances in which the information presented was based on data provided by fewer than 30 institutions and did not include those instances in this report because averages based on a small number of institutions may be unreliable. The information presented for the Moebs $ervices data is statistically representative of the entire banking and credit union industry, but the Informa Research Services data are not. For additional information on the select fees analyzed and the number of institutions surveyed, see appendix I. Table 7 provides a detailed comparison of the Moebs $ervices data for all institutions for select bank fees for the 8-year period, 2000–2007. Table 8 provides a detailed comparison of the Informa Research Services data for all institutions for select bank fees for the 7-year period, 2000–2006. The data are presented for a variety of types of checking and savings accounts. In analyzing the resolution of complaints for fees and disclosures associated with checking and savings accounts, we found similar outcomes among complaints received by the Federal Reserve, FDIC, OCC, and OTS. As shown in figure 10, these federal regulators reported resolving complaints in the following order of decreasing frequency:

1. Finding that the bank was correct. This included instances in which the regulator determined that the financial institution did not err in how it administered its products and/or services to the consumer.

2. Providing the consumer with additional information without any determination of error. This included instances in which the regulator told the consumer that the dispute was better handled by a court or where the regulator determined that rather than wrongdoing there was miscommunication between the bank and its customer.

3. Other, including instances in which the consumer did not provide information needed by the regulator or withdrew the complaint.

4. Determining that the bank was in error. This included instances in which the regulator determined that the bank erred in administering its products and/or services to the consumer (errors could include violations of regulations).

5. Complaint in litigation, in which the regulator tabled the complaint because it was involved in legal proceedings. This includes instances in which the regulator cannot intervene because the issues raised in the complaint are the subject of either past, current, or pending litigation.

In addition to the individual named above, Harry Medina, Assistant Director; Lisa Bell; Emily Chalmers; Beth Faraguna; Cynthia Grant; Stuart Kaufman; John Martin; Marc Molino; José R. Peña; Carl Ramirez; Linda Rego; and Michelle Zapata made key contributions to this report.

In 2006, consumers paid over $36 billion in fees associated with checking and savings accounts, raising questions about consumers' awareness of their accounts' terms and conditions.
GAO was asked to review (1) trends in the types and amounts of checking and deposit account fees since 2000, (2) how federal banking regulators address such fees in their oversight of depository institutions, and (3) the extent to which consumers are able to obtain account terms and conditions and disclosures of fees upon request prior to opening an account. GAO analyzed fee data from private data vendors, publicly available financial data, and information from federal regulators; reviewed federal laws and regulations; and used direct observation techniques at depository institutions nationwide. Data from private vendors indicate that average fees for insufficient funds, overdrafts, returns of deposited items, and stop payment orders have risen by 10 percent or more since 2000, while others, such as monthly account maintenance fees, have declined. During this period, the portion of depository institutions' income derived from noninterest sources--including fees on savings and checking accounts--varied but increased overall from 24 percent to 27 percent. Changes in both consumer behavior, such as making more payments electronically, and practices of depository institutions are likely influencing trends in fees, but their exact effects are unknown. Federal banking regulators address fees associated with checking and savings accounts primarily by examining depository institutions' compliance with requirements, under the Truth in Savings Act (TISA) and its implementing regulations, to disclose fee information so that consumers can compare institutions. They also review customer complaints but do not assess whether fees are reasonable. The regulators received relatively fewer consumer complaints about fees and related disclosures--less than 5 percent of all complaints from 2002 to 2006--than about other bank products. During the same period, they cited 1,674 violations of fee-related disclosure regulations--about 335 annually among the 17,000 institutions they oversee. GAO's visits to 185 branches of 154 depository institutions suggest that, despite the disclosure requirements, consumers may find it difficult to obtain information about checking and savings account fees. GAO staff posing as customers were unable to obtain detailed fee information and account terms and conditions at over one-fifth of visited branches and also could not find this information on many institutions' Web sites. Federal regulators examine institutions' written policies, procedures, and documents but do not determine whether consumers actually receive disclosure documents. While consumers may consider factors besides costs when shopping for accounts, an inability to obtain information about terms, conditions, and fees hinders their ability to compare institutions.
The Federal Protective Service's (FPS) mission is to protect the buildings, grounds, and property that are under the control and custody of the General Services Administration (GSA), as well as the persons on the property; to enforce federal laws and regulations; and to investigate offenses against these buildings and persons. FPS conducts its mission by providing security services through two types of activities: (1) physical security activities—conducting facility risk assessments and recommending countermeasures aimed at preventing incidents at facilities—and (2) law enforcement activities—patrolling facilities, responding to incidents, conducting criminal investigations, and exercising arrest authority. In 2007, FPS adopted an inspector-based workforce approach to protecting GSA facilities. Under this approach, FPS eliminated the police officer position and uses about 752 inspectors and special agents to oversee its 15,000 contract guards, provide law enforcement services, conduct building security assessments, and perform other duties as assigned. According to FPS, its 15,000 contract guards are used primarily to monitor facilities through fixed post assignments and access control. FPS's facility security assessments and the corresponding recommended security countermeasures, such as contract security guards, security cameras, bollards, or magnetometers, are based on standards set by the Interagency Security Committee (ISC). The ISC is composed of 49 federal departments and agencies and is responsible for developing and evaluating security standards for federal facilities in the United States. The foundation of the ISC standards is the facility security level (FSL) determination. FSL determinations, which range from level I (the lowest) to level V (the highest), are based on criteria including facility size and population, mission criticality, symbolism, and threat to customer agencies. FPS was created in 1971 and was part of GSA until 2003, when the Homeland Security Act of 2002 transferred it to the Department of Homeland Security's (DHS) Immigration and Customs Enforcement (ICE) component. In October 2009, FPS was transferred within DHS from ICE to the National Protection and Programs Directorate (NPPD). During the period it was within GSA, FPS was under the umbrella of the Federal Building Fund (FBF), an intragovernmental revolving fund that is part of GSA's Public Building Service, and received administrative support services from GSA. During this period, GSA did not know how much it was charging for facility security and, therefore, how much of the facility security costs it was recovering. GSA officials said the security costs were funded by an unknown portion of the facility's appraised rental rate plus an additional charge of approximately $0.06 per square foot. GSA officials said this means there is no way to know whether, and if so by how much, the security costs were subsidized by other revenue in the FBF. When FPS transferred to DHS, no portion of GSA's rental rate was transferred with FPS. FPS's fees did not recover its costs during the transition years from GSA to DHS. FPS is authorized to charge customer agencies fees for security services and to use those collections for all agency operations. All of FPS's security fees are available to FPS, without fiscal year limitation, for necessary expenses related to the protection of federally owned and leased buildings and for FPS operations. Currently, FPS is a fully fee-funded organization. Customer agencies use their appropriated funds to pay FPS security fees, which are credited to FPS as offsetting collections.
These fees are used for FPS's expenses in providing security services and for overhead costs and capital investments. Rather than receiving a direct appropriation each year, FPS receives authority through the appropriation process to obligate and spend its collected fees. Since 2007, FPS has had authority to use all of its collections for necessary expenses. As we will discuss in more detail, FPS charges federal agencies three fees: (1) a basic security fee, (2) the building-specific administrative fee, and (3) the security work authorization (SWA) administrative fee. All customer agencies in GSA-controlled properties pay a basic security fee. Customer agencies in facilities for which FPS recommends specific countermeasures pay the building-specific administrative fee, along with the cost of the countermeasure. Customer agencies that request additional countermeasures pay the SWA administrative fee, along with the cost of the countermeasure. FPS security fees are transferred from customer agencies to FPS's expenditure account per interagency agreements. FPS retains all collected fees, using the basic security fee and the building-specific and SWA administrative fees to cover its operating costs. FPS passes revenue for the contract costs associated with building-specific and SWA countermeasures on to the contractors that provide security equipment or guard services. Figure 1 illustrates the flow of funding for the three fees. FPS's security fees do not fit neatly within any single type of fee or charge, but aspects of various guidance and criteria may apply, including GAO's User Fee Design Guide, Office of Management and Budget (OMB) Circular A-25, and the Chief Financial Officers (CFO) Act of 1990. We developed a User Fee Design Guide that examines the characteristics of user fees and the factors that contribute to a well-designed fee. The manner in which fees are set, collected, used, and reviewed may affect their economic efficiency, equity, revenue adequacy, and administrative burden. The design guide principles are used in evaluating fees that are charged to readily identifiable users or beneficiaries of government services beyond what is normally provided to the general public. A number of these principles can serve as good practices for FPS to consider:

Efficiency: Efficiency refers to requiring identifiable beneficiaries to pay for the costs of services, allowing user fees to simultaneously constrain demand and reveal the value that beneficiaries place on the service.

Equity: Equity refers to everyone paying his or her fair share, though the definition of fair share can have multiple facets. For example, equity could be based on the beneficiary paying for the cost of the service or on the beneficiaries' ability to pay.

Revenue adequacy: Revenue adequacy refers to the extent to which the fee collections cover the intended share of costs. It encompasses variations in collections over time relative to the cost of the program. Revenue adequacy also incorporates the concept of revenue stability, which generally refers to the degree to which short-term fluctuations in economic activity and other factors affect the level of fee collections.

Administrative burden: Administrative burden refers to the cost of administering the fee, including the cost of collection as well as the compliance burden.

OMB's Circular A-25 establishes federal policy regarding user fees, including the scope and types of activities subject to user fees and the basis upon which the fees are set.
It also provides guidance for executive branch agency implementation of fees and the disposition of collections. Circulars No. A-25 and No. A-11 both include guidelines to agencies when determining the amount of user charges to assess. The CFO Act of 1990 requires an agency’s CFO to review, on a biennial basis, the fees, royalties, rents, and other charges for services and things of value and make recommendations on revising those charges to reflect costs incurred. While the CFO Act generally is applied to fees charged by government agencies to nongovernmental entities, the act’s provision requiring biennial review provides a useful leading practice for intragovernmental fee review. When FPS was a part of GSA, GSA did not charge customer agencies security fees to recover the full cost of physical security services. However, since FPS transferred to DHS in 2003, FPS has been required to recover its full costs through security fees. FPS’s initial fee rates were established without a clear understanding of what FPS’s total costs had been and were likely to be. As a result, FPS did not initially collect enough in fees to cover its actual costs. Despite a number of security fee increases and cost-cutting efforts in the 7 years since transferring to DHS, FPS has not conducted a fee review to develop an informed, deliberate fee design. FPS officials said they are not required to report on FPS’s security fees as part of the DHS biennial fee review required by the CFO Act because FPS’s fees are paid by government payers. However, the CFO Act does not specify that fees from government payers are excluded from the biennial reporting requirement. FPS officials said the annual budget formulation process and resulting budget justification serve as their fee review. Further, although FPS is required to annually certify that its collections will be sufficient to maintain a particular FTE level, in 2008 and 2010, the years FPS certified its collections, it did not provide detail about the operations or activity costs. Currently, FPS sets its fee rates for a given year so that its estimated total collections match the agency’s estimated total operating costs. To do this FPS first estimates collections from the basic security fee and then adjusts the building-specific and SWA administrative fees as needed to bridge any difference. Specifically, (1) FPS estimates the agency’s total operating costs for the upcoming fiscal year, then (2) estimates how much it will likely collect in basic security fees. FPS estimates the basic security fee collections by multiplying the current per-square foot basic security fee rate by the square footage it protected in the last quarter of the prior year. To set the building-specific and SWA administrative fee rates, FPS estimates the total cost of the contractor-provided countermeasures in the aggregate based on the prior year’s cost. It then determines the percentage of the total estimated cost of the countermeasures likely to generate enough in collections to cover any difference between its estimated operating costs and its estimated basic fee collections. For example, if FPS estimates it will need $270 million for total agency operating costs and estimates it will get $220 million in collections from the basic security fee, FPS sets the building-specific and SWA administrative fee rates at a percentage rate of the estimated cost of countermeasures that it estimates will raise an additional $50 million. 
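The following minimal sketch works through the fee-setting arithmetic in the example above; the square footage and aggregate countermeasure cost shown are assumptions chosen only so that the computed figures match the $220 million and $50 million in the example, not FPS's actual inputs.

```python
# Minimal sketch of the fee-setting arithmetic described above, using the report's
# illustrative totals. The square footage and aggregate countermeasure cost are
# assumptions chosen only to reproduce the $220 million and $50 million figures.
estimated_operating_costs = 270_000_000        # estimated total agency operating costs
basic_fee_rate = 0.66                          # dollars per square foot
protected_square_feet = 333_333_333            # assumed square footage protected
estimated_countermeasure_costs = 833_000_000   # assumed aggregate contractor-provided costs

basic_fee_collections = basic_fee_rate * protected_square_feet
gap = estimated_operating_costs - basic_fee_collections
admin_fee_rate = gap / estimated_countermeasure_costs

print(f"Estimated basic security fee collections: ${basic_fee_collections / 1e6:.0f} million")
print(f"Gap to be covered by administrative fees: ${gap / 1e6:.0f} million")
print(f"Implied administrative fee rate: {admin_fee_rate:.1%}")
```

Under this approach, any change in estimated operating costs or in expected basic fee collections flows directly into the administrative fee percentage set for the coming year.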
FPS officials said the current fee-setting methodology is simple and shares FPS's costs equitably among its customer agencies in about 9,000 GSA-controlled facilities. FPS was expected to fully cover its costs for the first time when it moved to DHS in 2003, but it did not actually do so until 2007. To cover the difference, FPS raised fee rates, requested additional funds from GSA and DHS, and imposed cost-cutting efforts (see table 1). We have previously reported that these cost-cutting efforts—which included restricting hiring and travel and limiting training and overtime—had adverse effects on the agency, including effects on morale and safety and increased attrition. As shown in table 2, to cover its costs FPS has increased its fees four times from 2004 to 2009. The basic security fee alone has increased more than 100 percent—from $0.30 per square foot in fiscal year 2004 to $0.66 per square foot in fiscal year 2009. Further, the President's Fiscal Year 2012 Budget proposes an additional $0.08 increase in the basic security fee, to $0.74 per square foot (see table 2). In March 2008, FPS increased the basic security fee in midyear and made the increase retroactive to the start of fiscal year 2008. As discussed in more detail later in this report, customer agency officials we spoke with said the midyear change created serious budgeting challenges. FPS's building-specific and SWA administrative fee rates have also changed between 2004 and 2010, declining from 15 percent in 2007 to 6 percent in 2010, reflecting the collections level needed (see table 2). We have previously reported that fee collections should be sufficient to cover the intended portion of program costs over time and that while regular, timely, and substantive fee reviews are critical for any agency, they are especially important for agencies—like FPS—that are mostly or solely fee funded, in order to ensure that fee collections and operating costs remain aligned. Without conducting a fee review to develop an informed, deliberate fee design, fee adjustments are arbitrary and are unlikely to align with actual agency costs.
Officials from several customer agencies we spoke with said their agencies had procured countermeasures, such as closed circuit television systems, through private security companies or GSA rather than through FPS. FPS officials said that this can be problematic because services acquired outside of FPS are frequently incompatible with FPS's central monitoring system. Officials from FPS's customer agencies also described confusion or lack of clarity about the basic security and contract oversight services they receive in return for the fees they pay. A number of officials from customer agencies told us that they wanted to know the cost of FPS's services at their facilities. We have previously reported that effectively communicating with stakeholders may contribute to an improved understanding about how the fees work and what activities they fund. FPS's current activity-based costing (ABC) model cannot break out the costs of the specific FPS inspector-provided services associated with the different security fees. Officials told us that this is because an inspector may conduct multiple types of activities in a single facility visit. For example, FPS officials said that in one facility visit, an FPS inspector might conduct oversight of the contract guards and conduct an interview with an agency to inform the facility security assessment. They said they have an ABC model to help them better understand their costs but do not yet have the data on activity costs needed to populate the model and make activity cost linkages. FPS expects new systems such as the Risk Assessment and Management Program (RAMP), when fully implemented, to provide FPS with the more detailed cost data needed for these kinds of linkages. We believe, however, that even without RAMP, FPS has the data needed to reasonably estimate certain costs. For example, officials told us that they have estimates for the percentage of time inspectors spend on standard activities, such as contract oversight, that could be used to approximate a large portion of FPS's activity costs. FPS does not have a detailed understanding of its activity costs, including information about the cost of providing its security services at federal facilities of different risk levels, and therefore has difficulty justifying the rate of the basic security fee to its customers. The Statement of Federal Financial Accounting Standards Number 4, Managerial Cost Accounting Concepts and Standards for the Federal Government, establishes standards for federal agencies to use in reporting the costs of their products, services, and activities, including providing reliable and timely information on the full cost of federal programs, their activities, and outputs. We have found that having accurate cost information—and understanding the drivers of a program's cost—allows an organization to demonstrate its cost-effectiveness and productivity to stakeholders, link levels of performance with budget expenditures, provide baseline and trend data for stakeholders to compare performance, and provide a basis for focusing an organization's efforts and resources to improve its performance. FPS has developed a workforce analysis plan that is under review by the Secretary of Homeland Security and OMB. Officials said the model shows that fulfilling FPS's mission would require more inspector hours than current resources would cover. Although staff is the primary cost driver for FPS, FPS officials did not tell us how the fee structure or rates would be affected.
According to FPS officials, FPS does not include the cost of planned systemwide capital investments when estimating its costs and setting its fee rates. As a result, FPS is unable to fund all of its capital investment priorities. Instead, FPS relies on any carryover balance to pay for the systemwide investments. Recently, FPS officials said FPS has used its carryover balance to fund information technology investments, such as FPS’s RAMP, and operational initiatives, such as overtime associated with FPS’s Operation Shield Program to measure the effectiveness of FPS countermeasures. The carryover balance comes from two main sources: (1) deobligated funds from contracts for building-specific countermeasures and (2) unspent fee collections from prior years. FPS officials said that as a normal part of its contract management business process it deobligates funds from contracts when the actual building- specific countermeasure contract cost is less than the estimated costs FPS charged to customer agencies. For example, officials said if a contract guard does not report to his post for 4 hours, FPS’s estimated contract cost will be higher than the actual contract cost and FPS will deobligate the unused funds. According to FPS officials, the second source of FPS’s carryover balance is collections from the basic security fee and the building-specific and SWA administrative fees that were not spent in the previous fiscal year. FPS officials told us that its carryover balance has been about $45 million annually in recent years. Since FPS’s carryover balance is not large enough to fund all its systemwide investments, FPS officials said FPS makes investment decisions annually based on the cost of the various investments and the availability of carryover funds. As a result, FPS has had to delay certain critical systemwide capital investments. For example, FPS officials said they have delayed investment in FPS’s $79.5 million radio program in both fiscal years 2009 and 2010, which would have provided FPS officers with radio communication capabilities in many locations across the country where it is currently unavailable. Upgrading FPS’s communication infrastructure, which FPS officials described as in urgent need of replacement, is meant to address potential officer safety issues. Instead, FPS pays for maintenance on the old system, which FPS described as less effective and more expensive in the long term. As we have previously reported, it is inevitable that resource constraints will prevent some worthwhile capital investments from being undertaken. However, decisions about whether any particular capital investment is funded should reflect the priorities of the administration and Congress. Ideally, those capital investments that are funded will be ones with the highest returns or that meet the highest priority mission needs, rather than those that happen to fit the unplanned carryover. Despite evidence that customer agencies receive varying levels of basic security services and therefore do not cost the same amount to protect, FPS does not know the extent to which some customers are subsidizing the activities received by other customers. The level—and therefore the cost—of basic security services FPS provides at each of the 9,000 GSA facilities for which it is responsible varies depending on the facility’s security risk level and its proximity to FPS. 
FPS categorizes buildings into security levels based on its assessment of the building's risk and size—but the basic security fee rate is the same for all facilities even though higher-risk facilities receive more services and cost more to protect. For example, level I facilities typically face less risk because they are generally small storefront-type operations or have limited public contact, such as a Social Security office. A level IV facility may have significant public contact and may contain high-risk law enforcement and intelligence agencies and highly sensitive government records. In some cases, there are known cost variations to providing security services. For example, ISC standards require facility security assessments—an activity associated with the basic security fee—every 5 years for lower risk level facilities and every 3 years for higher risk level facilities. We have previously reported that in some situations FPS staff are stationed hundreds of miles from buildings under FPS's responsibility, with many of these buildings rarely receiving services from FPS staff and relying mostly on local law enforcement agencies for law enforcement services. However, these customer agencies are charged the same basic security fee rates as are buildings in major metropolitan areas where numerous FPS officers and inspectors are stationed and are available to provide security services. We have previously reported that a customer in a federally owned building in a remote location did not know that FPS provided 24-hour alarm-monitoring services, because FPS had not visited the office in over 2 years. In our design guide we noted that there are trade-offs involved in deciding between systemwide and user-specific fees. Effectively setting a fee rate requires determining how much a program costs and determining how to assign program costs among different users. Since systemwide fees—or fees set at an average rate—may be higher or lower than the actual costs of providing services to those users, they can lead to cross-subsidizations among users. User-specific fees—or fees based on the cost of providing the program or service from which that user benefits—ensure that each user pays for the cost of services actually used. However, there are trade-offs between user-specific and systemwide fees. We have previously reported that systemwide fees may promote a policy goal such as helping to support a national system, but user-specific fees may be more desirable if the fee is seen as a way to support individual entities or locations or if there is wide variation in the cost of services among users. FPS officials said the basic security fee and administrative fees were designed to spread costs among agencies in two ways. First, FPS officials said their mission—to ensure safety at GSA-controlled federal facilities—is a national policy goal and therefore the basic security costs are intended to be shared evenly among the facilities. FPS officials compare the basic security fee to a local property tax paid to maintain police services. Second, FPS officials said the building-specific and SWA administrative fees are designed to reflect the increased risk inherent to facilities requiring or requesting additional countermeasures and therefore the administrative fees should subsidize the aggregate cost of basic security services.
As noted, facilities implementing recommended or voluntary security countermeasures pay the basic security fee, the cost of the contractor-provided countermeasures, and a 6 percent building-specific or SWA administrative fee. According to the ISC standards, security level III and IV facilities have nearly all of the countermeasures and therefore also pay nearly all of FPS's building-specific and SWA fees. FPS officials told us their intent is for the administrative fees paid by facilities with recommended or agency-requested countermeasures to help fund the basic security costs at higher-risk facilities. FPS officials said they do this because having security countermeasures reflects an increased security risk at these facilities, and as a result, these facilities are also likely to consume more of FPS's basic security services. In other words, FPS tries to collect more in building-specific and SWA administrative fees from agencies with countermeasures than it costs to manage the contracts for the countermeasures and uses the additional collections for basic security services at higher-risk facilities. However, because FPS does not know what it costs to administer these contracts, it does not know whether the administrative fees are covering those administrative costs and providing the intended subsidy for basic security services. While charging some beneficiaries more or less than the actual service costs may help achieve a particular public policy goal, reliably accounting for the costs and benefits of such a provision is important to ensure that these provisions are achieving the intended results. Other security fee structures may address the current equity concerns and cross-subsidizations and improve transparency to customer agencies. However, without a full fee review it is difficult to understand fully the relative trade-offs in any particular proposal. In addition, revising the funding mechanisms alone would not address the variations in service levels reported by FPS's customer agencies or the overall level of services FPS provides at GSA-controlled facilities. Alternatives discussed in past GAO work and prior legislative proposals—which have not been acted on to date—regarding FPS's current security fee design were (1) modifying the current fully fee-funded structure to better align fees with facility risk levels and (2) funding FPS through a combination of fees and direct appropriations. Modified Fee Structures. Changing the design of the basic fee to reduce cross-subsidizations could address equity concerns and increase transparency and acceptance among customer agencies. Two alternative fee structures discussed were (1) charging customer agencies for basic security activities based on a tiered fee system, where facilities in each tier pay fee rates based on FPS's average service costs for facilities within the respective tiers, and (2) charging customer agencies a two-part basic fee consisting of a flat rate to cover fixed costs and a risk-level-based fee to cover average marginal costs associated with facility security risk level. Both alternatives aim to link more closely each customer's fee rates to average costs associated with their building risk level. We have previously reported that setting fees in this way—that is, at a rate equal to the marginal cost of providing services—maximizes economic efficiency by ensuring resources are allocated to their highest use. However, in part because it is often difficult to measure marginal cost, fee rates are sometimes set based on average costs.
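The following is a minimal sketch contrasting a flat per-square-foot rate with the two alternative structures described above: a tiered rate by facility security level and a two-part fee with a flat fixed-cost component plus a risk-level component. All tiered and two-part rates are hypothetical assumptions chosen only to show the mechanics and the cross-subsidization effect; they are not FPS cost figures.

```python
# Illustrative sketch (hypothetical rates, not FPS figures) contrasting a
# flat basic security fee with a tiered rate and a two-part fee.

FLAT_RATE = 0.66  # current-style flat rate, $ per sq ft (FY 2009 level)

# Assumed average service cost per sq ft by security level (illustrative)
TIERED_RATE = {1: 0.40, 2: 0.55, 3: 0.75, 4: 0.95}

# Two-part fee: flat fixed-cost share plus a risk-level increment (illustrative)
FIXED_COMPONENT = 0.30
RISK_COMPONENT = {1: 0.10, 2: 0.25, 3: 0.45, 4: 0.65}

def fees(square_feet, level):
    flat = square_feet * FLAT_RATE
    tiered = square_feet * TIERED_RATE[level]
    two_part = square_feet * (FIXED_COMPONENT + RISK_COMPONENT[level])
    return flat, tiered, two_part

for level, sqft in [(1, 50_000), (4, 50_000)]:
    flat, tiered, two_part = fees(sqft, level)
    print(f"Level {level}: flat ${flat:,.0f}  tiered ${tiered:,.0f}  "
          f"two-part ${two_part:,.0f}")

# Under the flat rate both facilities pay the same amount, so the lower-risk
# facility effectively subsidizes the higher-risk one; the tiered and
# two-part designs shift charges toward the higher-risk facility.
```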
When the marginal costs of providing services are measurable but are low compared to the fixed costs of the program, setting the fee at marginal cost will lead to collections less than total costs. If a review of FPS's costs comes to this conclusion, a two-part fee, where all agencies would pay a portion of fixed costs, including systemwide capital investments, plus an amount that reflected the average service costs by facility security level, could make sense. The amount by which an agency's security costs would change under this option would depend on the security levels of the buildings in which the agency is located. Agencies that are more frequently located in facilities with higher security levels are likely to see their basic security fee rate increase, while agencies more frequently located in lower security risk level facilities may see their basic security fee decrease, stay the same, or increase at a slower rate. In both alternatives above, revising the basic security fee to reflect the service cost variation among customer agencies would better align the fee with the costs of the services received. This would help address concerns about the fees charged given the disparity in services that agencies receive. However, the administrative cost of identifying user-specific costs may outweigh the benefits. We have previously reported that if a program has relatively few categories of users and the cost of providing the service to those groups differs significantly, then user-specific fees might be both beneficial and feasible. Conversely, if there are numerous different categories of users or there is a small cost variation among them, the efficiency gains of a user-specific fee may be overwhelmed by the added costs of administering a more complicated fee structure. Without a fee review, it is unclear which type of fee structure is most appropriate for FPS. It is important to consider the administrative and collection costs both to the provider and to the customers when designing a fee. From the customer agency perspective, a varied fee rate system may complicate budgeting, space planning, and billing reconciliation. From FPS's perspective, a varied fee system would require detailed analysis of activity costs and incorporating a facility security level component into its billing system. Combination of Fees and Direct Appropriations. Proposals to fund FPS through a combination of fees and direct appropriations could also reduce cross-subsidizations and increase acceptance among customer agencies, but they would not address the disparity in the service levels among facilities that are charged similar fees. Proposals to fund FPS through direct appropriations would fund FPS's basic security activities and the administrative costs of implementing building-specific and SWA countermeasures via a direct appropriation to FPS and charge agencies only for the cost of the actual countermeasures. FPS officials said a funding model that includes a direct appropriation to FPS should provide appropriated funds for all of the agency's fixed costs since it is difficult for FPS to predict agency demand for voluntary services such as additional countermeasures. Because the majority of FPS's costs are salary-related and must be paid whether or not agencies "purchase" voluntary services such as SWAs or building-specific security countermeasures in the amounts for which FPS planned, FPS officials said funding all of FPS's FTEs with appropriations would reduce concerns about revenue adequacy.
Receiving an appropriation for its fixed costs would not eliminate the need for FPS to develop an informed budget request based on estimated agency needs. Although security-related missions, like FPS's, may be less vulnerable to budget cuts, it is important to note that discretionary appropriation decisions are generally made annually and should not be assumed to remain at a constant funding level—especially in the current fiscal environment. Any model that shifts some FPS funding from fees to direct appropriations must be viewed through the lens of the overall federal budget. If Congress desires to keep the total federal investment in facility security at current levels, a direct appropriation to FPS would mean either (1) an increase in the homeland security appropriations subcommittee budget allocation and a corresponding decrease in the allocations for the appropriations subcommittees responsible for FPS's customer agencies or (2) shifting priorities within the homeland security appropriations subcommittee's current budget allocation and providing resources to FPS in lieu of other homeland security activities. Funding FPS with direct appropriations may result in a decline in interagency payment processing costs. FPS officials said they pay ICE's Burlington Finance Center about $3.1 million annually to support FPS's billing process. Officials said the finance center does about 80 percent of the actual processing for FPS transactions. A direct appropriation to FPS would mean fewer transactions between FPS and customer agencies since only facilities with recommended or requested countermeasures would make payments to FPS. Officials said that if it received a direct appropriation, FPS's overall budget could be reduced by the amount it currently spends on the administrative costs associated with collecting fees. Funding all or some of FPS's activities with a direct appropriation may also increase demand for FPS services. That is, when beneficiaries do not pay for the cost of services they may seek more of the service than is economically efficient. If FPS were to receive direct appropriations based on its current costs, the amount would not cover security costs for agencies that currently procure security services externally from FPS if they decide to request them from FPS. Based on our updated work, any analysis meant to inform decisions about the type of funding model that would best meet FPS's needs would be incomplete without considering the types of funding models discussed in this report—specifically, both (1) alternative fee structures and (2) a combination of fees and appropriations. In 2008 we recommended that FPS evaluate whether its current use of a fee-based system or an alternative funding mechanism is the most appropriate manner to fund the agency, and although DHS concurred with the recommendation, FPS has not begun such an analysis. When we asked whether FPS had considered the benefits and challenges of other fee designs, FPS officials said that there were probably other fee designs that could recover all of FPS's costs. However, they said that they find the current fee structure to be simple, straightforward, and efficient, and they are not convinced that they can improve equity among payers within the current structure. FPS officials said that having specific options for them to consider—such as the ones discussed above—would help them complete this type of analysis. FPS and customer agencies identified two key issues that lead to billing and budgeting challenges.
First, FPS lacks points of contact in its customer agencies for budgeting and billing purposes, which leads to difficulties and delays in resolving billing discrepancies. Second, FPS and customer agencies described a lack of timely, reliable information available for the budget formulation process. This makes it difficult for agencies to, for example, timely implement security countermeasures meant to address current and emerging security threats. Although there are no obvious solutions for many of the budget timing disconnects described below, alternative budget account structures could help mitigate these challenges without compromising accountability. FPS communicates with customer agencies regarding its security fees through annual fee rate letters, regional conferences, and Facility Security Committee meetings; nevertheless, FPS reported difficulties determining the correct customer agency points of contact for the fees. Not all customer agencies we spoke with budget and pay for their FPS security fees centrally. Rather, headquarters and regional offices have shared responsibility for managing FPS security services and fees, making it difficult for FPS officials to find the appropriate officials with whom to discuss security charges and billing issues. FPS officials in one region said they did not have a complete list of all appropriate customer agency contacts that budget for FPS security fees or pay FPS security bills, who may work in different agency offices. In a different region FPS officials said they have trouble identifying their target audience and stakeholders at customer agencies. Because officials in security offices and in budget or finance offices have responsibilities regarding FPS services and fees, the officials involved in determining which services the agency purchases from FPS may be different than the officials that budget for FPS security fees or pay FPS security bills. Determining the correct points of contacts can be so confusing that even customer agency officials themselves reported difficulties in getting information about FPS services and fees from their own agencies. The confusion goes both ways, as customer agency officials at times also find it difficult to identify appropriate points of contact in FPS, even though FPS includes a point of contact on its security bills. For example, CDC officials said that communicating with FPS about billing issues is a constant challenge because CDC handles billing in CDC headquarters but FPS determines customer agencies’ security costs and coordinates with customer agency officials at the regional level. FPS has a different point of contact for each of its 11 regions and sometimes multiple contacts within a region. CDC officials said that FPS points of contact vary by facility so they often do not know whom to contact at FPS with billing questions. GSA officials said that the FPS staff with whom they work are not always responsive to problems, which GSA attributed to large workloads. Some customer agency officials are confused about the roles of GSA and FPS regarding security fees. GSA officials said that some customer agencies continue to contact GSA with questions about FPS security bills even though FPS transferred out of GSA in 2003. FPS officials in one region also said that customer agencies confuse the roles of FPS and GSA and they sometimes receive questions from customer agencies on GSA rent charges and services. 
FPS officials in another region also said bills can be confusing to customer agency officials because FPS basic and building-specific security bills are displayed with GSA rent bills on GSA's Rent on the Web system. Complicating matters, FPS officials explained that FPS refers agencies to GSA for questions on square footage data because FPS bills for the basic security fee based on square footage from GSA's STAR inventory system. FPS and GSA officials said their agencies need to better educate customers on the different roles of FPS and GSA in the billing process. Effectively communicating with stakeholders involves sharing relevant analysis and information as well as providing opportunities for stakeholder input; agencies that do not communicate effectively with stakeholders miss opportunities for meaningful feedback that could affect the outcome of changes in fees and program implementation. We found that the quality and quantity of FPS's communication with stakeholders varies by region. Officials in some FPS regions said they typically wait for client agencies to ask questions about the fees rather than taking the initiative to push information out to them. In other regions there is a greater focus on outreach efforts. For example, in 2009 and 2010 FPS's National Capital Region (NCR) invited its security points of contact to an annual security summit to discuss a range of issues, including fees. Although not all customer agency budget and management officials were aware of the summits, those who did attend generally found them helpful. For example, SSA officials who work with FPS security fees said several of their officials attended the NCR security summit last year and found it useful. They said FPS explained its procedures, and attendees were able to ask questions. At FEMA in the NCR, a security official participated in the summit for 2 years and found it to be beneficial. However, FEMA budget officials who manage security fees were not informed of the summit by either FEMA's security officials or FPS. Similarly, although FPS provides an annual letter to customer agency heads and CFOs regarding fee rates for the upcoming fiscal year, in some customer agencies the rates were not communicated to the agency officials who are responsible for budgeting. While customer agencies are responsible for communicating FPS fee rates within their agency, when all the necessary officials do not receive information on FPS security fees, it creates implementation challenges for both FPS and the customer agencies. We have previously reported that agencies providing services can segment their customers into groups and provide targeted communication or services to better meet customer needs. When FPS's communication efforts do not reach all of the customer agency officials working with FPS security fees, important information on rates and procedures is missed, contributing to operational challenges, such as overbilling issues, discussed below. FPS has taken steps to improve communication with customer agencies at the regional level. FPS officials in two regions said FPS has made efforts to educate all FPS employees on the security fee rates and the services they cover, so FPS can address customer agency questions at all levels of the organization and provide accurate information to its customers. They said these efforts have reduced the number of questions on their fees from customer agencies.
In some cases, confusion regarding FPS contacts can lead to significant challenges in resolving billing issues. For example, in 2008 FDA officials said FPS overbilled FDA $2.1 million because FPS billed for the same service in both an SWA and the building-specific charge. FDA officials said it was difficult to locate the appropriate point of contact and they had to communicate with FPS multiple times over 6 months to resolve the issue. In another example, HHS headquarters officials discovered a $100,000 error in their bill for one facility. FPS had billed HHS as if it was the sole tenant in the building because another tenant had vacated. HHS officials said it took several months to resolve the problem because they found it difficult to identify someone at FPS who understood the problem and could issue them a credit. When we spoke with FPS officials about these issues they told us that it can take 3 to 6 months to credit customer agencies when they are overbilled because FPS performs an audit of the account that covers several years and FPS and the customer agency need to agree on the amount to be credited to the agency. FPS officials also said they process refunds once a month. Unresolved billing issues lead to customer agency funds being tied up and not available for other activities. In times of fiscal constraint this can be especially challenging. Unresolved billing issues also lead to wasted customer agency resources in the form of the time spent to resolve the issue and create bad will. FPS has procedures in place to prevent under- or overbilling customer agencies. FPS NCR officials said FPS performs a reconciliation process each month to check for billing errors as well as a monthly post-by-post report card that reports which contract guard posts are paid by building-specific charges and which are tenant-specific charges. FPS does not have a process to calculate security fee rates prior to submitting its budget to OMB and therefore does not have timely information to provide to customer agencies to inform their budget formulation process. As a result, customer agencies’ annual budget submissions to OMB include security funding requests that are not based on accurate security cost estimates. OMB Circular No. A-11 states that where possible agencies should include the full cost of a program and cover all programs and activities in their budget submissions. In the past, FPS has provided estimates of security fee rates to customer agencies approximately 9 months after agency budget requests are submitted. For example, in the fiscal year 2011 budget cycle, agencies submitted their budget requests to OMB in September 2009 and FPS provided fee rates in July 2010 (after the budget had gone to Congress). This is because FPS is on the same budget cycle as its customer agencies. FPS officials said FPS is working to improve its process to notify customers of fee rates and security costs. In his fiscal year 2012 budget the President proposed increasing the basic security fee to $0.74 per square foot. This is the first time FPS has provided its fee rate for the upcoming fiscal year in its congressional justification of estimates. FPS officials said they included the proposed fee rate to allow as much time as possible for agencies to plan for resources for security fees. While FPS did provide more notice about a potential fee rate change than in the past, federal agencies all submit their fiscal year 2012 budget requests to OMB at the same time. 
The proposed fee increase is available to Congress for the appropriations cycle, so Congress does have the opportunity to consider FPS's proposed fee rate increase at the same time that it considers appropriations for FPS's customer agencies. While FPS can indicate a fee increase in its budget documents, it cannot finalize its fee rates for a given fiscal year until DHS's appropriation is enacted. This is because FTEs are the largest driver of FPS's cost and the DHS Appropriations Act specifies the FTE level at which FPS must operate. According to FPS officials, if requirements in the DHS Appropriations Act require more resources than FPS estimated, FPS may need to increase its fee rates midyear. For example, in March 2008—halfway through the fiscal year—FPS increased its basic security fee to $0.62 to fund increased FTE levels in the fiscal year 2008 DHS Appropriations Act. Mandated changes to FPS's FTE levels also challenge FPS's ability to provide accurate fee rate information to its customer agencies in a timely manner. Officials said FPS may have to increase its fee rates in the middle of fiscal year 2011 because the proposed Senate bill for the fiscal year 2011 Homeland Security Appropriation included a requirement for FPS to increase its minimum FTEs to 1,348, or 148 FTEs greater than the current required level on which FPS's budget estimates were based. However, under FPS's final fiscal year 2011 appropriation, FTE levels were set at 1,250, which is 50 more than the level on which FPS's budget estimates were based. Such changes are not unusual. For example, during the 111th Congress (2009-2010) two other bills were introduced that would have required FPS to increase its FTE level. The proposed Federal Protective Service Improvement and Accountability Act of 2010 included a provision to increase FPS's FTEs to 1,350, while the proposed Supporting Employee Competency and Updating Readiness Enhancements (SECURE) for Facilities Act of 2010 included a provision to increase FPS's FTEs by 350 over a 4-year period. Unexpected changes in FPS security fees require customer agencies to make unplanned trade-offs during the fiscal year. Because customer agencies do not have FPS security fee estimates in time for budget formulation, they create their own "rules of thumb," which vary by agency. Officials from one agency we met with said their agency budgets for a 2 to 3 percent increase in security fees, while officials from a different agency said they budget for a 7 percent increase. In the past, customer agency rules of thumb might not have provided enough room to cover fee increases in those fiscal years with large increases in fee rates. Since 2004, the increase in the basic security fee rate has varied from 0 to almost 60 percent (see table 4). We have previously found that changes in FPS's security fees—specifically notifications about rate increases late in the federal budget cycle—have adverse implications for customer agencies; our current work confirms this is still an issue. While fee rate increases are relatively small compared to an agency's overall appropriation, they can significantly affect an agency's security budget. When faced with unanticipated fee increases, customer agencies described unplanned trade-offs they make. FEMA officials said they do not cut back on security services at any of their facilities.
They ask the budget office to allocate more funds to their area; if they are not successful they decrease security funding in other areas, such as employee background investigations or fingerprinting. Customer agencies face challenges in funding recommended building-specific countermeasures; that is, measures that are meant to address current and emerging security threats. FPS officials said that the recommendations in FPS's facility security assessments are made in response to security risks present at the time the assessment is made. These costs, FPS officials told us, can change quickly and unexpectedly depending on external risks in the environment. Given the budget cycle, however, there is an inherent mismatch in timing. The budget formulation process for any given fiscal year begins 2 years prior to the start of that fiscal year. As a result, to respond timely to current threats, customer agencies must reallocate funds to countermeasures for which they did not and could not plan. For example, officials from a facility security committee in Atlanta said they did not implement an FPS recommendation for security bollards around the perimeter of the building because of budget timing issues. This timing issue is not new. In 2009 we reported that the timing of the assessment process may be inconsistent with customer agencies' budget cycles. Similarly, in 2008 we reported on instances in which recommended security countermeasures were not implemented at some of the buildings we visited because facility security committee members were unable to get a funding commitment from their agencies, among other reasons. There is no obvious solution for the federal budget timing disconnects described above, but in our prior work reviewing fee-funded agencies, we have identified various budget account structures that could help mitigate budgeting and timing challenges for FPS and customer agencies without compromising accountability for federal funds. A no-year reimbursable appropriation. If FPS were to receive a no-year reimbursable appropriation account, FPS would receive a direct annual appropriation based on its estimated total collections that FPS would later reimburse with its fee collections. If Congress increased FPS's FTEs in a given fiscal year, thereby increasing costs, FPS would be able to draw on its direct appropriation to cover the resulting cost increase. It could then inform customer agencies of a fee rate increase in time for them to build the additional cost into their budget requests for the next fiscal year. FPS would then reimburse its appropriation account with its future fee collections from its customer agencies. U.S. Customs and Border Protection (CBP) uses a reimbursable account to mitigate funding issues caused by the timing of certain fee collections. To help manage cash flow issues caused by quarterly, rather than more frequent, fee collections, CBP initially uses appropriations to "front" the cost of the agriculture quarantine and immigration inspections and then reimburses its appropriation account from the immigration and agriculture user fees collected throughout the year. An intragovernmental revolving fund account. An intragovernmental revolving fund is an appropriation account authorized to be credited with collections, including both reimbursements and advances, from other federal agencies' accounts to finance a cycle of businesslike operations.
For example, GSA’s real property activities are financed through the FBF—a revolving fund that includes rent federal agencies pay for GSA space. With respect to structural improvements, GSA can provide agencies with the option to delay payments on amortized costs to allow agencies time to build the costs into their budgets by fronting the costs from its FBF. A similar approach could provide FPS with greater ability to assist agencies with obtaining building-specific countermeasures. FPS already enjoys access to its fee collections without fiscal year limitation so a no-year reimbursable account or a revolving fund would not create accountability concerns in that respect. These types of accounts would, however, provide FPS with the ability to “front” a fee rate increase and reduce the pressure of unanticipated fee rate increases on its customers. In addition, the transparency of any fee increase resulting from changes would facilitate congressional oversight both of FPS and of the cost of security at various agencies and buildings. FPS might also benefit from considering ways other agencies have found to provide cost information to customer agencies in a more timely manner: An approved fee-setting methodology. An approved methodology by which to set fees could allow FPS to set its fee rates in advance of receiving requirements in its appropriation and therefore better align with the budget formulation needs of its federal customers. For example, GSA officials told us that having an approved methodology to calculate rent estimates allows GSA to provide them to customer agencies in time to inform budget formulation. FPS officials said that FPS would need new statutory authority to take this approach. Estimates of future security costs. FPS’s customer agencies do not receive timely estimates of future costs, impairing agencies’ ability to budget for those costs. FPS has data that could help FPS’s customer agencies with this issue. For example, if customer agencies received high-level estimates for countermeasure costs—which could be based on known costs associated with recommended building-specific countermeasures—they could better develop budget estimates for unknown future costs. Such information could also help inform congressional debate about budget priorities and trade-offs. For example, the Federal Emergency Management Administration (FEMA) revised its methodology for estimating the cost of disaster response after we reported that (1) when FEMA excludes costs from catastrophic disasters in annual funding estimates it prevents decision makers from receiving a comprehensive view of overall funding claims and trade-offs, and (2) that especially given the tight resource constraints facing our nation, annual budget requests for disaster relief may be improved by including known costs from previous disasters and some costs associated with catastrophic disasters. FPS is responsible for protecting some of the nation’s most critical facilities and the people who work in and access these locations every day. Analyzing and understanding the costs of providing these important security services, including the costs of systemwide capital investments, are important so that FPS, customer agencies, and Congress have the best possible information available to them when designing, reviewing, and overseeing FPS’s fees and operations. 
Regular, timely, and substantive fee reviews are critical for any agency, but especially for agencies—like FPS— that are mostly or solely fee funded in order to ensure that fee collections and operating costs remain aligned. FPS has broad authority to design its security fees, but the current fee structure has consistently resulted in total collection amounts less than agency costs, is not well understood or accepted by customer agencies, and continues to be a topic of congressional interest and inquiry. In 2008 we recommended FPS evaluate whether its use of a fee-based system or an alternative funding mechanism is the most appropriate manner to fund the agency. Although FPS agreed with this recommendation it has not begun such an analysis. Based on our updated work, we believe that such an analysis can benefit from the examination of both (1) alternative fee structures and (2) a combination of fees and appropriations. Considering the various options in this report—a redesigned fee structure and funding FPS through a combination of fees and direct appropriations—can help guide FPS’s analysis and Congress’s consideration of the trade-offs among a variety of funding mechanisms. The success of any fee design depends on complete, reliable, timely information on which to base decisions and on informed trade-offs that support program goals. Whenever the formulas for assigning costs to customer agencies change there will be winners and losers. Whether and how to change FPS’s funding structure—either to develop an alternate fee structure or a model that includes some amount of direct appropriations— is largely a policy decision. However, without a better understanding of the costs of FPS’s services, changes to FPS’s funding model are unlikely to address FPS’s chronic funding gaps or the equity concerns and skepticism of FPS’s stakeholders. Further, our analysis shows that in implementing the fee program on a day-to-day basis, FPS and customer agencies encounter challenges that are handled by budget and billing officials as well as security officials. We have previously recommended that FPS collect and maintain a list of facility designated points of contact for security issues. Unless FPS also creates a complete and accurate list of security fee budget and billing contacts in its customer agencies, FPS and its customers will continue to face budget and billing-related challenges, and the opportunity costs associated with delays in returning appropriated funds to customer agencies will persist. Ideally, security decisions at federal facilities are based on real-time information about current and emerging threats. However, federal agencies budget for planned needs—including security needs—about 2 years before the start of each fiscal year. While there is no easy solution for the mismatch in timing between FPS security costs and the federal budget formulation process, options such as different account structures and improved fee estimating procedures could help mitigate these challenges without compromising accountability over federal funds. 
The Secretary of Homeland Security should direct the Director of the Federal Protective Service to take the following six actions: conduct regular reviews of FPS's security fees and use this information to inform its fee setting; include systemwide capital investments when estimating costs and include them when setting basic security fee rates; make information on the estimated costs of key activities as well as the basis for these cost estimates readily available to affected parties to improve the transparency and credibility—and hence the acceptance by stakeholders—of the process for setting and using the fees; in implementing our previous recommendation to evaluate the current fee structure and determine a method for incorporating facility risk, assess and report to Congress on the current and alternative fee structures, to include the options and trade-offs discussed in this report, and, if appropriate, options to fund FPS through a combination of fees and direct appropriations, to include the options and trade-offs discussed in this report; evaluate and report to Congress on options to mitigate challenges agencies face in budgeting for FPS security costs, such as an alternative account structure for FPS to increase flexibility while retaining or improving accountability and transparency, or an approved process for estimating fee rates; and work with customer agencies to collect and maintain an accurate list of points of contact of customer agency officials responsible for budget and billing activities as well as facility designated points of contact as we previously recommended. We provided a draft of this report to the Secretary of Homeland Security and the Administrator of the General Services Administration for review. The General Services Administration had no comments on the report. DHS provided written comments that are reprinted in appendix II. We also provided portions of the report to the four FPS customer agencies with which we met. In its written comments, the Director of DHS's GAO/Office of Inspector General Liaison Office concurred with our recommendations and provided information about steps DHS is taking to address each recommendation. In responding to our recommendation that the Federal Protective Service report to Congress on the current and alternative fee structures, to include the options and trade-offs discussed in this report, DHS said that it has reviewed the current and alternative methods to calculate basic security fees in addition to reviewing alternative funding structures and will use that analysis as a baseline in developing its alternative analysis. Throughout the course of our audit work we asked FPS to provide us with any reviews of current and alternative funding structures it had conducted; FPS did not provide any evidence of having conducted this type of analysis. As noted in our report, FPS officials told us that FPS has not begun such an analysis. When we asked whether FPS had considered the benefits and challenges of other fee designs, FPS officials said that having specific options for them to consider—such as the ones discussed in this report—would help them with this type of analysis. We are sending copies of this report to the Secretary of the Department of Homeland Security, the Directors of the Federal Protective Service and the Office of Management and Budget, and the Administrator of the General Services Administration.
We are also sending copies to appropriate congressional committees, and to the Chairmen and Ranking Members of other Senate and House committees and subcommittees that have appropriation, authorization, and oversight responsibilities for FPS. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions or wish to discuss the material in this report further, please contact me at (202) 512-6806 or irvings@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff making key contributions to this report are listed in appendix III. The objectives of this report were to (1) analyze the Federal Protective Service’s (FPS) current fee design and proposed alternatives, and (2) examine how FPS’s security fees challenge FPS and customer agency budget formulation and execution. To meet these objectives, we reviewed legislation and guidance, agency documents, and literature on user fee design and implementation characteristics. We also interviewed officials responsible for managing user fees at FPS, General Services Administration (GSA), and selected customer agencies at their headquarters and in several regional locations. We selected four of FPS’s customer agencies and four regional locations to illustrate how FPS’s security charges benefit and challenge customer agencies. We selected customer agencies with a large representation in GSA’s facility inventory (measured by total rental square footage and total annual rent) and based on prior GAO work on FPS. From these agencies we selected: the Department of Health and Human Services, Internal Revenue Service, Social Security Administration, and the Department of Homeland Security. We selected the regional locations based on (1) a range of region size (number of FPS-protected buildings), (2) geographic diversity, and (3) stakeholder input on successes and challenges faced by regional management. As a result, we selected the following FPS regions for site visits: National Capital Region (Washington, D.C.), Southeast Region (Atlanta, Ga.), Rocky Mountain Region (Denver, Colo.), and Northwest/Arctic Region (Federal Way/Seattle, Wash.). We interviewed FPS, GSA, and customer agency officials who are familiar with FPS security fees at both headquarters and in our selected regions. We conducted this performance audit from May 2009 through April 2011, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual above, Jackie Nowicki, Assistant Director; Chelsa Gurkin; Lauren Gilbertson; Barbara Lancaster; Felicia Lopez; Julie Matta; and Jenny Shinn made key contributions to this report. Donna Miller provided the report’s graphics. Federal User Fees: Additional Analyses and Timely Reviews Could Improve Immigration and Naturalization User Fee Design and USCIS Operations. GAO-09-180. Washington, D.C.: January 23, 2009. Federal User Fees: A Design Guide. GAO-08-386SP. Washington, D.C.: May 28, 2008. Federal User Fees: Substantive Reviews Needed to Align Port-Related Fees with the Programs They Support. GAO-08-321. Washington, D.C.: February 22, 2008. 
Federal User Fees: Key Aspects of International Air Passenger Inspection Fees Should Be Addressed Regardless of Whether Fees Are Consolidated. GAO-07-1131. Washington, D.C.: September 24, 2007. Budget Issues: Electronic Processing of Non-IRS Collections Has Increased but Better Understanding of Cost Structure Is Needed. GAO-10-11. Washington, D.C.: November 20, 2009. Homeland Security: Addressing Weaknesses with Facility Security Committees Would Enhance Protection of Federal Facilities. GAO-10-901. Washington, D.C.: August 5, 2010. Homeland Security: Preliminary Observations on the Federal Protective Service’s Workforce Analysis and Planning Efforts. GAO-10-802R. Washington, D.C.: June 14, 2010. Homeland Security: Federal Protective Service’s Contract Guard Program Requires More Oversight and Reassessment of Use of Contract Guards. GAO-10-341. Washington, D.C.: April 13, 2010. Homeland Security: Greater Attention to Key Practices Would Improve the Federal Protective Service’s Approach to Facility Protection. GAO-10-142. Washington, D.C.: October 23, 2009. Homeland Security: Federal Protective Service Should Improve Human Capital Planning and Better Communicate with Tenants. GAO-09-749. Washington, D.C.: July 30, 2009. Homeland Security: The Federal Protective Service Faces Several Challenges That Hamper Its Ability to Protect Federal Facilities. GAO-08-683. Washington, D.C. June 11, 2008. Homeland Security: Transformation Strategy Needed to Address Challenges Facing the Federal Protective Service. GAO-04-537. Washington, D.C.: July 14, 2004. | The Federal Protective Service (FPS) is a fee-funded agency in the Department of Homeland Security (DHS) responsible for providing physical security to over 9,000 federal facilities. In 2003 FPS transferred to DHS from the General Services Administration and for the first time was to fully recover its costs. GAO recently reported that stakeholders were concerned about FPS's ability to determine security costs, and the strategies used to address funding challenges had adverse effects on FPS. In this context, Congress directed GAO to evaluate FPS's resource levels. This report (1) analyzes FPS's fee design and proposed alternatives, and (2) examines how FPS's security fees challenge FPS and customer agency budget formulation and execution. GAO reviewed legislation and agency documentation and interviewed FPS and customer agency officials in headquarters and four FPS regions. FPS increased its basic security fee four times in 6 years to try to cover costs (an increase of over 100 percent). FPS has not reviewed its fees to develop an informed, deliberate fee design. GAO has found that timely, substantive fee reviews are especially critical for fee-funded agencies to ensure that fee collections and operating costs remain aligned. FPS is legally required to charge fees that cover its total costs, but it is not required to align specific fees with specific activities. Nevertheless, in its pricing documents FPS describes an alignment between specific fees and specific activities that does not exist. FPS charges a basic security fee based on facility square footage. In addition, FPS charges facilities that have contractor-provided countermeasures, such as guards, the cost of the countermeasure plus an administrative fee that is a percentage of the countermeasure cost. Federal facilities vary in how much they cost to protect, but FPS does not know to what extent some facilities currently subsidize others. 
This contributes to expectation gaps and unknown cross-subsidizations among payers. FPS officials said that basic security costs are meant to be "shared evenly" (i.e., based on square footage) among all payers while administrative fees for FPS-recommended or facility-requested countermeasures are meant to both (1) reflect the increased risk inherent to those facilities requiring or requesting additional countermeasures and (2) subsidize the aggregate cost of basic security services. Charging beneficiaries more or less than actual costs may help achieve policy goals, but FPS lacks data to determine whether this occurs as intended. Modifying the current fee structure or funding FPS through a combination of fees and direct appropriations may address equity and cross-subsidization issues and improve transparency to customers, but without detailed activity cost information and a full fee review the relative trade-offs in any particular proposal are unclear. Further, revising the fee structure alone will not address the variations in service levels reported by FPS's customer agencies or the overall level of services FPS is able to provide. The design and implementation of FPS's fees affect agencies' and FPS's ability to budget for and timely implement security measures in multiple ways. First, FPS lacks a method to propose security fee rates prior to submitting its budget request and cannot finalize its rates each year until it receives congressional instructions about its staffing levels in its appropriation act. As a result, agencies annually request security funding without accurate security cost estimates. Second, FPS makes security recommendations to customer agencies based on current threats, but agencies budget for security costs in advance and therefore must reallocate funds to pay for countermeasures for which they had not planned. Although there are no obvious solutions for these and other budget timing disconnects, alternative budget account structures like a reimbursable account or a revolving fund could help mitigate budgeting and timing challenges for FPS and customer agencies without compromising accountability for federal funds. GAO recommends that the Secretary of Homeland Security direct the Director of FPS to, among other things, conduct and make available regular fee reviews to improve its fee design, include capital investment costs in its rates, and evaluate its current and alternative funding and budget account structures to mitigate budget timing and other issues. DHS concurred with GAO's recommendations.
From 2005 to 2014, the civilian workforce (excluding the U.S. Postal Service) grew from 1.88 million to 2.07 million, an increase of 10.3 percent, or 192,951 individuals. Most of this growth (76 percent) occurred between 2009 and 2014. The number of permanent career executive branch employees grew by 221,672, from about 1.7 million in 2005 to 1.92 million in 2014 (an increase of 13 percent). Of the 24 Chief Financial Officers (CFO) Act agencies, 13 had a higher percentage of permanent career employees in 2014 than they did in 2005, and 11 had a lower percentage (see figure 1). The retirement rate of federal civilian employees rose from 3.2 percent in 2004 to a high of 3.6 percent in 2007 when, according to data from the National Bureau of Economic Research, the recession began. During the recession, the total attrition rate dropped to a low of 2.5 percent in 2009 before rebounding to pre-recession levels in 2011 and 2012. After the recession began at the end of 2007, retirement rates declined to 3.3 percent in 2008, 2.5 percent in 2009, and 2.7 percent in 2010, before increasing again to 3.5 percent in 2014. With respect to retirement eligibility, of the 1.92 million permanent career employees on board in 2014, approximately 270,000 (14 percent) were eligible to retire. By September 2019, approximately 590,000 (31 percent) of on-board staff will be eligible to retire. Not all agencies will be equally affected. Also by September 2019, 18 of the 24 CFO Act agencies will have a higher percentage of staff eligible to retire than the overall average of 31 percent. About 23 percent of Department of Homeland Security staff on board as of September 2014 will be eligible to retire in 2019, while more than 43 percent will be eligible to retire at both the Department of Housing and Urban Development and the Small Business Administration (see figure 2). Certain occupations—such as air traffic controllers, customs and border protection agents, and those involved in implementing government programs—will also have particularly high retirement-eligibility rates by 2019. About 63 percent of career executives may be eligible to retire by 2016. As we reported in 2014, the General Schedule (GS) classification system is a mechanism for organizing federal white-collar work, notably for the purpose of determining pay, based on a position's duties, responsibilities, and difficulty, among other things. The GS system, which is administered by OPM and includes a standardized set of 420 occupations, grouped into 23 occupational families and 15 statutorily defined grade levels, influences other human capital practices such as training, since training opportunities link position competencies with the employee's performance. In 2013, the GS system covered about 80 percent of the civilian white-collar workforce (about 1.6 million employees). The GS system was designed to uphold the key merit system principle of equal pay for work of substantially equal value and other important goals. However, some OPM reports and several public policy groups have questioned the GS system's ability to meet agencies' needs for flexible talent management tools that enable them to align employees with mission requirements. For example, in 2002, OPM outlined the advantages and disadvantages of the GS classification system and concluded that agencies should be allowed to tailor their pay practices to better recruit, manage, and retain employees to accomplish their mission.
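As a quick arithmetic check, the growth and retirement-eligibility percentages cited at the start of this section follow from the reported headcounts. The sketch below uses the rounded figures from the text, so the results are approximate.

```python
# Quick check of the workforce figures cited above (rounded inputs,
# so results are approximate).

growth = 192_951 / 1_880_000            # ~0.103 -> about 10.3 percent growth, 2005-2014
eligible_2014 = 270_000 / 1_920_000     # ~0.14  -> about 14 percent of on-board staff in 2014
eligible_2019 = 590_000 / 1_920_000     # ~0.31  -> about 31 percent by September 2019

print(f"2005-2014 growth:            {growth:.1%}")
print(f"Retirement-eligible in 2014: {eligible_2014:.1%}")
print(f"Retirement-eligible by 2019: {eligible_2019:.1%}")
```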
In 2014, the Partnership for Public Service reported that by treating all occupations equally and linking them to the current pay scales, the GS system is unable to distinguish between meaningful differences in complexity and skill across occupations. Also, as federal agencies have taken on additional roles and responsibilities, their missions have become increasingly complex, and their employees need to possess a range of expertise and skills that may not be adequately captured by the GS system. We reported in July 2014 on the attributes of a modern, effective classification system and the extent to which the current GS system balances those attributes. Our analysis of subject matter specialists’ comments, related literature, and interviews with OPM officials identified a number of important characteristics for a modern, effective classification system, which we consolidated into eight key attributes (see table 1). In 2014 we found that, in concept, the current GS classification system’s design incorporates several key attributes, including internal and external equity, transparency, simplicity, and rank in position. However, as OPM implemented the system, the attributes of transparency, internal equity, simplicity, flexibility, and adaptability were reduced. This occurred, in part, because some attributes are at odds with one another, so fully achieving one attribute can come at the expense of another. Thus, OPM, working with its stakeholders, is challenged to determine how best to optimize each attribute. We also reported that the GS system’s standardized set of 420 occupations incorporates several key attributes, but falls short in implementation. For example, the occupational standard for an information technology specialist clearly describes the routine duties, tasks, and experience required for the position. This kind of information is published for the 420 occupations so all agencies are using the same, consistent standards when classifying positions—embodying the attributes of transparency and internal equity. However, in implementation, having numerous, narrowly defined occupational standards inhibits the system’s ability to optimize these attributes. Specifically, classifying occupations and developing position descriptions in the GS system requires officials to maintain an understanding of the individual position and the nuances between similar occupations. We concluded that without this understanding, the transparency and internal equity of the system may be inhibited, as agency officials may not be classifying positions consistently, comparable employees may not be treated equitably, and the system may seem unpredictable. We believe that, going forward, these eight attributes of a more modern, effective classification system can help provide criteria for policymakers and other stakeholders to use in determining whether refinements to the current GS system or wholesale reforms are needed. In our July 2014 report, we recommended that OPM, working through the Chief Human Capital Officer (CHCO) Council and in conjunction with key stakeholders such as the Office of Management and Budget, unions, and others, examine ways to make the GS system’s design and implementation more consistent with the attributes of a modern, effective classification system. OPM partially concurred with our recommendation to work with key stakeholders to use prior studies and lessons learned to examine ways to make the GS system more consistent with the attributes of a modern, effective classification system.
OPM agreed that reform is needed, but it also noted several efforts to assist agencies with classification issues, including its interagency classification policy forum and partnering with agencies to address challenges related to specific occupational areas. While these examples of assisting agencies to better implement the GS system on a case-by-case basis are helpful, they do not fully address the fundamental challenges facing the GS system, which we and others have said is not meeting the needs of federal agencies. In 2014, we also reported that OPM is responsible for establishing new—and revising existing—occupational standards after consulting with agencies. From 2003 to 2014, OPM revised almost 20 percent of the occupational standards and established 14 new ones. However, OPM had not published a review or update of 124 occupations since 1990. OPM officials said they first review occupations identified in presidential memorandums as needing review; however, they do not systematically track and prioritize the remaining occupational standards for review. Therefore, we concluded that OPM had limited assurance that it is updating the highest priority occupations. OPM is required by law to oversee agencies’ implementation of the GS system. However, OPM officials said OPM has not reviewed any agency’s classification program since the 1980s because OPM leadership at the time concluded that the reviews were ineffective and time consuming. As a result, we also concluded that OPM has limited assurance that agencies are correctly classifying positions according to standards. In 2014, we determined that, going forward, OPM could improve its management and oversight of the GS system and, like all agencies, must consider cost-effective ways to fulfill its responsibilities in an era of constrained resources. Using a more strategic approach to track and prioritize reviews of occupational standards—one that perhaps better reflects evolving occupations—could help OPM better meet agencies’ needs and the changing nature of government work. We therefore recommended that OPM develop a strategy to systematically track and prioritize updates to occupational standards. However, OPM did not concur with our recommendation and noted that occupational standards are updated in response to a systematic, prioritized process informed by working with agencies and other stakeholders and analysis of occupational trends. OPM officials were unable to provide us with the documentation of such efforts. As we noted in our 2014 report, OPM had not published a review or update of roughly 30 percent of the total number of occupations on the GS system since 1990. Further, OPM officials could not provide the near- or long-term prioritization of occupations scheduled for review. As a result, we concluded that OPM cannot demonstrate whether it is keeping pace with agencies’ needs nor does it have reasonable assurance that it is fulfilling its responsibilities to establish new or revise existing occupational standards based on the highest priorities. We continue to believe that OPM should take action to fully address our recommendation. We also recommended in 2014 that OPM develop a strategy that would enable it to more effectively and routinely monitor agencies’ implementation of classification standards. OPM partially concurred with our recommendation and stated that it will continue to leverage the classification appeals program to provide interpretative guidance to agencies to assist them in classifying positions.
OPM also stated it will direct consistency reviews as appropriate; however, as we noted in the report, OPM does not review agencies’ internal oversight efforts. We continue to believe that OPM should develop a strategy to fully address the recommendation, and we will continue to monitor OPM’s efforts in that regard. Our past work has shown that mission-critical skills gaps in such occupations as cybersecurity and acquisition pose a high risk to the nation. Whether these gaps are within specific federal agencies or across the federal government, they impede federal agencies from cost-effectively serving the public and achieving results. To address complex challenges such as disaster response, national and homeland security, and rapidly evolving technology and privacy security issues, the federal government requires a high-quality federal workforce able to work seamlessly with other agencies and levels of government, and across sectors. However, efforts are threatened by trends that include current budget and long-term fiscal pressures, declining levels of federal employee satisfaction, the changing nature of federal work, and a potential wave of employee retirements that could produce gaps in leadership and institutional knowledge. In our 2011 High Risk report, we stated that OPM, agencies, and the CHCO Council need to address critical skills gaps that cut across several agencies. As we reported earlier this year, OPM and agencies have taken promising steps, but additional efforts are needed to coordinate and sustain their efforts. Additionally, agencies and OPM need to make better use of workforce analytics, which can be used to predict newly emerging skills gaps. An important government-wide effort we identified in this area was the CHCO Council’s Working Group (Working Group). The Working Group has identified skills gaps in six government-wide, mission-critical occupations: cybersecurity specialist, auditor, human resources specialist, contract specialist, economist, and the science, technology, engineering, and mathematics (STEM) professions. Although this effort was an important step forward, our 2015 work identified skills gaps in nearly two dozen occupations with significant programmatic impact. We also determined that the Working Group did not develop a more comprehensive list because of various methodological shortcomings. Going forward, we concluded that OPM and the CHCO Council will need to use lessons learned to inform a new round of work expected this year. Specifically, the Working Group’s experience underscored the importance of (1) using a robust, data-driven approach to identify potential mission-critical occupations early in the process; (2) prioritizing occupations using criteria that consider programmatic impact; and (3) consulting with subject matter experts and other stakeholders prior to identifying mission-critical occupations. Our January 2015 report also noted that, to make further progress on this issue, the federal government needs to build a predictive capacity for identifying emerging mission-critical skills gaps. Realizing this, OPM has established an interagency working group known as the Federal Agency Skills Team (FAST), which is composed of agency officials with workforce planning and data analysis skills. OPM has tasked the group with implementing a standard and repeatable methodology for identifying and addressing government-wide skills gaps, as well as mission-critical competencies, over a 4-year cycle.
OPM officials said that, in its first year, FAST intends to meet regularly until it identifies a new set of government-wide skills gaps. OPM officials expect this to occur by June 2015. Because we identified a number of shortcomings in the implementation of FAST, our January 2015 report recommended that the Director of OPM, in conjunction with the CHCO Council, (1) assist FAST in developing goals for closing skills gaps with targets that are both clear and measurable; (2) work with FAST to design outcome-oriented performance metrics that align with overall targets for closing skills gaps and link to the activities for addressing skills gaps; and (3) incorporate greater input from subject matter experts, as planned. OPM concurred with these recommendations and has reported that it will implement all of these actions. However, not all actions will be implemented through FAST; some will instead rely on subject matter experts from across the federal workforce. In the same report, we recommended that the Director of OPM work with agency CHCOs to bolster the ability of agencies to assess workforce competencies by sharing competency surveys, lessons learned, and other tools and resources. These actions will help ensure that OPM builds the predictive capacity to identify emerging skills gaps across the government—including the ability to collect and use reliable information on the competencies of the federal workforce for government-wide workforce analysis. OPM also agreed with this recommendation. Finally, in January 2015 we also reported on OPM’s efforts to assist in addressing skills gaps at the agency level. OPM created HRstat, a process of holding regularly scheduled, data-driven review meetings led by an agency’s CHCO to review performance metrics for driving progress on the agency’s human capital management priorities and goals, such as closing mission-critical skills gaps. OPM launched HRstat as a 3-year pilot program in May 2012, with an initial group of eight agencies. However, our work determined that OPM should take a greater leadership role in helping agencies include a core set of metrics in their HRstat reviews so that OPM and agency leaders can have a clear view of progress made in closing skills gaps. While it is important for agencies to have ownership over their HRstat reviews, OPM should also maximize its opportunity to use HRstat to gain greater visibility over the federal workforce. Therefore, in our January 2015 report we recommended that the Director of OPM (1) work with the CHCO Council to develop a core set of metrics that all agencies should use as part of their HRstat data-driven reviews; and (2) coordinate with FAST personnel and explore the feasibility of collecting information needed by FAST as part of agencies’ HRstat reviews. OPM agreed with our recommendation to develop a core set of metrics and plans to convene agency officials responsible for conducting HRstat reviews within their agencies, and have them identify a useful set of core metrics. OPM expects to complete this by the end of 2015. Regarding coordination of the efforts of FAST and agencies’ HRstat reviews, OPM stated that integrating these efforts would not be appropriate because of differing data requirements and goals of the two processes. We continue to believe that OPM should explore coordinating these efforts to gain greater visibility over the federal workforce and to monitor progress toward closing skills gaps.
Efforts to close mission-critical skills gaps are often couched in discussions about interagency initiatives and working groups, as well as technical terms, such as staffing numbers, competencies, and metrics. Yet, the ultimate goal is a higher-performing, cost-effective government. With a continual focus on implementing the recommendations we have made in these areas, we believe that OPM, the CHCO Council, and agencies should begin to make progress on addressing current and emerging skills gaps. In May 2014, we reported on strategies for managing the federal workforce and planning for future needs in an era of constrained resources. The strategies we identified included the following:

Strengthening collaboration to address a fragmented human capital community. Our analysis found that the federal human capital community is highly fragmented, with multiple actors inside government informing and executing personnel policies and initiatives in ways that are not always aligned with broader, government-wide human capital efforts. The CHCO Council was established to improve coordination across federal agencies on personnel issues, but according to CHCOs we spoke to, the council is not carrying out this responsibility as well as it could. This challenge manifests itself in two ways. First, across organizations, many actors are making human capital decisions in an uncoordinated manner. Second, within agencies, CHCOs and the human capital staff are excluded from key agency decisions.

Using enterprise solutions to address shared challenges. Our analysis found that agencies have many common human capital challenges, but they tend to address these issues independently without looking to enterprise (i.e., government-wide) solutions that could resolve them more effectively. Across government, there are examples of agencies and OPM initiating enterprise solutions to address crosscutting issues, including the consolidation of federal payroll systems into shared-services centers. CHCOs we spoke to highlighted human resource information technology and strategic workforce planning as two areas that are ripe for government-wide collaboration.

Creating more agile talent management to address inflexibilities in the current system. Our analysis found that talent management tools lack two key ingredients for developing an agile workforce, namely the ability to (1) identify the skills available in agencies’ existing workforces, and (2) move people with specific skills to address emerging, temporary, or permanent needs within and across agencies.

In our May 2014 report, we stated that the CHCOs said OPM needs to do more to raise awareness and assess the utility of the tools and guidance it provides to agencies to address key human capital challenges. The CHCOs said they were either unfamiliar with OPM’s tools and guidance, or the tools and guidance fell short of their agency’s needs. OPM officials said they had not evaluated the tools and guidance they provide to the agencies. As a result, a key resource for helping agencies improve the capacity of their personnel offices is likely being underutilized.
Therefore, we recommended that OPM, in conjunction with the CHCO Council, (1) strengthen coordination and leadership on government-wide human capital issues, (2) explore expanded use of enterprise solutions to more efficiently and effectively address shared challenges, (3) review the extent to which new capabilities are needed to promote agile talent management, and (4) evaluate the communication strategy for and effectiveness of tools, guidance, or leading practices OPM or agencies provide for addressing human capital challenges. OPM and the CHCO Council concurred with our recommendations. Managing employee performance has been a long-standing government-wide issue and the subject of numerous reforms since the beginning of the modern civil service. Without effective performance management, agencies risk losing (or failing to utilize) the skills of top talent. They also may miss the opportunity to observe and correct poor performance. Our past work has shown that a long-standing challenge for federal agencies has been developing credible and effective performance management systems that can serve as a strategic tool to drive internal change and achieve results. More than a decade ago, we reported that day-to-day performance management activities benefit from performance management systems that, among other things, (1) create a clear “line of sight” between individual performance and organizational success; (2) provide adequate training on the performance management system; (3) use core competencies to reinforce organizational objectives; (4) address performance regularly; and (5) contain transparent processes that help agencies address performance “upstream” in the process within a merit-based system that contains appropriate safeguards. Implementing such a system requires supervisors to communicate clear performance standards and expectations, to provide regular feedback, and to document instances of poor performance. Managers’ ability to deal with poor performers is also a concern of federal employees. OPM’s Federal Employee Viewpoint Survey (FEVS) data from 2011 to 2014 show that around 30 percent of respondents provided positive responses to whether managers took steps to deal with poor performers. In 2014, over 40 percent of respondents disagreed that managers consistently take steps to deal with poor performers. Almost 30 percent neither agreed nor disagreed. In general, agencies have three means to address employees’ poor performance: (1) day-to-day performance management activities (which should be provided to all employees, regardless of their performance levels), (2) dismissal during probationary periods, and (3) use of formal procedures. Agencies’ choices will depend on the circumstances at hand. Day-to-day performance management activities such as providing regular performance feedback to employees can produce more desirable outcomes for agencies and employees than dismissal options, which are a last resort. As we reported in February 2015, supervisors do not always have the skills to identify, communicate, and help address employee performance issues. Given the critical role that supervisors play in performance management, it is important for agencies to identify, promote, and continue to develop effective supervisors. Probationary periods for new employees provide supervisors with an opportunity to evaluate an individual’s performance to determine if an appointment to the civil service should become final.
However, CHCOs we interviewed told us supervisors often do not use this time to make decisions about an employee’s performance because they may not know that the probationary period is ending or they have not had time to observe performance in all critical areas. We agree with OPM that notifying supervisors that a probationary period is coming to an end is an agency’s responsibility. However, we maintain that more could be done to educate agencies on the benefits of using automated notifications to notify supervisors that an individual’s probationary period is ending and that the supervisor needs to make an affirmative decision or otherwise take appropriate action. OPM also needs to determine whether occupations exist in which—because of the nature of work and complexity—the probationary period should extend beyond 1 year and, if so, take appropriate actions, which may include developing legislative proposals for congressional consideration. OPM agreed to consult with stakeholders to determine whether longer probationary periods are needed for certain complex positions. In our February 2015 report, we noted that OPM provides guidance, tools, and training to help agencies attain human capital management goals that meet its strategic goal of enhancing the integrity of the federal workforce. In addition to its regulations, OPM makes a range of different tools and guidance available to help agencies address poor performance through multiple formats, including its website, webinars, webcasts, in-person training, guidebooks, and one-on-one assistance and consultation with agencies, according to OPM officials. We identified in our report promising practices that some agencies employ to more effectively ensure that they have a well-qualified cadre of supervisors capable of effectively addressing poor performance. The practices include: extending the employee’s supervisory probationary period beyond 1 year to include at least one full employee appraisal cycle; providing temporary duty opportunities outside the agency or rotational assignments to supervisory candidates prior to promotion, where the candidate can develop and demonstrate supervisory competencies; and using a dual career ladder structure as a way to advance employees who may have particular technical skills or education but who are not interested in or inclined to pursue a management or supervisory track. We recommended that OPM determine if these practices should be more widely used government-wide. OPM partially concurred with our recommendation, noting that agencies already have authority to take these actions. We acknowledged OPM’s point, but maintain that OPM can still take a leadership role and encourage agencies to take these steps. Also in our February 2015 report, we found that OPM, in conjunction with the CHCO Council and other key stakeholders, needs to assess the adequacy of leadership training that agencies provide to supervisors to help ensure supervisors obtain the skills needed to effectively conduct performance management responsibilities. We recommended that OPM assess the adequacy of leadership training and OPM concurred. In 2012, OPM facilitated development of an SES performance appraisal system with a more uniform framework to communicate expectations and evaluate the performance of executive branch agency SES members, the government’s cadre of senior leaders. The system is expected to promote consistency, clarity, and transferability of SES performance standards and ratings across agencies.
Career SES employees receive a base salary and benefits, but pay increases—as well as performance awards—are to be performance driven, based on annual ratings of executives’ performance following reviews within their agencies. To obtain SES appraisal system certification, which allows agencies access to higher levels of pay, agencies are required to make meaningful distinctions based on the relative performance of their executives as measured through performance and pay criteria. OPM stressed that a major improvement of the system included dealing with the wide disparity in distribution of ratings by agency through the provision of clear, descriptive performance standards and rating score ranges that establish mid-level ratings as the norm and top-level ratings as truly exceptional. In our January 2015 report, we found that more than 85 percent of career Chief Financial Officers Act agency SES were rated in the top two of five categories for fiscal years 2010 through 2013, and career SES received approximately $42 million in awards for fiscal year 2013. In a closer examination of five departments (Departments of Defense, Energy, Health and Human Services, Justice, and Treasury) for fiscal year 2013, we found that, similar to the government-wide results, these five departments rated SES primarily in the top two categories. In addition, four of five departments awarded the same or higher performance awards to some SES with lower ratings. Effective performance management systems recognize that merit-based pay increases should make meaningful distinctions in relative performance. This principle is central to the SES performance management system: under the law, to be certified and thereby able to access the higher levels of pay, the appraisal system must make meaningful distinctions based on relative performance. OPM certification guidelines state that the SES modal rating—the rating level assigned most frequently among the actual ratings—should be below “outstanding” and that multiple rating levels should be used. However, OPM’s guidelines also state that if an agency’s modal rating level is “outstanding,” the appraisal system can still be certified if accompanied with a full, acceptable justification. Nonetheless, the continued concentration of senior executives at the top two rating levels indicates that meaningful distinctions in SES performance may not be made across government. OPM plans to convene a cross-agency working group in 2015 to revisit the SES certification process. In our January 2015 report, we recommended that the Director of OPM consider various refinements to better ensure the SES performance appraisal system certification guidelines promote making meaningful distinctions in performance without using a forced distribution. Options could include not certifying appraisal systems where the modal rating is “outstanding” or increasing transparency in cases where the modal rating is “outstanding.” OPM disagreed with the recommendation, stating that, among other things, it could result in forced distributions in ratings. We maintain that additional action should be considered to ensure equity in ratings and performance awards across departments. A growing body of research on both private- and public-sector organizations has found that increased levels of engagement—generally defined as the sense of purpose and commitment employees feel towards their employer and its mission—can lead to better organizational performance.
Engaged employees are more than simply satisfied with their jobs. Rather, they take pride in their work, are passionate about what they do, and are committed to the organization, the mission, and their job. They are also more likely to put forth extra effort to get the job done. Put another way, if a talented workforce is the engine of productivity and mission accomplishment, then a workplace that fosters high levels of employee engagement helps fuel that engine. Preliminary observations from our ongoing work have found that government-wide levels of employee engagement have recently declined 4 percentage points, from an estimated 67 percent in 2011 to an estimated 63 percent in 2014, as measured by the OPM FEVS and a score derived by OPM from FEVS—the Employee Engagement Index (EEI). However, our ongoing work also indicates that the recent government-wide average decline in the EEI masks the fact that the majority of federal agencies either sustained or increased employee engagement levels during the same period. The decline is the result of several large agencies bringing down the government-wide average. Our preliminary work indicates that 13 of 47 agencies saw a statistically significant decline in their EEIs from 2013 to 2014. While this is only 28 percent of agencies, nearly 69 percent of federal employees are at one of those agencies, including the Departments of Defense, Homeland Security, and Veterans Affairs. Meanwhile, the majority of agencies sustained or improved engagement. Between 2013 and 2014, of the 47 agencies included in our analysis of the EEI, 3 increased their scores, 31 held steady, and 13 declined, as shown in figure 3. Even one agency with a downward-trending engagement score is not to be taken lightly. There is room for improvement at all federal agencies. Yet, the large number of agencies that sustained or increased their levels of employee engagement during challenging times suggests that agencies can influence employee engagement levels in the face of difficult external circumstances. For example, the Federal Trade Commission maintained a consistent engagement index score of an estimated 75 percent—well above the government-wide average—throughout the period of general decline. In conclusion, strategic human capital management must be the centerpiece of any serious effort to ensure federal agencies operate as high-performing organizations. A high-quality federal workforce is especially critical now given the complex and cross-cutting issues facing the nation. Through a variety of initiatives, Congress, OPM, and individual agencies have strengthened the government’s human capital efforts since we first identified strategic human capital management as a high-risk area in 2001. Still, while many actions have been taken over the last 13 years, the job is far from over. Indeed, the focus areas discussed today are not an exhaustive list of challenges facing federal agencies and are long-standing in nature. Greater progress will require continued collaborative efforts between OPM, the CHCO Council, and individual agencies, as well as the continued attention of top-level leadership. Progress will also require effective planning, responsive implementation, robust measurement and evaluation, and continued congressional oversight to hold agencies accountable for results.
In short, while the core human capital processes and functions—such as workforce planning and talent management—may sound somewhat bureaucratic and transactional, our prior work has consistently shown the direct link between effective strategic human capital management and successful organizational performance. At the end of the day, strategic human capital management is about mission accomplishment, accountability, and responsive, cost-effective government. Chairman Lankford, Ranking Member Heitkamp, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions you may have at this time. For further information regarding this statement, please contact Yvonne D. Jones, Director, Strategic Issues, at (202) 512-2717 or jonesy@gao.gov. Individuals making key contributions to this statement include Clifton G. Douglas Jr., Assistant Director; Dewi Djunaidy, Analyst-in-Charge; Joseph Fread; Sara Daleski; and Robert Robinson. Key contributors for the earlier work that supports this testimony are listed in each product.

Human Capital: OPM Needs to Improve the Design, Management, and Oversight of the Federal Classification System (July 2014)

To improve the classification system and to strengthen OPM’s management and oversight, the Director of OPM, working through the Chief Human Capital Officer Council, and in conjunction with key stakeholders such as the Office of Management and Budget, unions, and others, should use prior studies and lessons learned from demonstration projects and alternative systems to examine ways to make the GS system’s design and implementation more consistent with the attributes of a modern, effective classification system. To the extent warranted, develop a legislative proposal for congressional consideration. In July 2014, OPM stated that it partially concurred with our recommendation to work with key stakeholders to use prior studies and lessons learned to examine ways to make the GS system more consistent with the attributes of a modern, effective classification system. OPM agreed that the system needs reform, but OPM noted several efforts to assist agencies with classification issues, including its interagency classification policy forum and partnering with agencies to address challenges related to specific occupational areas. While these examples of assisting agencies to better implement the GS system on a case-by-case basis are helpful, they do not fully address the fundamental challenges facing the GS system, which we and others have said is not meeting the needs of federal agencies. To improve the classification system and to strengthen OPM’s management and oversight, the Director of OPM should develop cost-effective mechanisms to oversee agency implementation of the classification system as required by law, and develop a strategy to systematically track and prioritize updates to occupational standards. In July 2014, OPM stated that it did not concur with our recommendation to develop a strategy to systematically track and prioritize updates to occupational standards. Specifically, OPM noted that occupational standards are updated in response to a systematic, prioritized process informed by working with agencies and other stakeholders and analysis of occupational trends. However, OPM officials were unable to provide us with the documentation of their efforts.
As noted in our report, OPM has not published a review or update of 124 occupations, roughly 30 percent of the total number of occupations on the GS system, since 1990. Further, OPM officials could not provide the near- or long-term prioritization of occupations scheduled for review. As a result, OPM cannot demonstrate whether it is keeping pace with agencies’ needs nor does it have reasonable assurance that it is fulfilling its responsibilities to establish new or revise existing occupational standards based on the highest priorities. We continue to believe that OPM should take action to fully address our recommendation. To improve the classification system and to strengthen OPM’s management and oversight, the Director of OPM should develop cost-effective mechanisms to oversee agency implementation of the classification system as required by law, and develop a strategy that will enable OPM to more effectively and routinely monitor agencies’ implementation of classification standards. In July 2014, OPM stated that it partially concurred with our recommendation to develop a strategy to more effectively and routinely monitor agencies’ implementation of classification standards. OPM stated that it will continue to leverage the classification appeals program to provide interpretative guidance to agencies to assist them in classifying positions. OPM also stated it will direct consistency reviews as appropriate; however, as we noted in the report, OPM does not review agencies’ internal oversight efforts.

Federal Workforce: OPM and Agencies Need to Strengthen Efforts to Identify and Close Mission-Critical Skills Gaps (January 2015)

To assist the interagency working group, known as the Federal Agency Skills Team (FAST), to better identify government-wide skills gaps having programmatic impacts and measure its progress towards closing them, the Director of OPM, in conjunction with the CHCO Council, should strengthen its approach and methodology by (1) assisting FAST in developing goals for closing skills gaps with targets that are both clear and measurable; (2) working with FAST to design outcome-oriented performance metrics that align with overall targets for closing skills gaps and link to the activities for addressing skills gaps; (3) incorporating greater input from subject matter experts, as planned; and (4) ensuring FAST consistently follows key practices for project planning. In January 2015, OPM stated that it partially concurred with our recommendation to strengthen the approach and methodology used by the interagency working group, known as FAST, to better identify skills gaps. OPM noted it agreed with, and planned to implement, the principles of each recommended action. However, OPM said it needed to clarify how its terminology and planned process differ from the description in our recommendation. In particular, OPM stated its process will identify government-wide rather than agency-specific skills gaps, as it believes our draft recommendation suggests. We recognize that FAST was established to address government-wide skills gaps and have clarified the language in our recommendation accordingly.
To ensure that OPM builds the predictive capacity to identify emerging skills gaps across the government, including the ability to collect and use reliable information on the competencies of the federal workforce for government-wide workforce analysis, the Director of OPM should (1) establish a schedule specifying when OPM will modify its Enterprise Human Resources Integration (EHRI) database to capture staffing data that it currently collects from agencies through its annual workforce data reporting process; and (2) work with agency CHCOs to bolster the ability of agencies to assess workforce competencies by sharing competency surveys, lessons learned, and other tools and resources. In January 2015, OPM stated that it did not concur with our recommendation. Regarding EHRI, OPM maintained that it is impossible for the EHRI database to automatically capture staffing data currently included in MCO Resource Charts because some of these data include specific agency projections and targets, which are provided via a manual data feed. OPM stated that it is assessing whether EHRI can be modified to allow agencies to supply these manual feed data into the database system. We have modified our report to recognize that EHRI cannot automatically capture the same agency staffing data that are captured through the MCO Resource Charts. In addition, OPM noted that there are funding implications associated with its ability to anticipate whether and when a modification schedule to the EHRI online database could be established. To help agencies and OPM better monitor progress toward closing skills gaps within agencies and government-wide, the Director of OPM should (1) work with the CHCO Council to develop a core set of metrics that all agencies should use as part of their HRstat data-driven reviews; and (2) coordinate with FAST personnel and explore the feasibility of collecting information needed by FAST as part of agencies’ HRstat reviews. In January 2015, OPM concurred with our recommendation to develop a core set of metrics that all agencies should use as part of their HRstat data-driven reviews, and explore the feasibility of collecting information needed by FAST as part of agencies’ HRstat reviews.

Human Capital: Strategies to Help Agencies Meet Their Missions in an Era of Highly Constrained Resources (May 2014)

To create a more effective human capital system that is more responsive to managing priorities and future workforce needs, the Director of OPM, in conjunction with the CHCO Council, should strengthen OPM’s coordination and leadership of government-wide human capital issues to ensure government-wide initiatives are coordinated, decision makers have all relevant information, and there is greater continuity in the human capital community for key reforms. Such actions could include: (1) developing a government-wide human capital strategic plan that, among other things, would establish strategic priorities, time frames, responsibilities, and metrics to better align the efforts of members of the federal human capital community with government-wide human capital goals and issues; and (2) coordinating communication on government-wide human capital issues with other members of the human capital community so that there is greater consistency, transparency, and completeness in exchanging and using information by stakeholders and decision makers.
In April 2014, OPM provided examples of working groups and other efforts to address issues such as closing skills gaps and developing HRstat, many of which are described in our report. Further, although the CHCO Council agreed that more could be done to coordinate, share resources, and explore talent management strategies, the CHCO Council disagreed with our finding that the human capital community was highly fragmented. Our analysis of the comments made by the CHCO Council found that the human capital community is fragmented and that our recommendation for a government-wide human capital strategic plan could help to coordinate these efforts to ensure initiatives were not duplicative and were aligned with the most pressing human capital challenges. A government-wide strategic plan should include input from the many participants in the human capital community—reflecting the different perspectives, missions, and resources of these organizations. To create a more effective human capital system that is more responsive to managing priorities and future workforce needs, the Director of OPM, in conjunction with the CHCO Council, should explore the feasibility of expanded use of enterprise solutions to more efficiently and effectively address shared or government-wide human capital challenges. Such actions could include: (1) seeking cost savings and improved functionality through coordinated government-wide Human Resources Information Technology planning and acquisition, (2) seeking agency input to ensure OPM’s workforce planning tools provide effective guidance for agencies, and (3) sharing workforce planning lessons learned and successful models across the government. To create a more effective human capital system that is more responsive to managing priorities and future workforce needs, the Director of OPM, in conjunction with the CHCO Council, should review the extent to which new capabilities are needed to promote agile talent management, including (1) capabilities that could inform agency recruitment, retention, and training needs; and (2) mechanisms for increasing staff mobility within an agency and government-wide to assist agencies in aligning their workforces with evolving needs. To create a more effective human capital system that is more responsive to managing priorities and future workforce needs, the Director of OPM, in conjunction with the CHCO Council, should ensure agencies are getting the guidance and tools that they need by evaluating the communication strategy for and effectiveness of relevant tools, guidance, or leading practices created by OPM or the agencies to address crosscutting human capital management challenges. In April 2014, OPM stated that it would expand its collaboration with agencies to design and deliver the tools agencies need through use of the LAB@OPM, OPM’s innovation lab. We previously reported that OPM needs clear and specific outcome measures to help meet its goals of enhancing skills in innovation and supporting project-based problem solving. Otherwise, OPM’s innovation lab efforts may not be able to demonstrate the types of results initially envisioned. It will be important for OPM to understand how the tools and guidance it develops through the innovation lab and other methods are being used by agencies.
Federal Employees: Opportunities Exist to Strengthen Performance Management Pilot (September 2013)

Recognizing that moving toward a more performance-oriented culture within federal agencies is likely to be a continuous effort and to ensure that the opportunity GEAR recommendations offer to improve performance management is not lost, the Acting Director of OPM, in collaboration with the CHCO Council, should define roles and responsibilities of OPM, the CHCO Council, and participating federal agencies going forward as the GEAR framework is implemented government-wide. In doing so, OPM, in collaboration with the CHCO Council, could define roles and responsibilities such as supplementing the GEAR report and updating the diagnostic toolkit as needed to reflect additional promising practices and lessons learned (such as those GAO identified) and guidance on using metrics. This should include considering whether connecting performance expectations to crosscutting goals should be part of the GEAR framework. As of June 2010, the Executive Director of the CHCO Council told us that the implementation of GEAR needs to be a community effort and individual agencies need to take ownership for implementing the parts of the GEAR framework that best suit their needs. The CHCO Council would like to avoid dictating roles and responsibilities to agencies on what to do and how to do it. OPM and CHCO Council officials did not indicate whether they planned to connect performance expectations to cross-cutting goals. To improve agencies’ GEAR implementation plans, the Secretary of the Department of Homeland Security (DHS) should direct the Commandant of the Coast Guard to update the agency’s GEAR implementation plan to include (1) performance measures that permit comparison between desired outcomes and actual results and (2) additional information schedules that are linked to specific actions. As of September 2014, DHS had not provided updates on the status of the Coast Guard’s effort to update its GEAR implementation plan to include (1) performance measures that permit comparison between desired outcomes and actual results or (2) additional information schedules that are linked to specific actions.

Results Oriented Management: OPM Needs to Do More to Ensure Meaningful Distinctions Are Made in SES Ratings and Performance Awards (January 2015)

As OPM convenes the cross-agency working group, the Director of OPM, as the head of the agency that certifies, with OMB concurrence, SES performance appraisal systems, should consider the need for refinements to the performance certification guidelines addressing distinctions in performance and pay differentiation. Options could include (1) revisiting and perhaps eliminating the guideline that allows OPM to certify agencies’ performance management systems with an SES modal rating of “outstanding,” or (2) strengthening the accountability and transparency of this guideline through activities such as reporting agencies’ justifications for high ratings on OPM’s website, reporting agencies’ justifications for high ratings to Congress, and obtaining third-party input on agencies’ justifications for high ratings, such as from the Chief Human Capital Officers Council. In January 2015, OPM generally agreed with the information in our report but did not agree with our recommendation. OPM expressed concerns that imposing such a criterion would lead to arbitrary manipulation of the final ratings rather than an appropriate comparison of performance to standards.
OPM asserted that this situation would be ripe for forced distribution of the ratings, which is explicitly prohibited by regulation. OPM also stated that the more appropriate action is to continue emphasizing the importance of setting appropriate, rigorous performance requirements and standards that logically support meaningful distinctions in performance. As recognized in our report, OPM’s regulations contemplate that it is possible to apply standards that make meaningful performance distinctions and to use a range of ratings while avoiding the use of forced distributions. As we also note, since our 2008 report on SES performance management systems—continuing through the career SES performance ratings for fiscal year 2013—questions persist about the extent to which meaningful distinctions based on relative SES performance are being made. OPM stated that it did not support the second part of our recommendation regarding three suggestions for increasing transparency for those agencies that are certified with a modal rating of “outstanding.” Although we suggested that OPM report high rating justifications to Congress through its Annual Performance Report, we understand that this may not be the most appropriate vehicle to use; another avenue of reporting to Congress would certainly be acceptable, and we adjusted the text accordingly.

Federal Workforce: Improved Supervision and Better Use of Probationary Periods Are Needed to Address Substandard Employee Performance (February 2015)

To help strengthen the ability of agencies to deal with poor performers and to help ensure supervisors obtain the skills needed to effectively conduct performance management responsibilities, the Director of OPM, in conjunction with the CHCO Council and, as appropriate, with key stakeholders such as federal employee labor unions, should assess the adequacy of leadership training that agencies provide to supervisors. In January 2015, OPM said that it concurred with our recommendation. OPM stated it would assess what and how agencies are training new supervisors and provide feedback for improving the curriculum. In addition, OPM stated that it would continue to provide agencies guidance on evaluating the effectiveness of leadership training. OPM partially concurred with our recommendation that it determine whether promising practices, such as using dual career ladder structures that allow employees to advance without taking on supervisory or managerial duties, should be more widely used. In each of these cases, OPM noted that agencies already have authority to take these actions. We acknowledged OPM’s point and clarified the report accordingly. We maintain, however, that OPM can still play a leadership role and encourage agencies to take these steps. We also recommended that the Director of OPM educate agencies on the benefits of using automated notifications to notify supervisors that an individual’s probationary period is ending and that the supervisor needs to make an affirmative decision or otherwise take appropriate action, and encourage its use to the extent it is appropriate and cost-effective for the agency; and determine whether there are occupations in which, because of the nature of work and complexity, the probationary period should extend beyond 1 year to provide supervisors with sufficient time to assess an individual’s performance.
If determined to be warranted, initiate the regulatory process to extend existing probationary periods and, where necessary, develop a legislative proposal for congressional action to ensure that formal procedures for taking action against an employee for poor performance (and a right to appeal such an action) are not afforded until after the completion of any extended probationary period. In January 2015, OPM said that it partially concurred with the part of our recommendation calling on OPM to determine if certain occupations require a probationary period longer than 1 year to allow supervisors sufficient time to assess an individual’s performance. In particular, OPM agreed to consult with stakeholders to determine, among other things, if an extension to the probationary period for certain complex occupations is needed and, if necessary, pursue the established Executive Branch deliberation process for suggesting legislative proposals. OPM noted that it has authority to provide for longer probationary periods under certain circumstances, and we have modified the recommendation so that it also calls on OPM to initiate the regulatory process to do so if warranted. As stated in our report, however, extending the probationary period and concurrently limiting appeal rights during that time would require legislative action under certain circumstances. At the same time, OPM did not concur with the part of our recommendation for OPM to determine the benefits and costs of providing automated notifications to supervisors that an individual’s probationary period is ending and that the supervisor needs to make an affirmative decision. OPM stated that choosing the best method to ensure that supervisors are aware that the probationary period is ending and appeal rights will accrue is an agency responsibility. We agreed. OPM also wrote that HR systems at all Shared Service Centers have the functionality to notify supervisors when an employee’s probationary period is ending. However, as our report notes, even though OPM considers having a tool in place to notify supervisors that a probationary period is ending to be a leading practice, not all agencies have implemented that practice. Accordingly, we clarified the recommendation so that it calls on OPM to educate agencies on the benefits and availability of automated notifications to alert supervisors. To help strengthen the ability of agencies to deal with poor performers, and to help ensure OPM’s tools and guidance for dealing with poor performers are cost-effectively meeting agencies’ and supervisors’ needs, the Director of OPM, in conjunction with the CHCO Council and, as appropriate, with key stakeholders such as federal employee labor unions, should use Strategic Human Capital Management survey results (once available), Federal Employee Viewpoint Survey results, Performance Appraisal Assessment Tool responses, and other existing information, as relevant, to inform decisions on content and distribution methods. The importance of effective performance management and addressing poor performance may need to be reinforced with agency supervisors so that they more routinely seek out tools and guidance. OPM partially concurred with our recommendation to use the results of various surveys such as the FEVS and other information sources to help determine the extent to which its tools and guidance for dealing with poor performers are cost-effectively meeting agencies’ needs.
Specifically, OPM said it would use relevant data from these resources to inform decisions about content and distribution methods for the material OPM makes available to agencies. At the same time, OPM noted that the information contained in these surveys and other data sources had certain limitations and may not always be relevant. We agreed and clarified the recommendation accordingly.

Federal Telework: Program Measurement Continues to Confront Data Reliability Issues (April 2012)

To improve OPM’s annual reporting of telework to Congress, the OPM Director should continue efforts to improve data collection and gather information that allows for the appropriate qualification of year-to-year comparisons and informs users about the effects of data collection changes going forward. As of June 2012, OPM had revised its collection of telework participation data from agencies to include full fiscal year participation data beginning with fiscal year 2012. This action should enable OPM to report year-to-year comparisons of telework participation in its 2014 Status of Telework in the Federal Government Report to Congress. This report is expected to be issued by OPM in late 2014/early 2015. Reporting an accurate year-to-year comparison of telework participation would complete the implementation of this recommendation. To improve federal training investment decision-making processes, the Director of OPM should, in line with statutory and regulatory provisions on maintenance and reporting of training information, work with the CHCO Council to improve the reliability of agency training investment information by: (1) ensuring that agencies are familiar with and follow guidance outlined in OPM’s Guide for the Collection and Management of Training Information regarding which training events should be documented as training and reported to OPM; (2) developing policies to strengthen the utilization of Standard Form (SF) 182 to document and report training costs associated with the different delivery mechanisms employed; (3) encouraging agencies, through guidance and technical assistance, to develop policies that require consistent reporting of training data to their learning management systems; and (4) encouraging each agency to assess its existing training information system(s) and identify whether it is providing complete and reliable data and, if not, to develop approaches to improve the system(s), in order to do so. In February 2015, OPM provided a document that summarized efforts that are underway to address the recommendation. According to the document, during fiscal year 2014, OPM and the Chief Learning Officers (CLO) Council co-chaired a working group to develop proposed standardized data elements/metrics and a data quality scorecard. This task has been folded into the agenda of the OPM-led working groups under the OMB/GSA Category Management Initiative. OPM stated that by September 30, 2015, it expects to develop and approve proposed standardized data elements and metrics and a quality scorecard. In the summer of 2014, OPM administered a survey to Training and Development ListServ members on the utilization of OPM’s Training and Development Wiki on opm.gov. Survey results revealed that over 50 percent of the respondents were not aware of the Wiki.
A plan to revitalize the Wiki in order to provide improved guidance to agencies has been developed, but OPM's Employee Services still needs to determine what funding is available for the product.
To improve federal training investment decision-making processes, the Director of OPM should provide regular report summaries to agencies on Enterprise Human Resources Integration (EHRI) training investment data and its reliability, in order to improve the transparency and reliability of federal training investment data. In February 2015, OPM provided a document that summarized efforts that are underway to address our recommendation. According to the document, OPM stated that by September 30, 2015, it expects to develop and approve proposed standardized data elements and metrics and a quality scorecard. In addition, OPM stated that the agency will provide agencies their training data reports from EHRI for fiscal year 2014 in fiscal year 2015.
To improve federal training investment decision-making processes, the Director of OPM should, once federal training data reliability has been sufficiently improved, consistent with Executive Order No. 11348, use EHRI data to: a) counsel heads of agencies and other agency officials on the improvement of training, and b) assist agencies in developing sound programs and financial plans for training and provide advice, information, and assistance to agencies on planning and budgeting training programs.
Status: OPM expects to begin using EHRI data to assist agencies in developing sound training programs and financial plans for training in late 2015.
To improve federal training investment decision-making processes, the Director of OPM should, in collaboration with the CHCO and Chief Learning Officer (CLO) Councils, identify the best existing courses that fulfill government-wide training requirements, such as mandatory Equal Employment Opportunity training, or training in common federal occupations, such as basic training in financial management, and offer them to all agencies through HR University or another appropriate platform to reduce costly and duplicative federal training investments. In February 2015, OPM officials provided a document that summarized OPM's continuing efforts to address our recommendation. According to the document, OPM is designing and building a Government-wide University prototype known as Gov U. Through Gov U, all federal employees will be able to access accredited training and education through a centralized portal that links them to federally mandated training, occupational and management/leadership courses, and degree programs, so the government can reduce costs, increase quality, and ensure access for all employees. According to OPM, the CLO Council's Mandatory Training Working Group drafted the government-wide mandatory training curriculum and also met to discuss the process for selecting federally mandated training courses to share across agencies in different modalities. The working group is currently developing the Domestic Violence, Sexual Assault, and Stalking training. The course structure and interface have been designed, and the course storyboarding is expected to be completed by the end of April 2015.
Federal Employees: Office of Personnel Management's 2012 Telework Report Shows Opportunities for Improvement (June 2013)
In preparation for the 2014 telework report, OPM should provide goal-setting assistance for agencies not yet able to report telework goals, including agencies that intend to establish nonparticipation goals but are not yet able to report on these goals.
OPM should request in its data call that each of these agencies report by what year the agency will be able to report its goals, including each agency's timetable for complete reporting and the status of action steps and milestones they established to gauge progress. While OPM has taken several actions to implement our recommendation, it is premature to assess the results of these efforts. OPM has conducted several training sessions with agencies and added an appendix to its 2013 data call to assist agencies in establishing standards for setting and evaluating telework goals. Our analysis of OPM's 2012 telework report did not indicate that a high number of agencies had set numeric goals, calling into question the value of OPM's techniques to assist agencies in setting goals. Since the time of our report, the evidence OPM has provided continues to emphasize training techniques similar to those it has traditionally used, with no evidence that they have yielded improvements. We will review the status of this recommendation when OPM releases its 2014 telework report.
OPM should include in its 2014 report to Congress the amount of cost savings resulting from the impacts of telework each agency may have identified, and the method the agency used to assess or verify the savings. OPM added questions to its 2013 telework data call to gather the amount of cost savings and the method the agency used to assess the savings. When OPM issues its 2014 telework report to Congress in 2015, we will assess the extent to which OPM has identified cost savings and how agencies assess or verify the savings.
To improve the reliability of data collection, OPM should work with the Chief Human Capital Officers (CHCO) Council and its leadership to develop documented agreements and a timetable to complete an automated tracking system or other reliable data gathering method that can be validated by OPM.
Status: OPM has not provided evidence of (1) documented agreements with agencies, payroll providers, or the CHCO Council, or (2) a timetable to complete an automated tracking system or other reliable data gathering methods that can be validated by OPM. We followed up with OPM in August and September 2014, and OPM confirmed there was no new information to report.
Human Capital: Agencies Should More Fully Evaluate the Costs and Benefits of Executive Training (January 2014)
To help ensure that agencies track and report comparable and reliable cost data and perform evaluations that assess the impact of executive training on agency performance or missions, the Director of OPM, in coordination with the CHCO Council, should establish interim milestones for meeting with agencies in order to address training data deficiencies and to establish well-defined time frames for improving the reliability of the data in its Enterprise Human Resources Integration database. In May 2014, OPM outlined its action plan to address our recommendation. According to OPM, the agency will work with agencies via the Chief Human Capital Officers Council and Chief Learning Officers Council to poll agencies to establish an "as is" state of training data reliability and deficiencies. Based on evidence gathered, OPM plans to develop proposed standardized data elements, metrics, and a data quality scorecard. Once both Councils approve the proposal, OPM plans to make changes to training data elements in the Enterprise Human Resource Integration data warehouse and Guide to Human Resources Reporting. OPM also has plans to monitor agency progress for improving data.
To help ensure that agencies track and report comparable and reliable cost data and perform evaluations that assess the impact of executive training on agency performance or missions, the Director of OPM, in coordination with the CHCO Council, should improve assistance to agencies regarding evaluating the impact of executive training on mission and goals, for example by sharing information and examples of how agencies could better conduct such evaluations. In May 2014, OPM outlined its action plan to address our recommendation. According to OPM, the agency will work through the Chief Learning Officers Council to encourage agencies to incorporate training evaluation in their executive training in a more robust way. OPM plans to use OPM-hosted roundtables and best practice sessions to provide agencies assistance on evaluating the impact of executive training. OPM will encourage agencies to adopt an evaluation approach that considers individual agency management practices while being consistent with OPM's Training Evaluation Field Guide.
To enhance the efficiency of executive training, the Director of OPM, in coordination with the CHCO Council, should assess potential efficiencies identified by agencies for possible government-wide implementation, and then take the steps necessary to implement these, such as updating the guidance governing executive training programs. In May 2014, OPM outlined its action plan to address our recommendation. According to OPM, the agency plans to survey agencies about the components and effectiveness of their executive onboarding programs and use the information, and other current research, to offer Government-wide Best Practice sessions. OPM will also use the results to update guidance governing executive onboarding programs and the Federal Leadership Development Program website.
Recommendation: Performance targets and measures should correspond to the lab's overarching goals to build organizational capacity to innovate and achieve specific innovations in concrete operational challenges.
Status: The lab's suite of measures also includes outcome-related measures, including the amount of estimated tax dollars saved as a result of lab activities and the satisfaction levels of participants in lab activities. OPM indicated that draft targets for these measures were in review by the agency and that lab officials had also developed a retrospective document on the lab that highlights key projects and the results of those projects.
To help substantiate the lab's original goals of enhancing skills in innovation and supporting project-based problem solving, the Director of OPM should direct lab staff to review and refine the set of survey instruments to ensure that, taken as a whole, they will yield data of sufficient credibility and relevance to indicate the nature and extent to which the lab is achieving what it intends to accomplish or is demonstrating its value to those who use the lab space. For example, lab staff should consider the following actions: (1) Developing a standard set of questions across all service offerings. (2) Revising the format and wording of existing questions related to skills development to diminish the likelihood of social desirability bias and use post-session questions that ask, in a straightforward way, about whether, or the extent to which, new information was acquired.
(3) Replacing words or phrases that are ambiguous or vague with defined or relevant terminology (e.g., terms actually used in the session) so that the respondent can easily recognize a link between what is being asked and the content of the session. As of March 2015, OPM had revised its survey instruments to include standard, understandable surveys for (1) those receiving coaching skills in human-centered design, (2) Lab Fellows who will use human-centered design techniques in their home agencies, and (3) human-centered design workshops. The surveys aim to measure participants' satisfaction with sessions in the lab, as well as anticipated return on investment and other job-related improvements from work conducted in OPM's innovation lab.
To help substantiate the lab's original goals of enhancing skills in innovation and supporting project-based problem solving, the Director of OPM should direct lab staff to build on existing efforts to share information and knowledge within the federal innovation community. For example, OPM lab staff could reach out to other agencies with labs such as Census, the Department of Housing and Urban Development, and the National Aeronautics and Space Administration's Kennedy Space Center to share best practices and develop a credible evaluation framework. In March 2015, OPM stated that it participates regularly in communities of practice with government innovation professionals and experts in human-centered design. OPM also stated that its lab staff members make presentations about OPM's lab and innovation practices to multiple audiences, including members of the federal innovation community. For example, in December 2014 the director of OPM's lab provided subject matter expertise to the Department of Health and Human Services to help ignite its innovation program curriculum. In addition, OPM noted that it engages regularly with its innovation lab workshop alumni to support their efforts to bring design-led innovation to their agencies.
Recommendation: The Director of OPM should consider other approaches to developing its cost estimate.
Status: OPM stated that it is considering whether to continue using its current methodology. OPM stated that its cost estimates have been based on (1) official time and average salary data provided to OPM through EHRI; (2) official time data manually provided directly to OPM by certain agencies; and (3) official time data manually updated by a number of agencies. OPM said that the approach we used in the report, linking official time hours taken by specific individuals to those individuals' actual salaries, is not possible using EHRI in all instances and is a labor-intensive, and thus more costly, process to undertake for the entire executive branch. The methodology we used was intended as an example of an alternative method for producing a cost estimate. OPM reported in October 2014 that 52 of the 62 agencies that reported fiscal year 2012 official time data to OPM did so using EHRI; thus, OPM would be able to link official time hours used by specific individuals to the actual salaries for the overwhelming majority of reporting agencies. Although our approach may be slightly more labor-intensive, it provides greater assurance that the cost reported is more representative of actual cost and, ultimately, more useful for oversight purposes.
To help ensure that OPM and agencies collect, track, and report reliable data on the use of official time, the Director of OPM should work with agencies to identify opportunities to increase efficiency of data collection and reporting through EHRI.
To help ensure that OPM and agencies collect, track, and report reliable data on the use of official time, the Director of OPM should consider whether it would be useful to share agencies' practices on monitoring use of official time through existing forums such as the Employee Labor Relations (ELR) network.
Status: OPM stated that it would strengthen its assistance to agencies by sharing techniques and approaches on monitoring official time in a collaborative manner through its membership in the ELR network.
Federal Paid Administrative Leave: Additional Guidance Needed to Improve OPM Data (October 2014)
To help ensure that agencies report comparable and reliable data to Enterprise Human Resources Integration (EHRI), the Director of OPM, in coordination with agencies and payroll service providers, should (1) develop guidance for agencies on which activities to enter, or not enter, as paid administrative leave in agency time and attendance systems, and (2) provide updated and specific guidance to payroll service providers on which activities to report, or not report, to the paid administrative leave data element in EHRI. In October 2014, OPM partially agreed with our recommendation. OPM agreed that (1) some reporting requirements should be clarified, in particular, guidance regarding reporting holiday time; (2) it would clarify that the paid administrative leave category is a catch-all category for paid leave that does not fall into another EHRI category; and (3) it would collaborate with agencies and payroll providers in developing changes in guidance and EHRI payroll data elements. OPM said that its role does not include directing guidance to agencies on how to collect time and attendance data, but it does include issuing guidance on EHRI data requirements that agency systems should support. We believe that in directing EHRI data requirements to all responsible agency officials and payroll providers, OPM can provide such guidance to agencies. We continue to believe our recommendation is valid because we found that payroll providers were reporting time for activities as paid administrative leave that they should not, according to OPM.
Human Capital: OPM Needs to Better Analyze and Manage Dual Compensation Waiver Data (December 2014)
To improve OPM's assistance to agencies and management of its dual compensation waiver program, the Director of OPM should analyze dual compensation waivers to identify trends that can inform OPM's human capital management tools.
Status: OPM stated that it would analyze waivers and identify trends that could improve its other tools.
To improve OPM's assistance to agencies and management of its dual compensation waiver program, the Director of OPM should establish policies and procedures for documenting the dual compensation waiver review process. In December 2014, OPM stated that it partially concurred with our recommendation to establish policies and procedures for documenting the dual compensation waiver review process. OPM noted that it has policies and procedures for adjudicating waivers and that it is in compliance with the National Archives and Records Administration policies. However, OPM was unable to provide evidence of any such policies and procedures.
In fact, OPM could not demonstrate adherence to federal internal control standards, which state that agencies should clearly document significant transactions and events and that the documentation should be readily available for examination. Further, while OPM was ultimately able to produce 16 waiver decision letters, it was unable to provide a single complete agency waiver application along with the supporting documentation and corresponding OPM decision letter. OPM also could not identify the total number of waivers for any given time period, meaning that even if OPM individually reviewed the thousands of documents in its document management system, it would not know if all materials were maintained appropriately. We continue to believe that OPM should take action to fully address this recommendation and comply with federal internal control standards.
Strategic human capital management plays a critical role in maximizing the government's performance and assuring its accountability to Congress and to the nation as a whole. GAO designated strategic human capital management as a government-wide, high-risk area in 2001. Since then, important progress has been made. However, retirements and the potential loss of leadership and institutional knowledge, coupled with fiscal pressures, underscore the importance of a strategic and efficient approach to acquiring and retaining individuals with needed critical skills. As a result, strategic human capital management remains a high-risk area. This testimony is based on a large body of GAO work issued from January 2014 through February 2015 and ongoing work related to employee engagement. This testimony, among other things, focuses on key human capital areas where some actions have been taken but attention is still needed by OPM and federal agencies on issues such as: (1) the GS classification system; (2) mission-critical skills gaps; (3) performance management; and (4) employee engagement. Serious human capital shortfalls can erode the capacity of federal agencies and threaten their ability to cost-effectively carry out their missions. GAO's prior work has shown that continued attention is needed to ensure agencies have the human resources to drive performance and achieve the results the nation demands. Key areas where the federal government has taken some actions but additional attention is still needed include the following:
General Schedule (GS) Classification System: In 2014, GAO identified eight key attributes of a modern, effective classification system, such as flexibility, transparency, and simplicity. The GS system's design reflects some of these eight attributes, but when the Office of Personnel Management (OPM) implemented the system, the attributes of transparency, internal equity, simplicity, flexibility, and adaptability were reduced. This occurred, in part, because some attributes are at odds with others, so fully achieving one comes at the expense of another. GAO recommended, and OPM partially concurred with, the need to examine ways to make the GS system consistent with the eight attributes of an effective classification system.
Mission-Critical Skills Gaps: The challenges that agencies face were not fully captured by the Chief Human Capital Officers Council Working Group's efforts that identified skills gaps in six government-wide, mission-critical occupations. In 2015, GAO identified skills gaps in nearly two dozen occupations with significant program implementation impacts. As a result, GAO recommended OPM take a number of steps to address this issue. OPM concurred and in response has established an interagency working group, which is expected to identify a new set of government-wide skills gaps by June 2015.
Improving Performance Management: OPM makes a range of tools and guidance available to help agencies address poor performance. In 2015, GAO concluded that improved supervision and better use of probationary periods are needed to address substandard employee performance. In response, OPM agreed to consult with stakeholders regarding the need for longer probationary periods for some complex positions. In 2015, GAO also found that OPM needed to do more to ensure meaningful distinctions are made in senior executive ratings and performance awards. OPM disagreed with the recommendation. GAO maintains that additional action should be considered to ensure equity in ratings and performance awards across departments.
Strengthening Employee Engagement: GAO's ongoing work indicates that the recent government-wide decline in engagement, as measured by OPM's Employee Engagement Index, masks the fact that the majority of agencies either have sustained or increased their employee engagement levels. Government-wide, engagement has declined 4 percentage points from an estimated 67 percent in 2011 to an estimated 63 percent in 2014. However, this decline is primarily attributable to 13 agencies where employee engagement declined from 2013 to 2014. In contrast, 31 of 47 agencies have sustained and 3 agencies have increased their employee engagement levels from 2013 to 2014.
Over the years, GAO has made numerous recommendations to agencies and OPM to improve their strategic human capital management efforts. While OPM and the agencies have implemented some of GAO's recommendations, actions are still needed on others to continue to make progress in these areas in the future.
Since our last high-risk update, while progress has varied, many of the 32 high-risk areas on our 2015 list have shown solid progress. One area related to sharing and managing terrorism-related information is now being removed from the list. Agencies can show progress by addressing our five criteria for removal from the list: leadership commitment, capacity, action plan, monitoring, and demonstrated progress. As shown in table 1, 23 high-risk areas, or two-thirds of all the areas, have met or partially met all five criteria for removal from our High-Risk List; 15 of these areas fully met at least one criterion. Compared with our last assessment, 11 high-risk areas showed progress in one or more of the five criteria. Two areas declined since 2015. These changes are indicated by the up and down arrows in table 1. Of the 11 high-risk areas showing progress between 2015 and 2017, sufficient progress was made in 1 area—Establishing Effective Mechanisms for Sharing and Managing Terrorism-Related Information to Protect the Homeland—to be removed from the list. In two other areas, enough progress was made that we removed a segment of the high-risk area—Mitigating Gaps in Weather Satellite Data and Department of Defense (DOD) Supply Chain Management. The other eight areas improved in at least one criterion rating by either moving from “not met” to “partially met” or from “partially met” to “met.” We removed the area of Establishing Effective Mechanisms for Sharing and Managing Terrorism-Related Information to Protect the Homeland from the High-Risk List because the Program Manager for the Information Sharing Environment (ISE) and key departments and agencies have made significant progress to strengthen how intelligence on terrorism, homeland security, and law enforcement, as well as other information (collectively referred to in this section as terrorism-related information), is shared among federal, state, local, tribal, international, and private sector partners. As a result, the Program Manager and key stakeholders have met all five criteria for addressing our high-risk designation, and we are removing this issue from our High-Risk List. While this progress is commendable, it does not mean the government has eliminated all risk associated with sharing terrorism-related information. It remains imperative that the Program Manager and key departments and agencies continue their efforts to advance and sustain ISE. Continued oversight and attention is also warranted given the issue’s direct relevance to homeland security as well as the constant evolution of terrorist threats and changing technology. The Program Manager, the individual responsible for planning, overseeing, and managing ISE, along with the key departments and agencies—the Departments of Homeland Security (DHS), Justice (DOJ), State (State), and Defense (DOD), and the Office of the Director of National Intelligence (ODNI)—are critical to implementing and sustaining ISE. Following the terrorist attacks of 2001, Congress and the executive branch took numerous actions aimed explicitly at establishing a range of new measures to strengthen the nation’s ability to identify, detect, and deter terrorism-related activities. For example, ISE was established in accordance with the Intelligence Reform and Terrorism Prevention Act of 2004 (Intelligence Reform Act) to facilitate the sharing of terrorism-related information. 
Figure 1 depicts the relationship between the various stakeholders and disciplines involved with the sharing and safeguarding of terrorism-related information through ISE. The Program Manager and key departments and agencies met the leadership commitment and capacity criteria in 2015, and have subsequently sustained efforts in both these areas. For example, the Program Manager clearly articulated a vision for ISE that reflects the government's terrorism-related information sharing priorities. Key departments and agencies also continued to allocate resources to operations that improve information sharing, including developing better technical capabilities. The Program Manager and key departments and agencies also developed, generally agreed upon, and executed the 2013 Strategic Implementation Plan (Implementation Plan), which includes the overall strategy and more specific planning steps to achieve ISE. Further, they have demonstrated that various information-sharing initiatives are being used across multiple agencies as well as state, local, and private-sector stakeholders. For example, the Program Manager has developed a comprehensive framework for managing enterprise architecture to help share and integrate terrorism-related information among multiple stakeholders in ISE. Specifically, the Project Interoperability initiative includes technical resources and other guidance that promote greater information system compatibility and performance. Furthermore, the key departments and agencies have applied the concepts of the Project Interoperability initiative to improve mission operations by better linking different law enforcement databases and facilitating better geospatial analysis, among other things. In addition, the Program Manager and key departments and agencies have continued to devise and implement ways to measure the effect of ISE on information sharing to address terrorist and other threats to the homeland. They developed performance metrics for specific information-sharing initiatives (e.g., fusion centers) used by various stakeholders to receive and share information. The Program Manager and key departments and agencies have also documented mission-specific accomplishments (e.g., related to maritime domain awareness) where the Program Manager helped connect previously incompatible information systems. The Program Manager has also partnered with DHS to create an Information Sharing Measure Development Pilot that is intended to better measure the effectiveness of information sharing across all levels of ISE. Further, the Program Manager and key departments and agencies have used the Implementation Plan to track progress, address challenges, and substantially achieve the objectives in the National Strategy for Information Sharing and Safeguarding. The Implementation Plan contains 16 priority objectives, and by the end of fiscal year 2016, 13 of the 16 priority objectives were completed. The Program Manager transferred the remaining three objectives, which were all underway, to other entities with the appropriate technical expertise to continue implementation through fiscal year 2019. In our 2013 high-risk update, we listed nine action items that were critical for moving ISE forward. In that report, we determined that two of those action items—demonstrating that the leadership structure has the needed authority to leverage participating departments, and updating the vision for ISE—had been completed.
In our 2015 update, we determined that the Program Manager and key departments had achieved four of the seven remaining action items—demonstrating that departments are defining incremental costs and funding; continuing to identify technological capabilities and services that can be shared collaboratively; demonstrating that initiatives within individual departments are, or will be, leveraged to benefit all stakeholders; and demonstrating that stakeholders generally agree with the strategy, plans, time frames, responsibilities, and activities for substantially achieving ISE. For the 2017 update, we determined that the remaining three action items have been completed: establishing an enterprise architecture management capability; demonstrating that the federal government can show, or is more fully developing a set of metrics to measure, the extent to which sharing has improved under ISE; and demonstrating that established milestones and time frames are being used as baselines to track and monitor progress. Achieving all nine action items has, in effect, addressed our high-risk criteria. While this demonstrates significant and important progress, sharing terrorism-related information remains a constantly evolving work in progress that requires continued effort and attention from the Program Manager, departments, and agencies. Although no longer a high-risk issue, sharing terrorism-related information remains an area with some risk, and continues to be vitally important to homeland security, requiring ongoing oversight as well as continuous improvement to identify and respond to changing threats and technology. Table 2 summarizes the Program Manager’s and key departments’ and agencies’ progress in achieving the action items. As we have with areas previously removed from the High-Risk List, we will continue to monitor this area, as appropriate, to ensure that the improvements we have noted are sustained. If significant problems again arise, we will consider reapplying the high-risk designation. Additional Information on Establishing Effective Mechanisms for Sharing and Managing Terrorism-Related Information to Protect the Homeland is provided on page 653 of the report. In the 2 years since our last high-risk update, sufficient progress has been made in two areas—DOD Supply Chain Management and Mitigating Gaps in Weather Satellite Data—that we are narrowing their scope. DOD manages about 4.9 million secondary inventory items, such as spare parts, with a reported value of approximately $91 billion as of September 2015. Since 1990, DOD’s inventory management has been included on our High-Risk List due to the accumulation of excess inventory and weaknesses in demand forecasting for spare parts. In addition to DOD’s inventory management, the supply chain management high-risk area focuses on materiel distribution and asset visibility within DOD. Based on DOD’s leadership commitment and demonstrated progress to address weaknesses since 2010, we are removing the inventory management component from the supply chain management high-risk area. Specifically, DOD has taken the following actions: Implemented a congressionally mandated inventory management corrective action plan and institutionalized a performance management framework, including regular performance reviews and standardized metrics. DOD has also developed and begun implementing a follow-on improvement plan. 
Reduced the percentage and value of its “on-order excess inventory” (i.e., items already purchased that may be excess due to subsequent changes in requirements) and “on-hand excess inventory” (i.e., items categorized for potential reuse or disposal). DOD’s data show that the proportion of on-order excess inventory to the total amount of on- order inventory decreased from 9.5 percent at the end of fiscal year 2009 to 7 percent at the end of fiscal year 2015, the most recent fiscal year for which data are available. During these years, the value of on- order excess inventory also decreased from $1.3 billion to $701 million. DOD’s data show that the proportion of on-hand excess inventory to the total amount of on-hand inventory dropped from 9.4 percent at the end of fiscal year 2009 to 7.3 percent at the end of fiscal year 2015. The value of on-hand excess inventory also decreased during these years from $8.8 billion to $6.8 billion. Implemented numerous actions to improve demand forecasting and began tracking department-wide forecasting accuracy metrics in 2013, resulting in forecast accuracy improving from 46.7 percent in fiscal year 2013 to 57.4 percent in fiscal year 2015, the latest fiscal year for which complete data are available. Implemented 42 of our recommendations since 2006 and is taking actions to implement an additional 13 recommendations, which are focused generally on reassessing inventory goals, improving collaborative forecasting, and making changes to information technology (IT) systems used to manage inventory. Additional information on DOD Supply Chain Management is provided on page 248 of the report. Mitigating Gaps in Weather Satellite Data The United States relies on two complementary types of satellite systems for weather observations and forecasts: (1) polar-orbiting satellites that provide a global perspective every morning and afternoon, and (2) geostationary satellites that maintain a fixed view of the United States. Both types of systems are critical to weather forecasters, climatologists, and the military, who map and monitor changes in weather, climate, the oceans, and the environment. Federal agencies are planning or executing major satellite acquisition programs to replace existing polar and geostationary satellite systems that are nearing or beyond the end of their expected life spans. The Department of Commerce’s National Oceanic and Atmospheric Administration (NOAA) is responsible for the polar satellite program that crosses the equator in the afternoon and for the nation’s geostationary weather satellite program; DOD is responsible for the polar satellite program that crosses the equator in the early morning orbit. Over the last several years, we have reported on the potential for a gap in satellite data between the time that the current satellites are expected to reach the end of their lifespans and the time when the next satellites are expected to be in orbit and operational. We added this area to our High- Risk List in 2013. According to NOAA program officials, a satellite data gap would result in less accurate and timely weather forecasts and warnings of extreme events—such as hurricanes, storm surges, and floods. Such degraded forecasts and warnings would endanger lives, property, and our nation’s critical infrastructures. Similarly, according to DOD officials, a gap in space-based weather monitoring capabilities could affect the planning, execution, and sustainment of U.S. military operations around the world. 
In our prior high-risk updates, we reported on NOAA’s efforts to mitigate the risk of a gap in its polar and geostationary satellite programs. With strong congressional support and oversight, NOAA has made significant progress in its efforts to mitigate the potential for gaps in weather satellite data on its geostationary weather satellite program. Specifically, the agency demonstrated strong leadership commitment to mitigating potential gaps in geostationary satellite data by revising and improving its gap mitigation/contingency plans. Previously, in December 2014, we reported on shortfalls in the satellite program’s gap mitigation/contingency plans and made recommendations to NOAA to address these shortfalls. For example, we noted that the plan did not sufficiently address strategies for preventing a launch delay, timelines and triggers to prevent a launch delay, and whether any of its mitigation strategies would meet minimum performance levels. NOAA agreed with these recommendations and released a new version of its geostationary satellite contingency plan in February 2015 that addressed the recommendations, thereby meeting the criterion for having an action plan. We rated capacity as partially met in our 2015 report due to concerns about NOAA’s ability to complete critical testing activities because it was already conducting testing on a round-the-clock, accelerated schedule. Since then, NOAA adjusted its launch schedule to allow time to complete critical integration and testing activities. In doing so, the agency demonstrated that it met the capacity criterion. NOAA has also met the criterion for demonstrating progress by mitigating schedule risks and successfully launching the satellite. In September 2013, we reported that the agency had weaknesses in its schedule- management practices on its core ground system and spacecraft. We made recommendations to address those weaknesses, which included sequencing all activities, ensuring there are adequate resources for the activities, and analyzing schedule risks. NOAA agreed with the recommendations and the Geostationary Operational Environmental Satellite-R series (GOES-R) program improved its schedule management practices. By early 2016, the program had improved the links between remaining activities on the spacecraft schedule, included needed schedule logic for a greater number of activities on the ground schedule, and included indications on the ground schedule that the results of a schedule risk analysis were used in calculating its durations. In addition, the program successfully launched the GOES-R satellite in November 2016. Oversight by Congress has been instrumental in reducing the risk of geostationary weather satellite gaps. For example, Subcommittees of the House Science, Space, and Technology Committee held multiple hearings to provide oversight of the satellite acquisition and the risk of gaps in satellite coverage. As a result, the agency now has a robust constellation of operational and backup satellites in orbit and has made significant progress in addressing the risk of a gap in geostationary data coverage. Accordingly, there is sufficient progress to remove this segment from the high-risk area. Additional information on Mitigating Gaps in Weather Satellite Data is provided on pages 19 and 430 of the high-risk report. Below are selected examples of areas where progress has been made. Strengthening Department of Homeland Security Management Functions. 
The Department of Homeland Security (DHS) continues to strengthen and integrate its management functions and progressed from partially met to met for the monitoring criterion. Since our 2015 high-risk update, DHS has strengthened its monitoring efforts for financial system modernization programs by entering into a contract for independent verification and validation services to help ensure that the modernization projects meet key requirements. These programs are key to effectively supporting the department’s financial management operations. Additionally, DHS continued to meet the criteria for leadership commitment and a corrective action plan. DHS’s top leadership has demonstrated exemplary support and a continued focus on addressing the department’s management challenges by, among other things, issuing 10 updated versions of DHS’s initial January 2011 Integrated Strategy for High Risk Management. The National Defense Authorization Act for Fiscal Year 2017 reinforces this focus with the inclusion of a mandate that the DHS Under Secretary for Management report to us every 6 months to demonstrate measurable, sustainable progress made in implementing DHS’s corrective action plans to address the high-risk area until we submit written notification of the area’s removal from the High-Risk List to the appropriate congressional committees. Similar provisions were included in the DHS Headquarters Reform and Improvement Act of 2015, the DHS Accountability Act of 2016, and the DHS Reform and Improvement Act. Additional information on this high-risk area is provided on page 354 of the report. Strategic Human Capital Management. This area progressed from partially met to met on leadership commitment. The Office of Personnel Management (OPM), agencies, and Congress have taken actions to improve efforts to address mission critical skills gaps. Specifically, OPM has demonstrated leadership commitment by publishing revisions to its human capital regulations in December 2016 that require agencies to, among other things, implement human capital policies and programs that address and monitor government- wide and agency-specific skills gaps. This initiative has increased the likelihood that skills gaps with the greatest operational effect will be addressed in future efforts. At the same time, Congress has provided agencies with authorities and flexibilities to manage the federal workforce and make the federal government a more accountable employer. For example, Congress included a provision in the National Defense Authorization Act for Fiscal Year 2016 to extend the probationary period for newly-hired civilian DOD employees from 1 to 2 years. This action is consistent with our 2015 reporting that better use of probationary periods gives agencies the ability to ensure an employee’s skills are a good fit for all critical areas of a particular job. Additional information on this high-risk area is provided on page 61 of the report. Transforming the Environmental Protection Agency’s Processes for Assessing and Controlling Toxic Chemicals. Overall, this high- risk area progressed from not met to partially met on two criteria— capacity and demonstrated progress—and continued to partially meet the criterion for monitoring due to progress in one program area. The Environmental Protection Agency’s (EPA) ability to effectively implement its mission of protecting public health and the environment is critically dependent on assessing the risks posed by chemicals in a credible and timely manner. 
EPA assesses these risks under a variety of actions, including the Integrated Risk Information System (IRIS) program and EPA’s Toxic Substances Control Act (TSCA) program. The IRIS program has made some progress on the capacity, monitoring, and demonstrated progress criteria. In terms of IRIS capacity, EPA has partially met this criterion by finalizing a Multi-Year Agenda to better assess how many people and resources should be dedicated to the IRIS program. In terms of IRIS monitoring, EPA has met this criterion in part by using a Chemical Assessment Advisory Committee to review IRIS assessments, among other actions. In terms of IRIS demonstrated progress, EPA has partially met this criterion as of January 2017 by issuing five assessments since fiscal year 2015. The Frank R. Lautenberg Chemical Safety for the 21st Century Act amended TSCA and was enacted on June 22, 2016. Passing TSCA reform may facilitate EPA’s effort to improve its processes for assessing and controlling toxic chemicals in the years ahead. The new law provides EPA with greater authority and the ability to take actions that could help EPA implement its mission of protecting human health and the environment. EPA officials stated that the agency is better positioned to take action to require chemical companies to report chemical toxicity and exposure data. Officials also stated that the new law gives the agency additional authorities, including the authority to require companies to develop new information relating to a chemical as necessary for prioritization and risk evaluation. Using both new and previously existing TSCA authorities should enhance the agency’s ability to gather new information as necessary to evaluate hazard and exposure risks. Continued leadership commitment from EPA officials and Congress will be needed to fully implement reforms. Additional work will also be needed to issue a workload analysis to demonstrate capacity, complete a corrective action plan, and demonstrate progress implementing the new legislation. Additional information on this high-risk area is provided on page 417 of the report. Managing Federal Real Property. The federal government continued to meet the criteria for leadership commitment, now partially meets the criterion for demonstrated progress, and made some progress in each of the other high-risk criteria. The Office of Management and Budget (OMB) issued the National Strategy for the Efficient Use of Real Property (National Strategy) on March 25, 2015, which directs Chief Financial Officer (CFO) Act agencies to take actions to reduce the size of the federal real property portfolio, as we recommended in 2012. In addition, in December 2016, two real property reform bills were enacted that could address the long-standing problem of federal excess and underutilized property. The Federal Assets Sale and Transfer Act of 2016 may help address stakeholder influence by establishing an independent board to identify and recommend five high-value civilian federal buildings for disposal within 180 days after the board members are appointed, as well as develop recommendations to dispose and redevelop federal civilian real properties. Additionally, the Federal Property Management Reform Act of 2016 codified the Federal Real Property Council (FRPC) for the purpose of ensuring efficient and effective real property management while reducing costs to the federal government. 
FRPC is required to establish a real property management plan template, which must include performance measures, and strategies and government-wide goals to reduce surplus property or to achieve better utilization of underutilized property. In addition, federal agencies are required to annually provide FRPC a report on all excess and underutilized property, and identify leased space that is not fully used or occupied. In addressing our 2016 recommendation to improve the reliability of real property data, GSA conducted an in-depth survey that focused on key real property data elements maintained in the Federal Real Property Profile, formed a working group of CFO Act agencies to analyze the survey results and reach consensus on reforms, and issued a memorandum to CFO Act agencies designed to improve the consistency and quality of real property data. The Federal Protective Service, which protects about 9,500 federal facilities, implemented our recommendation aimed at improving physical security by issuing a plan that identifies goals and describes resources that support its risk management approach. In addition, the Interagency Security Committee, a DHS-chaired organization, issued new guidance intended to make the most effective use of physical security resources. Additional information on this high-risk area is provided on page 77 of the report. Enforcement of Tax Laws. The Internal Revenue Service’s (IRS) continued efforts to enforce tax laws and address identity theft refund fraud (IDT) have resulted in the agency meeting one criterion for removal from the High-Risk List (leadership commitment) and partially meeting the remaining four criteria (capacity, action plan, monitoring, and demonstrating progress). IDT is a persistent and evolving threat that burdens legitimate taxpayers who are victims of the crime. It cost the U.S. Treasury an estimated minimum of $2.2 billion during the 2015 tax year. Congress and IRS have taken steps to address this challenge. IRS has deployed new tools and increased resources dedicated to identifying and combating IDT refund fraud. In addition, the Consolidated Appropriations Act, 2016, amended the tax code to accelerate Wage and Tax Statement (W-2) filing deadlines to January 31. We had previously reported that the wage information that employers report on Form W-2 was not available to IRS until after it issues most refunds. With earlier access to W-2 wage data, IRS could match such information to taxpayers’ returns and identify discrepancies before issuing billions of dollars of fraudulent IDT refunds. Such matching could also provide potential benefits for other IRS enforcement programs, such as preventing improper payments via the Earned Income Tax Credit. Additional information on this high- risk area is provided on page 500 of the report. In addition to being instrumental in supporting progress in individual high- risk areas, Congress also has taken actions to enact various statutes that, if implemented effectively, will help foster progress on high-risk issues government-wide. These include the following: Program Management Improvement Accountability Act: Enacted in December 2016, the act seeks to improve program and project management in federal agencies. Among other things, the act requires the Deputy Director of the Office of Management and Budget (OMB) to adopt and oversee implementation of government-wide standards, policies, and guidelines for program and project management in executive agencies. 
The act also requires the Deputy Director to conduct portfolio reviews to address programs on our High-Risk List. It further creates a Program Management Policy Council to act as an interagency forum for improving practices related to program and project management. The Council is to review programs on the High-Risk List and make recommendations to the Deputy Director or designee. We are to review the effectiveness of key efforts under the act to improve federal program management. Fraud Reduction and Data Analytics Act of 2015 (FRDA): FRDA, enacted in June 2016, is intended to strengthen federal anti-fraud controls, while also addressing improper payments. FRDA requires OMB to use our Fraud Risk Framework to create guidelines for federal agencies to identify and assess fraud risks, and then design and implement control activities to prevent, detect, and respond to fraud. Agencies, as part of their annual financial reports beginning in fiscal year 2017, are further required to report on their fraud risks and their implementation of fraud reduction strategies, which should help Congress monitor agencies' progress in addressing and reducing fraud risks. To aid federal agencies in better analyzing fraud risks, FRDA requires OMB to establish a working group tasked with developing a plan for the creation of an interagency library of data analytics and data sets to facilitate the detection of fraud and the recovery of improper payments. This working group and the library should help agencies to coordinate their fraud detection efforts and improve their ability to use data analytics to monitor databases for potential improper payments. The billions of dollars of improper payments are a central part of the Medicare Program, Medicaid Program, and Enforcement of Tax Laws (Earned Income Tax Credit) high-risk areas. IT Acquisition Reform Legislation, known as the Federal Information Technology Acquisition Reform Act (FITARA): FITARA, enacted in December 2014, was intended to improve how agencies acquire IT and enable Congress to monitor agencies' progress and hold them accountable for reducing duplication and achieving cost savings. FITARA includes specific requirements related to seven areas: the federal data center consolidation initiative, enhanced transparency and improved risk management, agency Chief Information Officer authority enhancements, portfolio review, expansion of training and use of IT acquisition cadres, government-wide software purchasing, and maximizing the benefit of the federal strategic sourcing initiative. Effective implementation of FITARA is central to making progress in the Improving the Management of IT Acquisitions and Operations government-wide area we added to the High-Risk List in 2015. In the 2 years since the last high-risk update, two areas—Mitigating Gaps in Weather Satellite Data and Management of Federal Oil and Gas Resources—have expanded in scope because of emerging challenges related to these overall high-risk areas. In addition, while progress is needed across all high-risk areas, particular areas need significant attention. While NOAA has made significant progress, as described earlier, in its geostationary weather satellite program, DOD has made limited progress in meeting its requirements for the polar satellite program. In 2010, when the Executive Office of the President decided to disband a tri-agency polar weather satellite program, DOD was given responsibility for providing polar-orbiting weather satellite capabilities in the early morning orbit.
This information is used to provide updated information for weather observations and models. However, the department was slow to develop plans to replace the existing satellites that provide this coverage. Because DOD delayed establishing plans for its next generation of weather satellites, there is a risk of a satellite data gap in the early morning orbit. The last satellite that the department launched in 2014 called Defense Meteorological Satellite Program (DMSP)-19, stopped providing recorded data used in weather models in February 2016. A prior satellite, called DMSP-17, is now the primary satellite operating in the early morning orbit. However, this satellite, which was launched in 2006, is operating with limitations due to the age of its instruments. DOD had developed another satellite, called DMSP-20, but plans to launch that satellite were canceled after the department did not certify that it would launch the satellite by the end of calendar year 2016. The department conducted a requirements review and analysis of alternatives from February 2012 through September 2014 to determine the best way forward for providing needed polar-orbiting satellite environmental capabilities in the early morning orbit. In October 2016, DOD approved plans for its next generation of weather satellites, called the Weather System Follow-on—Microwave program, which will meet the department’s needs for satellite information on oceanic wind speed and direction to protect ships on the ocean’s surface. The department plans to launch a demonstration satellite in 2017 and to launch its first operational satellite developed under this program in 2022. However, DOD’s plans for the early morning orbit are not comprehensive. The department did not thoroughly assess options for providing its two highest-priority capabilities, cloud descriptions and area-specific weather imagery. These capabilities were not addressed due to an incorrect assumption about the capabilities that would be provided by international partners. The Weather System Follow-on—Microwave program does not address these two highest-priority capabilities and the department has not yet determined its long-term plans for providing these capabilities. As a result, the department will need to continue to rely on the older DMSP-17 satellite until its new satellite becomes operational in 2022, and it establishes and implements plans to address the high-priority capabilities that the new satellite will not address. Given the age of the DMSP-17 satellite and uncertainty on how much longer it will last, the department could face a gap in critical satellite data. In August 2016, DOD reported to Congress its near-term plans to address potential satellite data gaps. These plans include a greater reliance on international partner capabilities, exploring options to move a geostationary satellite over an affected region, and plans to explore options for acquiring and fielding new equipment, such as satellites and satellite components to provide the capabilities. In addition, the department anticipates that the demonstration satellite to be developed as a precursor to the Weather System Follow-on—Microwave program could help mitigate a potential gap by providing some useable data. However, these proposed solutions may not be available in time or be comprehensive enough to avoid near-term coverage gaps. Such a gap could negatively affect military operations that depend on weather data, such as long-range strike capabilities and aerial refueling. 
DOD needs to demonstrate progress on its new Weather System Follow-on—Microwave program and to establish and implement plans to address the high-priority capabilities that are not included in the program. Additional information on Mitigating Gaps in Weather Satellite Data is provided on page 430 of the high-risk report.

On April 20, 2010, the Deepwater Horizon drilling rig exploded in the Gulf of Mexico, resulting in 11 deaths, serious injuries, and the largest marine oil spill in U.S. history. In response, in May 2010, the Department of the Interior (Interior) first reorganized its offshore oil and gas management activities into separate offices for revenue collection, under the Office of Natural Resources Revenue, and energy development and regulatory oversight, under the Bureau of Ocean Energy Management, Regulation and Enforcement. Later, in October 2011, Interior further reorganized its energy development and regulatory oversight activities when it established two new bureaus to oversee offshore resources and operational compliance with environmental and safety requirements. The new Bureau of Ocean Energy Management (BOEM) is responsible for leasing and approving offshore development plans, while the new Bureau of Safety and Environmental Enforcement (BSEE) is responsible for lease operations, safety, and enforcement. In 2011, we added Interior's management of federal oil and gas resources to the High-Risk List based on three concerns: (1) Interior did not have reasonable assurance that it was collecting its share of billions of dollars of revenue from federal oil and gas resources; (2) Interior continued to experience problems hiring, training, and retaining sufficient staff to oversee and manage federal oil and gas resources; and (3) Interior was engaged in restructuring its oil and gas program, which is inherently challenging, and there were questions about whether Interior had the capacity to reorganize while carrying out its range of responsibilities, especially in a constrained resource environment. Immediately after reorganizing, Interior developed memorandums and standard operating procedures to define roles and responsibilities, and facilitate and formalize coordination between BOEM and BSEE. Interior also revised policies intended to improve its oversight of offshore oil and gas activities, such as new requirements designed to mitigate the risk of a subsea well blowout or spill. In 2013, we determined that progress had been made because Interior had fundamentally completed reorganizing its oversight of offshore oil and gas activities. As a result, in 2013, we removed the reorganization segment from this high-risk area. However, in February 2016, we reported that BSEE had undertaken various reform efforts since its creation in 2011, but had not fully addressed deficiencies in its investigative, environmental compliance, and enforcement capabilities identified by investigations after the Deepwater Horizon incident. BSEE's ongoing restructuring has made limited progress enhancing the bureau's investigative capabilities. BSEE continues to use pre–Deepwater Horizon incident policies and procedures. Specifically, BSEE has not completed a policy outlining investigative responsibilities or updated procedures for investigating incidents—among the goals of BSEE's restructuring, according to restructuring planning documents, and consistent with federal standards for internal control. The use of outdated investigative policies and procedures is a long-standing deficiency.
Post–Deepwater Horizon incident investigations found that Interior's policies and procedures did not require it to plan investigations, gather and document evidence, and ensure quality control, and determined that continuing to use them posed a risk to the effectiveness of bureau investigations. Without completing and updating its investigative policies and procedures, BSEE continues to face this risk. BSEE's ongoing restructuring of its environmental compliance program reverses actions taken to address post–Deepwater Horizon incident concerns, and risks weakening the bureau's environmental compliance oversight capabilities. In 2011, in response to two post–Deepwater Horizon incident investigations that found that BSEE's predecessor's focus on oil and gas development might have been at the expense of protecting the environment, BSEE created an environmental oversight division with region-based staff reporting directly to the headquarters-based division chief instead of regional management. This reporting structure was to help ensure that environmental issues received appropriate weight and consideration within the bureau. Under the restructuring, since February 2015, field-based environmental compliance staff again report to their regional directors. BSEE's rationale for this action is unclear, as it was not documented or analyzed as part of the bureau's restructuring planning. Under federal standards for internal control, management is to assess the risks posed by external and internal sources and decide what actions to take to mitigate them. Without assessing the risk of reversing its reporting structure, Interior cannot be sure that BSEE will have reasonable assurance that environmental issues are receiving the appropriate weight and consideration, as called for by post–Deepwater Horizon incident investigations. When we reviewed BSEE's environmental compliance program, we found that the interagency agreements between Interior and EPA designed to coordinate water quality monitoring under the National Pollutant Discharge Elimination System were decades old. According to BSEE annual environmental compliance activity reports, the agreements may not reflect the agency's current resources and needs. For example, a 1989 agreement stipulates that Interior shall inspect no more than 50 facilities on behalf of EPA per year, and shall not conduct water sampling on behalf of EPA. Almost 30 years later, after numerous changes in drilling practices and technologies, it is unclear whether inspecting no more than 50 facilities per year is sufficient to monitor water quality. Nevertheless, senior BSEE officials told us that the bureau has no plans to update its agreements with EPA, and some officials said that a previous headquarters-led effort to update the agreements was not completed because it did not sufficiently describe the bureau's offshore oil and gas responsibilities. According to Standards for Internal Control in the Federal Government, as programs change and agencies strive to improve operational processes and adopt new technologies, management officials must continually assess and evaluate internal controls to ensure that control activities are effective and updated when necessary. BSEE's ongoing restructuring has made limited progress in enhancing its enforcement capabilities.
In particular, BSEE has not developed procedures with criteria to guide how it uses enforcement tools—such as warnings and fines—which are among the goals of BSEE's restructuring, according to planning documents, and consistent with federal standards for internal control. BSEE restructuring plans state that the current lack of criteria causes BSEE to act inconsistently, which makes oil and gas industry operators uncertain about BSEE's oversight approach and expectations. The absence of such enforcement criteria is a long-standing deficiency. For example, post–Deepwater Horizon incident investigations recommended BSEE assess its enforcement tools and how to employ them to deter safety and environmental violations. Without developing procedures with defined criteria for taking enforcement actions, BSEE continues to face risks to the effectiveness of its enforcement capabilities.

To enhance Interior's oversight of oil and gas development, we recommended in February 2016 that the Secretary of the Interior direct the Director of BSEE to take the following nine actions as it continues to restructure. To address risks to the effectiveness of BSEE's investigations, environmental compliance, and enforcement capabilities, we recommended that BSEE complete policies outlining the responsibilities of investigations, environmental compliance, and enforcement programs, and update and develop procedures to guide them. To enhance its investigative capabilities, we recommended that BSEE establish a capability to review investigation policy and collect and analyze incidents to identify trends in safety and environmental hazards; develop a plan with milestones for implementing the case management system for investigations; clearly communicate the purpose of BSEE's investigations program to industry operators; and clarify policies and procedures for assigning panel investigation membership and referring cases of suspected criminal wrongdoing to the Inspector General. To enhance its environmental compliance capabilities, we recommended that BSEE conduct and document a risk analysis of the regional-based reporting structure of its Environmental Compliance Division, including actions to mitigate any identified risks; coordinate with the Administrator of the Environmental Protection Agency to consider the relevance of existing interagency agreements for monitoring operator compliance with National Pollutant Discharge Elimination System permits on the Outer Continental Shelf and, if necessary, update agreements to reflect current oversight needs; and develop a plan to address documented environmental oversight staffing needs. To enhance its enforcement capabilities, we recommended that BSEE develop a mechanism to ensure that it reviews the maximum daily civil penalty and adjusts it to reflect changes in the Consumer Price Index within the time frames established by statute. In its written comments, Interior agreed that additional reforms—such as documented policies and procedures—are needed to address offshore oil and gas oversight deficiencies, but Interior neither agreed nor disagreed with our specific recommendations. Additional information on Management of Federal Oil and Gas Resources is provided on page 136 of the high-risk report.

Managing Risks and Improving VA Health Care.
Since we added Department of Veterans Affairs (VA) health care to our High-Risk List in 2015, VA has acknowledged the significant scope of the work that lies ahead in each of the five areas of concern we identified: (1) ambiguous policies and inconsistent processes; (2) inadequate oversight and accountability; (3) information technology (IT) challenges; (4) inadequate training for VA staff; and (5) unclear resource needs and allocation priorities. It is imperative that VA maintain strong leadership support, and as the new administration sets its priorities, VA will need to integrate those priorities with its high-risk-related actions. VA developed an action plan for addressing its high-risk designation, but the plan describes many planned outcomes with overly ambitious deadlines for completion. We are concerned about the lack of root cause analyses for most areas of concern, and the lack of clear metrics and needed resources for achieving stated outcomes. In addition, with the increased use of community care programs, it is imperative that VA's action plan discuss the role of community care in decisions related to policies, oversight, IT, training, and resource needs. Finally, to help address its high-risk designation, VA should continue to implement our recommendations, as well as recommendations from others. While VA's leadership has increased its focus on implementing our recommendations in the last 2 years, additional work is needed. We made 66 VA health care-related recommendations in products issued since the VA health care high-risk designation in February 2015, for a total of 244 recommendations from January 1, 2010, through December 31, 2016. VA has implemented 122 (about 50 percent) of the 244 recommendations, but over 100 recommendations remain open as of December 31, 2016 (with about 25 percent being open for 3 or more years). It is critical that VA implement our recommendations in a timely manner. Additional information on Managing Risks and Improving VA Health Care is provided on page 627 of the report.

DOD Financial Management. The effects of DOD's financial management problems extend beyond financial reporting and negatively affect DOD's ability to manage the department and make sound decisions on mission and operations. In addition, DOD remains one of the few federal entities that cannot demonstrate its ability to accurately account for and reliably report its spending or assets. DOD's financial management problems continue as one of three major impediments preventing us from expressing an opinion on the consolidated financial statements of the federal government. Sustained leadership commitment will be critical to DOD's success in achieving financial accountability, and in providing reliable information for day-to-day management decision making as well as financial audit readiness. DOD needs to assure the sustained involvement of leadership at all levels of the department in addressing financial management reform and business transformation. In addition, further action is needed in the areas of capacity and action planning.
Specifically, DOD needs to continue building a workforce with the level of training and experience needed to support and sustain sound financial management; continue to develop and deploy enterprise resource planning systems as a critical component of DOD’s financial improvement and audit readiness strategy, as well as strengthen automated controls or design manual workarounds for the remaining legacy systems to satisfy audit requirements and improve data used for day-to-day decision making; and effectively implement its Financial Improvement and Audit Readiness Plan and related guidance to focus on strengthening processes, controls, and systems to improve the accuracy, reliability, and reporting for its priority areas, including budgetary information and mission-critical assets. Further, DOD needs to monitor and assess the progress the department is making to remediate its internal control deficiencies. DOD should (1) require the military services to improve their policies and procedures for monitoring their corrective action plans for financial management-related findings and recommendations, and (2) improve its process for monitoring the military services’ audit remediation efforts by preparing a consolidated management summary that provides a comprehensive picture of the status of corrective actions throughout the department. DOD is continuing to work toward undergoing a full financial statement audit by fiscal year 2018; however, it expects to receive disclaimers of opinion on its financial statements for a number of years. A lack of comprehensive information on the corrective action plans limits the ability of DOD and Congress to evaluate DOD’s progress toward achieving audit readiness, especially given the short amount of time remaining before DOD is required to undergo an audit of the department-wide financial statements for fiscal year 2018. Being able to demonstrate progress in remediating its financial management deficiencies will be useful as the department works toward implementing lasting financial management reform to ensure that it can generate reliable, useful, and timely information for financial reporting as well as for decision making and effective operations. Moreover, stronger financial management would show DOD’s accountability for funds and would help it operate more efficiently. Additional information on DOD Financial Management is provided on page 280 of the high-risk report. Modernizing the U.S. Financial Regulatory System and the Federal Role in Housing Finance. Resolving the role of the federal government in housing finance will require leadership commitment and action by Congress and the administration. The federal government has directly or indirectly supported more than two-thirds of the value of new mortgage originations in the single-family housing market since the beginning of the 2007-2009 financial crisis. Mortgages with federal support include those backed by Fannie Mae and Freddie Mac, two large government-sponsored enterprises (the enterprises). Out of concern that their deteriorating financial condition threatened the stability of financial markets, the Federal Housing Finance Agency (FHFA) placed the enterprises into federal conservatorship in 2008, creating an explicit fiscal exposure for the federal government. 
As of September 2016, the Department of the Treasury (Treasury) had provided about $187.5 billion in funds as capital support to the enterprises, with an additional $258.1 billion available to the enterprises should they need further assistance. In accordance with the terms of agreements with Treasury, the enterprises had paid dividends to Treasury totaling about $250.5 billion through September 2016. More than 8 years after entering conservatorship, the enterprises’ futures remain uncertain and billions of federal dollars remain at risk. The enterprises have a reduced capacity to absorb future losses due to a capital reserve amount that falls to $0 by 2018. Without a capital reserve, any quarterly losses—including those due to market fluctuations and not necessarily to economic conditions—would require the enterprises to draw additional funds from Treasury. Additionally, prolonged conservatorships and a change in leadership at FHFA could shift priorities for the conservatorships, which in turn could send mixed messages and create uncertainties for market participants and hinder the development of the broader secondary mortgage market. For this reason, we said in November 2016 that Congress should consider legislation establishing objectives for the future federal role in housing finance, including the structure of the enterprises, and a transition plan to a reformed housing finance system that enables the enterprises to exit conservatorship. The federal government also supports mortgages through insurance or guarantee programs, the largest of which is administered by the Department of Housing and Urban Development’s Federal Housing Administration (FHA). During the financial crisis, FHA served its traditional role of helping to stabilize the housing market, but also experienced financial difficulties from which it only recently recovered. Maintaining FHA’s long-term financial health and defining its future role also will be critical to any effort to overhaul the housing finance system. We previously recommended that Congress or FHA specify the economic conditions that FHA’s Mutual Mortgage Insurance Fund would be expected to withstand without requiring supplemental funds. As evidenced by the $1.68 billion FHA received in 2013, the current 2 percent capital requirement for FHA’s fund may not always be adequate to avoid the need for supplemental funds under severe stress scenarios. Implementing our recommendation would be an important step not only in addressing FHA’s long-term financial viability, but also in clarifying FHA’s role. Additional information on Modernizing the U.S. Financial Regulatory System and the Federal Role in Housing Finance is provided on page 107 of the report. Pension Benefit Guaranty Corporation Insurance Programs. The Pension Benefit Guaranty Corporation (PBGC) is responsible for insuring the defined benefit pension plans of nearly 40 million American workers and retirees who participate in nearly 24,000 private sector plans. PBGC faces an uncertain financial future due, in part, to a long-term decline in the number of traditional defined benefit plans and the collective financial risk of the many underfunded pension plans that PBGC insures. PBGC’s financial portfolio is one of the largest of all federal government corporations and, at the end of fiscal year 2016, PBGC’s net accumulated financial deficit was over $79 billion—having more than doubled since fiscal year 2013. 
PBGC has estimated that, without additional funding, its multiemployer insurance program will likely be exhausted by 2025 as a result of current and projected pension plan insolvencies. The agency's single-employer insurance program is also at risk due to the continuing decline of traditional defined benefit pension plans, increased financial risk, and reduced premium payments. While Congress and PBGC have taken significant and positive steps to strengthen the agency over recent years, challenges related to PBGC's funding and governance structure remain. Addressing the significant financial risk and governance challenges that PBGC faces requires additional congressional action. To improve the long-term financial stability of PBGC's insurance programs, Congress should consider: (1) authorizing a redesign of PBGC's single-employer program premium structure to better align rates with sponsor risk; (2) adopting additional changes to PBGC's governance structure—in particular, expanding the composition of its board of directors; (3) strengthening funding requirements for plan sponsors as appropriate given national economic conditions; (4) working with PBGC to develop a strategy for funding PBGC claims over the long term, as the defined benefit pension system continues to decline; and (5) enacting additional structural reforms to reinforce and stabilize the multiemployer system that balance the needs and potential sacrifices of contributing employers, participants, and the federal government. Absent additional steps to improve PBGC's finances, the long-term financial stability of the agency remains uncertain and the retirement benefits of millions of American workers and retirees could be at risk of dramatic reductions. Additional information on Pension Benefit Guaranty Corporation Insurance Programs is provided on page 609 of the report.

Ensuring the Security of Federal Information Systems and Cyber Critical Infrastructure and Protecting the Privacy of Personally Identifiable Information. Federal agencies and our nation's critical infrastructures—such as energy, transportation systems, communications, and financial services—are dependent on computerized (cyber) information systems and electronic data to carry out operations and to process, maintain, and report essential information. The security of these systems and data is vital to public confidence and the nation's safety, prosperity, and well-being. However, safeguarding computer systems and data supporting the federal government and the nation's critical infrastructure is a concern. We first designated information security as a government-wide high-risk area in 1997. This high-risk area was expanded to include the protection of critical cyber infrastructure in 2003 and protecting the privacy of personally identifiable information (PII) in 2015. Ineffectively protecting cyber assets can facilitate security incidents and cyberattacks that disrupt critical operations; lead to inappropriate access to and disclosure, modification, or destruction of sensitive information; and threaten national security, economic well-being, and public health and safety. In addition, the increasing sophistication of hackers and others with malicious intent, and the extent to which both federal agencies and private companies collect sensitive information about individuals, have increased the risk of PII being exposed and compromised.
Over the past several years, we have made about 2,500 recommendations to agencies aimed at improving the security of federal systems and information. These recommendations would help agencies strengthen technical security controls over their computer networks and systems, fully implement aspects of their information security programs, and protect the privacy of PII held on their systems. As of October 2016, about 1,000 of our information security–related recommendations had not been implemented. In addition, the federal government needs, among other things, to improve its abilities to detect, respond to, and mitigate cyber incidents; expand efforts to protect cyber critical infrastructure; and oversee the protection of PII. Additional information on Ensuring the Security of Federal Information Systems and Cyber Critical Infrastructure and Protecting the Privacy of Personally Identifiable Information is provided on page 338 of the report.

For 2017, we are adding three new areas to the High-Risk List. We, along with inspectors general, special commissions, and others, have reported that federal agencies have ineffectively administered Indian education and health care programs, and inefficiently fulfilled their responsibilities for managing the development of Indian energy resources. In particular, we have found numerous challenges facing Interior's Bureau of Indian Education (BIE) and Bureau of Indian Affairs (BIA), and the Department of Health and Human Services' (HHS) Indian Health Service (IHS) in administering education and health care services, which put the health and safety of American Indians served by these programs at risk. These challenges included poor conditions at BIE school facilities that endangered students, and inadequate oversight of health care that hindered IHS's ability to ensure quality care to Indian communities. In addition, we have reported that BIA mismanages Indian energy resources held in trust and thereby limits opportunities for tribes and their members to use those resources to create economic benefits and improve the well-being of their communities. Congress recently noted, "through treaties, statutes, and historical relations with Indian tribes, the United States has undertaken a unique trust responsibility to protect and support Indian tribes and Indians." In light of this unique trust responsibility and concerns about the federal government ineffectively administering Indian education and health care programs and mismanaging Indian energy resources, we are adding these programs as a high-risk issue because they uniquely affect tribal nations and their members. Federal agencies have performed poorly in the following broad areas: (1) oversight of federal activities; (2) collaboration and communication; (3) federal workforce planning; (4) equipment, technology, and infrastructure; and (5) federal agencies' data. While federal agencies have taken some actions to address the 41 recommendations we made related to Indian programs, there are currently 39 that have yet to be fully resolved. We plan to continue monitoring federal efforts in these areas. To this end, we have ongoing work focusing on accountability for safe schools and school construction, and tribal control of energy delivery, management, and resource development.

Education: We have identified weaknesses in how Indian Affairs oversees school safety and construction and in how it monitors the way schools use Interior funds.
We have also found limited workforce planning in several key areas related to BIE schools. Moreover, aging BIE school facilities and equipment contribute to degraded and unsafe conditions for students and staff. Finally, a lack of internal controls and other weaknesses hinder Indian Affairs' ability to collect complete and accurate information on the physical conditions of BIE schools. In the past 3 years, we issued three reports on challenges with Indian Affairs' management of BIE schools in which we made 13 recommendations. Eleven of these recommendations, discussed below, remain open. To help ensure that BIE schools provide safe and healthy facilities for students and staff, we made four recommendations, which remain open, including that Indian Affairs ensure the inspection information it collects on BIE schools is complete and accurate; develop a plan to build schools' capacity to promptly address safety and health deficiencies; and consistently monitor whether BIE schools have established required safety committees. To help ensure that BIE conducts more effective oversight of school spending, we made four recommendations, which remain open, including that Indian Affairs develop a workforce plan to ensure that BIE has the staff to effectively oversee school spending; put in place written procedures and a risk-based approach to guide BIE in overseeing school spending; and improve information sharing to support the oversight of BIE school spending. To help ensure that Indian Affairs improves how it manages Indian education, we made five recommendations. Three recommendations remain open, including that Indian Affairs develop a strategic plan for BIE that includes goals and performance measures for how its offices are fulfilling their responsibilities to provide BIE with support; revise Indian Affairs' strategic workforce plan to ensure that BIA regional offices have an appropriate number of staff with the right skills to support BIE schools in their regions; and develop and implement decision-making procedures for BIE to improve accountability for BIE schools.

Health Care: IHS provides inadequate oversight of health care, both at its federally operated facilities and through the Purchased/Referred Care (PRC) program. Other issues include ineffective collaboration—specifically, IHS does not require its area offices to inform IHS headquarters if they distribute funds to local PRC programs using different criteria than the PRC allocation formula suggested by headquarters. As a result, IHS may be unaware of additional funding variation across areas. We have also reported that IHS officials told us that an insufficient workforce was the biggest impediment to ensuring patients could access timely primary care. In the past 6 years, we have made 12 recommendations related to Indian health care that remain open. Although IHS has taken several actions in response to our recommendations, such as improving the data collected for the PRC program and adopting Medicare-like rates for nonhospital services, much more needs to be done.
To help ensure that Indian people receive quality health care, the Secretary of HHS should direct the Director of IHS to take the following two actions: (1) as part of implementing IHS's quality framework, ensure that agency-wide standards for the quality of care provided in its federally operated facilities are developed, and systematically monitor facility performance in meeting these standards over time; and (2) develop contingency and succession plans for replacing key personnel, including area directors. To help ensure that timely primary care is available and accessible to Indians, IHS should: (1) develop and communicate specific agency-wide standards for wait times in federally operated facilities, and (2) monitor patient wait times in federally operated facilities and ensure that corrective actions are taken when standards are not met. To help ensure that IHS has meaningful information on the timeliness with which it issues purchase orders authorizing payment under the PRC program, and to improve the timeliness of payments to providers, we recommended that IHS: (1) modify IHS's claims payment system to separately track IHS referrals and self-referrals, revise Government Performance and Results Act measures for the PRC program so that they distinguish between these two types of referrals, and establish separate time frame targets for these referral types; and (2) better align PRC staffing levels and workloads by revising its current practices, where available, used to pay for PRC program staff. In addition, as HHS and IHS monitor the effect that new coverage options available to IHS beneficiaries through PPACA have on PRC funds, we recommended that IHS concurrently develop potential options to streamline requirements for program eligibility. To help ensure successful outreach efforts regarding PPACA coverage expansions, we recommended that IHS realign current resources and personnel to increase capacity to deal with enrollment in Medicaid and the exchanges, and prepare for increased billing to these payers. If payments for physician and other nonhospital services are capped, we recommended that IHS monitor patient access to these services. To help ensure a more equitable allocation of funds per capita across areas, we recommended that Congress consider requiring IHS to develop and use a new method for allocating PRC funds. To develop more accurate data for estimating the funds needed for the PRC program and improve IHS oversight, we recommended that IHS develop a written policy documenting how it evaluates the need for the PRC program, and disseminate it to area offices so they understand how unfunded services data are used to estimate overall program needs. We also recommended that IHS develop written guidance for PRC programs outlining a process to use when funds are depleted but recipients continue to need services.

Energy: We have reported on issues with BIA oversight of federal activities, such as the length of time it takes the agency to review energy-related documents. We also reported on challenges with collaboration—in particular, while working to form an Indian Energy Service Center, BIA did not coordinate with key regulatory agencies, including the Department of the Interior's Fish and Wildlife Service, the U.S. Army Corps of Engineers, and the Environmental Protection Agency. In addition, we found that workforce planning issues at BIA contribute to management shortcomings that have hindered Indian energy development.
Lastly, we found issues with outdated and deteriorating equipment, technology, and infrastructure, as well as incomplete and inaccurate data. In the past 2 years, we issued three reports on developing Indian energy resources in which we made 14 recommendations to BIA. All recommendations remain open. To help ensure BIA can verify ownership in a timely manner and identify resources available for development, we made two recommendations, including that Interior take steps to improve its geographic information system mapping capabilities. To help ensure BIA's review process is efficient and transparent, we made two recommendations, including that Interior take steps to develop a documented process to track review and response times for energy-related documents that must be approved before tribes can develop energy resources. To help improve clarity of tribal energy resource agreement regulations, we recommended BIA provide additional guidance to tribes on provisions that tribes have identified to Interior as unclear. To help ensure that BIA streamlines the review and approval process for revenue-sharing agreements, we made three recommendations, including that Interior establish time frames for the review and approval of Indian revenue-sharing agreements for oil and gas, and establish a system for tracking and monitoring the review and approval process to determine whether time frames are met. To help improve efficiencies in the federal regulatory process, we made four recommendations, including that BIA take steps to coordinate with other regulatory agencies so the Service Center can serve as a single point of contact or lead agency to navigate the regulatory process. To help ensure that BIA has a workforce with the right skills, appropriately aligned to meet the agency's goals and tribal priorities, we made two recommendations, including that BIA establish a documented process for assessing BIA's workforce composition at agency offices.

Congressional Actions Needed: It is critical that Congress maintain its focus on improving the effectiveness with which federal agencies meet their responsibilities to serve tribes and their members. Since 2013, we have testified at six hearings to address significant weaknesses we found in the federal management of programs that serve tribes and their members. Sustained congressional attention to these issues will highlight the challenges discussed here and could facilitate federal actions to improve Indian education and health care programs, and the development of Indian energy resources. See pages 200–219 of the high-risk report for additional details on what we found.

The federal government's environmental liability has been growing for the past 20 years and is likely to continue to increase. For fiscal year 2016, the federal government's estimated environmental liability was $447 billion—up from $212 billion for fiscal year 1997. However, this estimate does not reflect all of the future cleanup responsibilities facing federal agencies. Because of the lack of complete information and the often inconsistent approach to making cleanup decisions, federal agencies cannot always address their environmental liabilities in ways that maximize the reduction of health and safety risks to the public and the environment in a cost-effective manner. The federal government is financially liable for cleaning up areas where federal activities have contaminated the environment.
Various federal laws, agreements with states, and court decisions require the federal government to clean up environmental hazards at federal sites and facilities—such as nuclear weapons production facilities and military installations. Such sites are contaminated by many types of waste, much of which is highly hazardous. Federal accounting standards require agencies responsible for cleaning up contamination to estimate future cleanup and waste disposal costs, and to report such costs in their annual financial statements as environmental liabilities. Per federal accounting standards, federal agencies' environmental liability estimates are to include probable and reasonably estimable costs of cleanup work. Federal agencies' environmental liability estimates do not include cost estimates for work for which reasonable estimates cannot currently be generated. Consequently, the ultimate cost of addressing the U.S. government's environmental cleanup is likely greater than $447 billion. Federal agencies' approaches to addressing their environmental liabilities and cleaning up the contamination from past activities are often influenced by numerous site-specific factors, stakeholder agreements, and legal provisions. We have also found that some agencies do not take a holistic, risk-informed approach to environmental cleanup that aligns limited funds with the greatest risks to human health and the environment. Since 1994, we have made at least 28 recommendations related to addressing the federal government's environmental liability. These include 22 recommendations to the Departments of Energy (DOE) or Defense (DOD), 1 recommendation to OMB to consult with Congress on agencies' environmental cleanup costs, and 4 recommendations to Congress to change the laws governing cleanup activities. Of these, 13 recommendations remain unimplemented. If implemented, these steps would improve the completeness and reliability of the estimated costs of future cleanup responsibilities, and lead to more risk-based management of the cleanup work. Of the federal government's estimated $447 billion environmental liability, DOE is responsible for by far the largest share of the liability, and DOD is responsible for the second largest share. The rest of the federal government makes up the remaining 3 percent of the liability, with agencies such as the National Aeronautics and Space Administration (NASA) and the Departments of Transportation, Veterans Affairs, Agriculture (USDA), and Interior holding large liabilities (see figure 2). Agencies spend billions each year on environmental cleanup efforts, but the estimated environmental liability continues to rise. For example, despite billions spent on environmental cleanup, DOE's environmental liability has more than doubled from a low of $176 billion in fiscal year 1997 to the fiscal year 2016 estimate of $372 billion. In the last 6 years alone, DOE's Office of Environmental Management (EM) has spent $35 billion, primarily to treat and dispose of nuclear and hazardous waste, and construct capital asset projects to treat the waste; however, EM's portion of the environmental liability has grown over this same time period by over $90 billion, from $163 billion to $257 billion (see figure 3). Progress in addressing the U.S. government's environmental liabilities depends on how effectively federal departments and agencies set priorities, under increasingly restrictive budgets, that maximize the risk reduction and cost-effectiveness of cleanup approaches.
As a first step, some departments and agencies may need to improve the completeness of information about long-term cleanup responsibilities and their associated costs so that decision makers, including Congress, can consider the full scope of the federal government's cleanup obligations. As a next step, certain departments, such as DOE, may need to change how they establish cleanup priorities. For example, DOE's current practice of negotiating agreements with individual sites without considering other sites' agreements or available resources may not ensure that limited resources will be allocated to reducing the greatest environmental risks or that costs will be minimized. We have recommended actions to federal agencies that, if implemented, would improve the completeness and reliability of the estimated costs of future cleanup responsibilities, and lead to more risk-based management of the cleanup work. These recommendations include the following. In 1994, we recommended that Congress amend certain legislation to require agencies to report annually on their progress in implementing plans for completing site inventories and on their latest estimates of the total costs to clean up their potential hazardous waste sites. We believe these recommendations are as relevant, if not more so, today. In 2015, we recommended that USDA develop plans and procedures for completing its inventories of potentially contaminated sites. USDA disagreed with this recommendation. However, we continue to believe that USDA's inventory of contaminated and potentially contaminated sites—in particular, abandoned mines, primarily on Forest Service land—is insufficient for effectively managing USDA's overall cleanup program. Interior is also faced with an incomplete inventory of abandoned mines that it is working to improve. In 2006, we recommended that DOD develop, document, and implement a program for financial management review, assessment, and monitoring of the processes for estimating and reporting environmental liabilities. This recommendation has not been implemented. We have found in the past that DOE's cleanup strategy is not risk based and should be re-evaluated. DOE's decisions are often driven by local stakeholders and certain requirements in federal facilities agreements and consent decrees. In 1995, we recommended that DOE set national priorities for cleaning up its contaminated sites using data gathered during ongoing risk evaluations. This recommendation has not been implemented. In 2003, we recommended that DOE ask Congress to clarify its authority for designating certain waste with relatively low levels of radioactivity as waste incidental to reprocessing, and therefore not managed as high-level waste. In 2004, DOE received this specific authority from Congress for the Savannah River and Idaho Sites, thereby allowing DOE to save billions of dollars in waste treatment costs. The law, however, excluded the Hanford Site. More recently, in 2015, we found that DOE is not comprehensively integrating risks posed by the National Nuclear Security Administration's (NNSA) nonoperational contaminated facilities with EM's portfolio of cleanup work. By not integrating nonoperational facilities from NNSA, EM is not providing Congress with complete information about EM's current and future cleanup obligations as Congress deliberates annually about appropriating funds for cleanup activities.
We recommended that DOE integrate its lists of facilities prioritized for disposition with all NNSA facilities that meet EM's transfer requirements, and that EM include this integrated list as part of the Congressional Budget Justification for DOE. DOE neither agreed nor disagreed with this recommendation. See pages 232–247 of the high-risk report for additional details on what we found.

One of the most important functions of the U.S. Census Bureau (Bureau) is conducting the decennial census of the U.S. population, which is mandated by the Constitution and provides vital data for the nation. This information is used to apportion the seats of the U.S. House of Representatives; realign the boundaries of the legislative districts of each state; allocate billions of dollars in federal financial assistance; and provide social, demographic, and economic profiles of the nation's people to guide policy decisions at each level of government. A complete count of the nation's population is an enormous challenge as the Bureau seeks to control the cost of the census while it implements several new innovations and manages the processes of acquiring and developing new and modified IT systems supporting them. Over the past 3 years, we have made 30 recommendations to help the Bureau design and implement a more cost-effective census for 2020; however, only 6 of them had been fully implemented as of January 2017. The cost of the census, in terms of the cost of counting each housing unit, has been escalating over the last several decennials. The 2010 Census was the costliest U.S. Census in history at about $12.3 billion, and was about 31 percent more costly than the $9.4 billion cost of the 2000 Census (in 2020 dollars). The average cost for counting a housing unit increased from about $16 in 1970 to around $92 in 2010 (in 2020 constant dollars). Meanwhile, the return of census questionnaires by mail (the primary mode of data collection) declined over this period from 78 percent in 1970 to 63 percent in 2010. Declining mail response rates—a key indicator of a cost-effective census—are significant and lead to higher costs. This is because the Bureau sends enumerators to each nonresponding household to obtain census data. As a result, nonresponse follow-up is the Bureau's largest and most costly field operation. In many ways, the Bureau has had to invest substantially more resources each decade to match the results of prior enumerations. The Bureau plans to implement several new innovations in its design of the 2020 Census. In response to our recommendations regarding past decennial efforts and other assessments, the Bureau has fundamentally reexamined its approach for conducting the 2020 Census. Its plan for 2020 includes four broad innovation areas that it believes will save it over $5 billion (2020 constant dollars) when compared to what it estimates conducting the census with traditional methods would cost. The Bureau's innovations include (1) using the Internet as a self-response option, which the Bureau has never done on a large scale before; (2) verifying most addresses using "in-office" procedures and on-screen imagery rather than street-by-street field canvassing; (3) re-engineering data collection methods such as by relying on an automated case management system; and (4) in certain instances, replacing enumerator collection of data with administrative records (information already provided to federal and state governments as they administer other programs).
These innovations show promise for a more cost-effective head count. However, they also introduce new risks, in part, because they include new procedures and technology that have not been used extensively in earlier decennials, if at all. The Bureau is also managing the acquisition and development of new and modified IT systems, which add complexity to the design of the census. To help control census costs, the Bureau plans to significantly change the methods and technology it uses to count the population, such as offering an option for households to respond to the survey via the Internet or phone, providing mobile devices for field enumerators to collect survey data from households, and automating the management of field operations. This redesign relies on acquiring and developing many new and modified IT systems, which could add complexity to the design. These cost risks, new innovations, and acquisition and development of IT systems for the 2020 Census, along with other challenges we have identified in recent years, raise serious concerns about the Bureau's ability to conduct a cost-effective enumeration. Based on these concerns, we have concluded that the 2020 Census is a high-risk area and have added it to the High-Risk List in 2017. To help the Bureau mitigate the risks associated with its fundamentally new and complex innovations for the 2020 Census, the commitment of top leadership is needed to ensure the Bureau's management, culture, and business practices align with a cost-effective enumeration. For example, the Bureau needs to continue strategic workforce planning efforts to ensure it has the skills and competencies needed to support planning and executing the census. It must also rigorously test individual census-taking activities to provide information on their feasibility and performance, their potential for achieving desired results, and the extent to which they are able to function together under full operational conditions. We have recommended that the Bureau also ensure that its scheduling adheres to leading practices and be able to support a quantitative schedule risk assessment, such as by associating all activities with the levels of resources and effort needed to complete them. The Bureau has stated that it has begun maturing project schedules to ensure that the logical relationships are in place and plans to conduct a quantitative risk assessment. We will continue to monitor the Bureau's efforts. The Bureau must also improve its ability to manage, develop, and secure its IT systems. For example, the Bureau needs to prioritize its IT decisions and determine what information it needs in order to make those decisions. In addition, the Bureau needs to make key IT decisions for the 2020 Census in order to ensure it has enough time to have the production systems in place to support the end-to-end system test. To this end, we recommended the Bureau ensure that the methodologies for answering the Internet response rate and IT infrastructure research questions are determined and documented in time to inform key design decisions. Further, given the numerous and critical dependencies between the Census Enterprise Data Collection and Processing and 2020 Census programs, their parallel implementation tracks, and the 2020 Census's immovable deadline, we recommended that the Bureau establish a comprehensive and integrated list of all interdependent risks facing the two programs, and clearly identify roles and responsibilities for managing this list.
The Bureau stated that it plans to take actions to address our recommendations. It is also critical for the Bureau to have better oversight and control over its cost estimation process, and we have recommended that the Bureau ensure its cost estimate is consistent with our leading practices. For example, the Bureau will need to, among other practices, document all cost-influencing assumptions; describe estimating methodologies used for each cost element; ensure that variances between planned and actual cost are documented, explained, and reviewed; and include a comprehensive sensitivity analysis, so that it can better estimate costs. We also recommended that the Bureau implement and institutionalize processes or methods for ensuring control over how risk and uncertainty are accounted for and communicated within its cost estimation process. The Bureau agreed with our recommendations, and we are currently conducting a follow-up audit of the Bureau's most recent cost estimate and will determine whether the Bureau has implemented them. Sustained congressional oversight will be essential as well. In 2015 and 2016, congressional committees held five hearings focusing on the progress of the Bureau's preparations for the decennial. Going forward, active oversight will be needed to ensure these efforts stay on track, the Bureau has needed resources, and Bureau officials are held accountable for implementing the enumeration as planned. We will continue monitoring the Bureau's efforts to conduct a cost-effective enumeration. To this end, we have ongoing work focusing on such topics as the Bureau's updated lifecycle cost estimate and the readiness of IT systems for the 2018 End-to-End Test. See pages 219–231 of the high-risk report for additional details on what we found.

After we remove areas from the High-Risk List, we continue to monitor them, as appropriate, to determine if the improvements we have noted are sustained and whether new issues emerge. If significant problems again arise, we will consider reapplying the high-risk designation. DOD's Personnel Security Clearance Program is one former high-risk area that we continue to closely monitor in light of government-wide reform efforts. The Office of the Director of National Intelligence (ODNI) estimates that approximately 4.2 million federal government and contractor employees held or were eligible to hold a security clearance as of October 1, 2015. Personnel security clearances provide personnel with access to classified information, the unauthorized disclosure of which could, in certain circumstances, cause exceptionally grave damage to national security. High-profile security incidents, such as the disclosure of classified programs and documents by a National Security Agency contractor and the OPM data breach of 21.5 million records, demonstrate the continued need for high-quality background investigations and adjudications, strong oversight, and a secure IT process, which have been areas of long-standing challenges for the federal government. In 2005, we designated the DOD personnel security clearance program as a high-risk area because of delays in completing background investigations and adjudications. We continued the high-risk designation in the 2007 and 2009 updates to our High-Risk List because of issues with the quality of investigation and adjudication documentation and because delays in the timely processing of security clearances continued.
In our 2011 high-risk report, we removed DOD's personnel security clearance program from the High-Risk List because DOD took actions to develop guidance to improve its adjudication process, develop and implement tools and metrics to assess quality of investigations and adjudications, and improve timeliness for processing clearances. We also noted that DOD continues to be a prominent player in the overall security clearance reform effort, which includes entities within OMB, OPM, and ODNI that comprise the Performance Accountability Council (PAC), which oversees the reform effort. The executive branch has also taken steps to monitor its security clearance reform efforts. The GPRA Modernization Act of 2010 requires OMB to report through a website—performance.gov—on long-term cross-agency priority goals, which are outcome-oriented goals covering a limited number of crosscutting policy areas, as well as goals to improve management across the federal government. Among the cross-agency priority goals, the executive branch identified security clearance reform as one of the key areas it is monitoring. Since we removed DOD's personnel security clearance program from the High-Risk List, the government's overall reform efforts that began after passage of the Intelligence Reform and Terrorism Prevention Act of 2004 have shown mixed progress, and key reform efforts have not yet been implemented. In the aftermath of the June 2013 disclosure of classified documents by a former National Security Agency contractor and the September 2013 shooting at the Washington Navy Yard, OMB issued, in February 2014, the Suitability and Security Processes Review Report to the President, a 120-day review of the government's processes for granting security clearances, among other things. The 120-day review resulted in 37 recommendations, 65 percent of which have been implemented, as of October 2016, including the issuance of executive branch-wide quality assessment standards for investigations in January 2015. Additionally, the recommendations led to expanding DOD's ability to continuously evaluate the continued eligibility of cleared personnel. However, other recommendations from the 120-day review have not yet been implemented. For example, the reform effort is still trying to fully implement the revised background investigation standards issued in 2012 and improve data sharing between local, state, and federal entities. In addition, the 120-day review further found that performance measures for investigative quality are neither standardized nor implemented consistently across the government, and that measuring and ensuring quality continues to be a challenge. The review contained three recommendations to address the development of quality metrics, but the PAC has only partially implemented those recommendations. We previously reported that the executive branch had developed some metrics to assess quality at different phases of the personnel security clearance process; however, those metrics had not been fully developed and implemented. The development of metrics to assess quality throughout the security clearance process has been a long-standing concern. Since the late 1990s, we have emphasized the need to build and monitor quality throughout the personnel security clearance process.
In 2009, we again noted that clearly defined quality metrics can improve the security clearance process by enhancing oversight of the time required to process security clearances and the quality of investigations and adjudicative decisions. We recommended that OMB provide Congress with results of metrics on comprehensive timeliness and the quality of investigations and adjudications. According to ODNI, the agency began implementing a Quality Assessment and Reporting Tool in October 2016 to document customer issues with background investigations. The tool will be used to report on the quality of 5 percent of each executive branch agency’s background investigations. ODNI officials stated that they plan to develop metrics in the future as data are gathered from the tool, but did not identify a completion date for these metrics. Separately, the NDAA for Fiscal Year 2017, among other things, requires DOD to institute a program to collect and maintain data and metrics on the background investigation process, in the context of developing a system for performing background investigations. The PAC’s efforts to fully address the 120-day review and our recommendations on establishing metrics on the quality of investigations, as well as DOD’s efforts to address the broader requirements in the NDAA for Fiscal Year 2017, remain open and will need to be a continued focus as the department works to improve its management of the security clearance process. Further, in response to the 2015 OPM data breach, the PAC completed a 90-day review, which led to an executive order that established the National Background Investigations Bureau within OPM to replace the Federal Investigative Services and transferred to DOD the responsibility for developing, maintaining, and securing new IT systems for clearances. Additionally, the executive order made DOD a full principal member of the PAC and directed the PAC to review authorities, roles, and responsibilities, including submitting recommendations related to revising, as appropriate, executive orders pertaining to security clearances. This effort is ongoing. In addition to addressing the quality of security clearances and other goals and recommendations outlined in the 120-day and 90-day reviews and the government’s cross-agency priority goals, the PAC has the added challenge of addressing recent changes that may result from the NDAA for Fiscal Year 2017. Specifically, section 951 of the act requires the Secretary of Defense to develop an implementation plan for the Defense Security Service to conduct background investigations for certain DOD personnel—presently conducted by OPM—after October 1, 2017. The Secretary of Defense must submit the plan to the congressional defense committees by August 1, 2017. The act also requires the Secretary of Defense and the Director of OPM to develop a plan by October 1, 2017, to transfer investigative personnel and contracted resources to DOD in proportion to the workload if the plan for DOD to conduct the background investigations were implemented. It is unknown whether these potential changes will affect recent clearance reform efforts.
Given the history and inherent challenges of reforming the government- wide security clearance process, coupled with recent amendments to a governing Executive Order and potential changes arising from the NDAA for Fiscal Year 2017, we will continue reviewing critical functions for personnel security clearance reform and monitor the government’s implementation of key reform efforts. We have ongoing work assessing progress being made on the overall security clearance reform effort and in implementing a continuous evaluation process, a key reform effort considered important to improving the timeliness and quality of investigations. We anticipate issuing a report on the status of the government’s continuous evaluation process in the fall of 2017. Additionally, we have previously reported on the importance of securing federal IT systems and anticipate issuing a report in early 2017 that examines IT security at OPM and efforts to secure these types of critical systems. Continued progress in reforming personnel security clearances is essential in helping to ensure a federal workforce entrusted to protect U.S. government information and property, promote a safe and secure work environment, and enhance the U.S. government’s risk management approach. The high-risk assessment continues to be a top priority and we will maintain our emphasis on identifying high-risk issues across government and on providing insights and sustained attention to help address them, by working collaboratively with Congress, agency leaders, and OMB. As part of this effort, with the new administration and Congress in 2017 we hope to continue to participate in regular meetings with the incoming OMB Deputy Director for Management and with top agency officials to discuss progress in addressing high-risk areas. Such efforts have been critical for the progress that has been made. This high-risk update is intended to help inform the oversight agenda for the 115th Congress and to guide efforts of the administration and agencies to improve government performance and reduce waste and risks. Thank you, Chairman Chaffetz, Ranking Member Cummings, and Members of the Committee. This concludes my testimony. I would be pleased to answer any questions. For further information on this testimony, please contact J. Christopher Mihm at mihmj@gao.gov or (202) 512-6806. Contact points for the individual high-risk areas are listed in the report and on our high-risk website. Contact points for our Congressional Relations and Public Affairs offices may be found on the last page of this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | The federal government is one of the world's largest and most complex entities: about $3.9 trillion in outlays in fiscal year 2016 funded a broad array of programs and operations. GAO's high-risk program identifies government operations with greater vulnerabilities to fraud, waste, abuse, and mismanagement or the need for transformation to address economy, efficiency, or effectiveness challenges. 
This biennial update describes the status of high-risk areas listed in 2015 and actions that are still needed to assure further progress, and identifies new high-risk areas needing attention by Congress and the executive branch. Solutions to high-risk problems potentially save billions of dollars, improve service to the public, and strengthen government performance and accountability. GAO uses five criteria to assess progress in addressing high-risk areas: (1) leadership commitment, (2) agency capacity, (3) an action plan, (4) monitoring efforts, and (5) demonstrated progress. Since GAO's last high-risk update, many of the 32 high-risk areas on the 2015 list have shown solid progress. Twenty-three high-risk areas, or two-thirds of all the areas, have met or partially met all five criteria for removal from the High-Risk List; 15 of these areas fully met at least one criterion. Progress has been possible through the concerted efforts of Congress and leadership and staff in agencies. For example, Congress has enacted over a dozen laws since GAO's last report in February 2015 to help address high-risk issues. GAO removed 1 high-risk area on managing terrorism-related information because significant progress had been made to strengthen how intelligence on terrorism, homeland security, and law enforcement is shared among federal, state, local, tribal, international, and private sector partners. Sufficient progress was made to remove segments of 2 areas related to supply chain management at the Department of Defense (DOD) and gaps in geostationary weather satellite data. Two high-risk areas expanded—DOD's polar-orbiting weather satellites and the Department of the Interior's restructuring of offshore oil and gas oversight. Several other areas need substantive attention, including VA health care, DOD financial management, ensuring the security of federal information systems and cyber critical infrastructure, resolving the federal role in housing finance, and improving the management of IT acquisitions and operations. GAO is adding 3 areas to the High-Risk List, bringing the total to 34: Management of Federal Programs That Serve Tribes and Their Members. GAO has reported that federal agencies, including the Department of the Interior's Bureaus of Indian Education and Indian Affairs and the Department of Health and Human Services' Indian Health Service, have ineffectively administered Indian education and health care programs and inefficiently developed Indian energy resources. Thirty-nine of 41 GAO recommendations on this issue remain unimplemented. U.S. Government's Environmental Liabilities. In fiscal year 2016 this liability was estimated at $447 billion (up from $212 billion in 1997). The Department of Energy is responsible for 83 percent of these liabilities and DOD for 14 percent. Agencies spend billions each year on environmental cleanup efforts, but the estimated environmental liability continues to rise. Since 1994, GAO has made at least 28 recommendations related to this area; 13 are unimplemented. The 2020 Decennial Census. The cost of the census has been escalating over the last several decennials; the 2010 Census was the costliest U.S. Census in history at about $12.3 billion, about 31 percent more than the 2000 Census (in 2020 dollars). The U.S. Census Bureau (Bureau) plans to implement several innovations—including IT systems—for the 2020 Census. Challenges in successfully implementing these innovations, along with other risks, could undermine the Bureau's ability to conduct a cost-effective census.
Since 2014, GAO has made 30 recommendations related to this area; however, only 6 have been fully implemented. This report contains GAO's views on progress made and what remains to be done to bring about lasting solutions for each high-risk area. Perseverance by the executive branch in implementing GAO's recommended solutions and continued oversight and action by Congress are essential to achieving greater progress. |
The Medicaid program is one of the largest social programs in the federal budget and one of the largest components of state budgets. Although it is one federal program, Medicaid consists of 56 distinct state-level programs created within broad federal guidelines and administered by state Medicaid agencies. Each state develops its own Medicaid administrative structure for carrying out the program. It also establishes eligibility standards; determines the type, amount, duration, and scope of covered services; and sets payment rates. Each state is required to describe the nature and scope of its program in a comprehensive plan submitted to CMS, with federal funding depending on CMS’s approval of the plan. In general, the federal government matches state Medicaid spending for medical assistance according to a formula based on each state’s per capita income. In fiscal year 2004, the federal contribution ranged from 50 to 77 cents of every dollar spent on medical assistance. For most state Medicaid administrative costs, the federal match rate is 50 percent. For skilled professional medical personnel engaged in program integrity activities, such as those who review medical records, 75 percent federal matching is available. States and CMS share responsibility for protecting the integrity of the Medicaid program. States are responsible for ensuring proper payment and recovering misspent funds. CMS has a role in facilitating states’ program integrity efforts and seeing that states have the necessary processes in place to prevent and detect improper payments. With varying levels of staff and resources, states conduct Medicaid program integrity activities that include screening providers and monitoring provider billing patterns. CMS requires that states collect and verify basic information on potential providers, including whether they meet state licensure requirements and are not prohibited from participating in federal health care programs. CMS also requires that each state Medicaid agency have certain information processing capabilities, including a Medicaid Management Information System (MMIS) and a Surveillance and Utilization Review Subsystem (SURS). The SURS staff use claims data to develop statistical profiles on services, providers, and beneficiaries to identify potential improper payments. They refer suspected overpayments or overutilization cases to other units in the Medicaid agency for corrective action and potential fraud cases to their state’s Medicaid Fraud Control Unit for investigation and prosecution. Medicaid Fraud Control Units can, in turn, refer some cases to the HHS OIG, the Federal Bureau of Investigation (FBI), and the Department of Justice for further investigation and prosecution. State Medicaid programs have experienced a wide range of abusive and fraudulent practices by providers. States have prosecuted providers that bill for services, drugs, and supplies that are not authorized or are not provided. States’ investigators have also uncovered deliberate provider upcoding—billing for more expensive procedures than were actually provided—to increase their Medicaid reimbursement. In some cases, they have prosecuted providers for marketing irregularities, such as offering cash, free services, or gifts to induce referrals. While the covert nature of these schemes makes it difficult to quantify the dollars lost to Medicaid fraud or abuse, recent cases provide examples of substantial financial losses.
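To put the federal-state matching arrangement described at the beginning of this section in concrete terms before turning to those cases, the following is a minimal sketch of how the federal medical assistance percentage is computed. The report does not spell out the formula; the sketch reflects the statutory computation as we understand it (a 45 percent reduction factor applied to the squared ratio of state to national per capita income, with a 50 percent floor and an 83 percent ceiling), and the income figures used are hypothetical, chosen only for illustration.

    def fmap(state_pci, national_pci, floor=0.50, ceiling=0.83):
        """Federal medical assistance percentage (FMAP) for one state.

        share = 1 - 0.45 * (state per capita income / national per capita income) ** 2,
        bounded by the statutory floor and ceiling.
        """
        share = 1 - 0.45 * (state_pci / national_pci) ** 2
        return min(max(share, floor), ceiling)

    # Hypothetical figures: a state with per capita income at 80 percent
    # of the national average would have an FMAP of about 71 percent.
    print(round(fmap(24_000, 30_000), 3))  # 0.712

Under this formula, states with per capita income sufficiently above the national average are held to the 50 percent floor, while lower-income states receive a larger federal share, which is consistent with the 50 to 77 cent range observed in fiscal year 2004.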
As shown in table 1, recent cases range from a nearly $1.6 million state case, which involved billing for transportation services that were never provided as well as deliberate upcoding, to a $50 million nationwide settlement with a major pharmaceutical and equipment supplier over illegal marketing practices. States take various approaches to conducting program integrity activities that can result in substantial cost savings. Tightened enrollment controls allow states to more closely scrutinize those providers considered to be at high risk for improper billing. Through provider screening, stricter enrollment procedures, and reenrollment programs, states may prevent high-risk providers from enrolling or remaining in their Medicaid programs. Some states require providers to use advanced technologies to confirm beneficiary eligibility before services are rendered. States also use information systems that afford them the ability to query multiple databases efficiently in order to identify improper claims and types of providers and services most likely to foster problems. In addition, state legislatures have assisted their Medicaid agencies by directing that certain preventive or detection controls be used, or by broadening the sanctions they can use against providers that bill improperly. In general, states target their program integrity procedures to those providers that pose the greatest financial risk to their Medicaid programs. They may focus on types of providers whose billing practices have exhibited unusual trends or that are not subject to state licensure. States may also focus on individual providers that have been excluded from the program in the past or for other reasons. For such providers, most states impose more rigorous enrollment checks than the minimum required by CMS. Expanded measures applied to high-risk providers include on-site inspections of the applicant’s facility prior to enrollment, criminal background checks, requirements to obtain surety bonds that protect the state against certain financial losses, and time-limited enrollment. Thirty-four of the states that completed our inventory reported using at least one of these enrollment controls. Twenty-nine states reported conducting on-site inspections for providers considered at high risk for inappropriate billing before allowing them to enroll or reenroll in their Medicaid programs. Such visits help validate a provider’s existence and generate information on its service capacity. Illinois and Florida officials reported that performing on-site inspections of some providers’ facilities is a valuable part of their statewide Medicaid provider enrollment control efforts. For each targeted provider group, Illinois Medicaid staff inspect the facilities, inventory, and vehicles (in the case of nonemergency transportation providers). Officials told us that their on-site inspections prevented 49 potential providers that did not meet requirements from enrolling. By not approving these providers to bill Medicaid, Illinois officials estimated that the state avoided a total of $1 million in potentially improper payments for 2001 and 2002. Florida uses a contractor to conduct on-site inspections of potential providers. Since April 2003, Florida Medicaid officials have required their contractor to randomly select and inspect 10 percent of all new applicants, including pharmacies, physicians, billing agents, nurses, and other types of providers.
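Florida’s 10 percent random-selection approach can be sketched in a few lines. The applicant identifiers, the sample-size rule, and the use of a simple pseudorandom draw below are illustrative assumptions, not a description of Florida’s or its contractor’s actual systems.

    import random

    def select_for_inspection(new_applicants, rate=0.10, seed=None):
        """Randomly choose a share of new provider applicants for on-site inspection."""
        rng = random.Random(seed)
        sample_size = max(1, round(rate * len(new_applicants)))
        return rng.sample(new_applicants, sample_size)

    # Hypothetical applicant identifiers for illustration only.
    applicants = [f"applicant-{i:04d}" for i in range(250)]
    selected = select_for_inspection(applicants, rate=0.10, seed=42)
    print(len(selected))  # 25 applicants drawn for inspection

In practice a fixed seed would be omitted, so that providers cannot predict which applications will be inspected.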
Thirteen states reported that they conduct criminal background checks for certain high-risk providers rather than relying solely on applicants’ self-disclosures. These background checks entail verifying with law enforcement agencies the information given in provider enrollment applications regarding criminal records. As of December 2003, states conducting criminal background checks included New Jersey (for employees of pharmacies, clinical laboratories, transportation services, adult medical day care, and physician group practices), Wisconsin (for employees of licensed agencies, such as home health care agencies), and Illinois (for employees of nonemergency transportation providers). Four states that conduct criminal background checks also have the authority to require surety bonds for the targeted providers. Surety bonds, also known as performance bonds, protect the state against financial loss in case the terms of a contract are not fulfilled. Florida officials established a $50,000 bonding requirement for durable medical equipment (DME) suppliers, independent laboratories, certain transportation companies, and non-physician-owned physician groups. In Washington, home health agencies must be Medicare-certified to participate in the state’s Medicaid program. Medicare requires a surety bond of $50,000 or 15 percent of annual Medicare payments to the home health agency based on the agency’s most recent cost report to CMS, whichever is greater. Twenty-five states require all of their Medicaid providers to periodically reapply for enrollment. This process allows state officials to verify provider information such as medical specialty credentials and ownership and licensure status. Eleven states reported having probationary and time-limited enrollment policies specifically for high-risk providers, with reenrollment requirements ranging from 6 months to 3 years. Examples of their probationary and reenrollment policies follow: California officials estimated avoiding over $200 million in Medicaid expenditures in state fiscal year 2003 by increasing scrutiny of new provider applications and placing providers in provisional status for the first 12 to 18 months of their enrollment. Those who continue to meet the standards for enrollment and have not been terminated are converted automatically to enrolled provider status. In Illinois, nonemergency transportation providers are on probation for the first 180 days of their enrollment. Medicaid officials explained that this probationary period gives the state time to monitor the provider’s billing patterns and conduct additional on-site inspections, as needed. They said that any negative findings uncovered during the probationary period would result in a provider’s immediate termination without cause, meaning the provider could not grieve the termination decision. Nevada officials reported that certain types of providers located in the state—including dentists, DME suppliers, and home health agencies—are permitted to enroll for only a 1-year period and must reapply each year to continue billing Medicaid. Out-of-state providers are limited to a 3-month enrollment period and must reapply to continue to bill the Nevada program. Wisconsin officials reported that the state requires nonemergency transportation providers to reenroll annually, while all other types of providers must submit new enrollment applications every 3 years. Many states deter fraud, abuse, and error by using advanced technologies and keeping their provider rolls up to date.
States seek to enhance program integrity activities by investing in information technologies that enable them to preauthorize services and improve their data processing capabilities. They also contract with companies that specialize in claims and utilization review—analyses of claims to identify aberrant billing patterns—to augment their in-house capabilities. In addition, nearly all states take steps to eliminate paying claims billed under unauthorized provider numbers. Most states use advanced technology to prevent improper payments by requiring providers to validate beneficiary eligibility before services are rendered. For example, 32 states use online systems that require pharmacies to obtain state approval confirming a beneficiary’s eligibility before filling a prescription. Using a different technology, New York implemented a system that stores information on the magnetic strip of a beneficiary’s Medicaid card, which also includes the beneficiary’s photo. By swiping the card, providers are able to verify eligibility before providing a service. In another application, New York uses technology to track prescribing patterns and curb overutilization. New York officials told us that physicians ordering drugs and medical supplies must use the state’s interactive telephone system to obtain payment authorization numbers. This system leads physicians through a menu-driven series of questions about patient diagnosis and treatment alternatives before an authorization number is given. Officials estimated that during the 6-month period from April to September 2003, the state saved $15.4 million by using its interactive phone system for prior approvals. In addition to verifying beneficiary eligibility and controlling utilization, many states also use technology to better target their claims review efforts. Of the 47 states that completed our inventory, 34 reported targeting their reviews to claims from high-risk providers. These reviews entail verifying the appropriateness of the services billed by, and payments made to, a provider within a certain period. Twenty-one of the 34 states reported using advanced information technology to more effectively pinpoint aberrant billing patterns. These states developed data warehouses to store several years of information on claims, providers, and beneficiaries in integrated databases, and they use data-mining software to look for unusual patterns that might indicate provider abuse. Additional software detects claims with incongruous billing code combinations. For example, a state can link related service claims, such as emergency transportation invoices and hospital emergency department claims for the same client. States that use these technologies to enhance their targeted reviews include the following: New York officials reported that targeted reviews of claims submitted by part-time clinics, mobile radiology service providers, midwives, and physician assistants saved an estimated $24.9 million in state fiscal years 2002 through 2003. Ohio officials reported that targeted reviews by Ohio’s in-house utilization review staff saved an estimated $14 million in state fiscal years 2000 through 2002. Texas officials reported recouping over $18.9 million in state fiscal year 2003. Officials also noted that the state’s targeted reviews and queries enabled them to identify weaknesses in state payment safeguards. 
For example, the state identified hospital “unbundling”—billing separately for services that were already included in a combined reimbursement—through its analysis of claims data. Some states rely on contractors to supply claims review expertise that either is lacking in-house or that supplements existing staff resources. Of the states completing our national inventory, 24 states use contractors to review Medicaid claims either before or after payments are made. Colorado used contractors to increase the volume of claims reviewed. Kansas reported that its contractor’s 2003 review of hospital inpatient claims resulted in recovering over $4.7 million. North Carolina officials estimated that, since 1999, the state’s contractors’ reviews of inpatient claims have yielded about a 4-to-1 return on investment. Out-of-date information increases the risk that Medicaid will pay individuals who are not eligible to bill the program. For instance, in California, individuals were found to have falsely billed the Medicaid program using the provider billing numbers of retired practitioners. Forty-three states reported that, at a minimum, they cancel or suspend inactive provider billing numbers. For example: New Jersey deactivates billing numbers that have been inactive for 12 months. To reactivate their numbers, providers must submit their requests using their office letterhead. If a number is reactivated and there is no billing activity within 6 months, New Jersey will again deactivate the number. North Carolina notifies providers with billing numbers that have been inactive for 12 months before taking any action. The state terminates the number if the provider does not respond within 30 days and updates the state’s provider database each month, listing which billing numbers have been terminated. Many states have made Medicaid program integrity a priority, either through directives to employ certain preventive or detection controls or by expanding enforcement authority to use against providers that bill improperly. In some states, legislative initiatives have encouraged Medicaid program integrity units to adopt information technology; in others, legislation has expanded Medicaid agencies’ authority to investigate providers and beneficiaries and impose sanctions. Of the states that completed our inventory, 24 reported having legislation mandating sanctions against fraudulent providers or beneficiaries. Examples of legislative activities from 2 states are as follows: New Jersey: Under a 1996 law, all licensed prescribers and certain licensed health care facilities are required to use tamper-proof, nonreproducible prescription order blanks. State Medicaid officials estimated annual savings of at least $6 million since the law’s implementation in 1997. The law also made prescription forgery a third-degree felony. Texas: In September 2003, Texas law consolidated responsibility for Medicaid program integrity in the Office of Inspector General in the Health and Human Services Commission and funded 200 additional positions to investigate Medicaid fraud. The legislation also expanded the state’s powers to conduct claims reviews, impose prior authorization and surety bond requirements, and issue subpoenas. The law also required that the state explore the feasibility of using biometric technology—such as fingerprint imaging—as an eligibility verification tool. Texas budget officials estimated that over a 2-year period, net savings would exceed $1 billion.
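As a simplified illustration of the automated claims screening described above, the sketch below flags component procedure codes billed separately alongside a combined code for the same beneficiary, provider, and date of service, the pattern at issue in hospital unbundling. The bundling rule, codes, and claim records are hypothetical placeholders; actual SURS and data warehouse implementations apply far larger rule sets to far larger claim files.

    from collections import defaultdict

    # Hypothetical bundling rule: a combined code and the component codes
    # it already covers (illustrative placeholders, not actual billing codes).
    BUNDLES = {"COMBO-100": {"PART-101", "PART-102"}}

    def flag_unbundled(claims):
        """Flag component codes billed alongside a combined code for the
        same beneficiary, provider, and date of service."""
        by_visit = defaultdict(set)
        for claim in claims:
            key = (claim["beneficiary"], claim["provider"], claim["date"])
            by_visit[key].add(claim["code"])
        flags = []
        for visit, codes in by_visit.items():
            for combo, parts in BUNDLES.items():
                if combo in codes and codes & parts:
                    flags.append((visit, sorted(codes & parts)))
        return flags

    claims = [  # hypothetical claim records
        {"beneficiary": "B1", "provider": "P9", "date": "2003-06-01", "code": "COMBO-100"},
        {"beneficiary": "B1", "provider": "P9", "date": "2003-06-01", "code": "PART-101"},
    ]
    print(flag_unbundled(claims))  # [(('B1', 'P9', '2003-06-01'), ['PART-101'])]

The same pattern of grouping claims by client and date and applying rules to the grouped codes also underlies the linkage of related service claims described above, such as pairing emergency transportation invoices with hospital emergency department claims for the same client.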
CMS has provided states with information, tools, and training to improve their Medicaid program integrity efforts. The agency has funded a pilot that measures payment accuracy rates and another pilot that analyzes provider billing patterns across the Medicare and Medicaid programs. In addition, CMS has facilitated states’ sharing of information on program integrity issues and related federal policies. Also, CMS has conducted occasional reviews of state program integrity operations. However, these reviews are infrequent and limited in scope. CMS is conducting a 3-year Payment Accuracy Measurement (PAM) pilot to develop estimates of the level of accuracy in Medicaid claims payments, taking into account administrative error and estimated loss due to abuse or fraud. At its conclusion, in fiscal year 2006, PAM will become a permanent, mandatory program—to be known as the Payment Error Rate Measurement (PERM) initiative—satisfying requirements of the Improper Payments Information Act of 2002. Under PERM, states will be expected to ultimately reduce their payment error rates by better targeting program integrity activities in their Medicaid programs and the State Children’s Health Insurance Program (SCHIP) and tracking their performance over time. PERM is intended to develop an aggregate measure of states’ claims payment errors as well as error rates for seven health care service areas— inpatient hospital services, long-term care services, independent physicians and clinics, prescription drugs, home and community-based services, primary care case management, and other services and supplies. CMS proposes developing annual national error rate estimates from rates developed by one-third of the states rather than requiring each state to compute an error rate each year. CMS further proposes that in the 2-year period after a state determines its error rate, the state develop and implement a plan to address the causes of improper payments uncovered in its review. CMS is in the third and final year of PAM. Each year, CMS tested various measurement methodologies and expanded participation to additional states. CMS used information from the 9 states participating in PAM’s first year, fiscal year 2002, to help refine the measurement methodologies for subsequent years. CMS also constructed a single model to be used by all 12 states participating in the second year of PAM, which began in fiscal year 2003. Those states that reported on Medicaid fee-for-service payment accuracy had rates ranging from 81.4 percent to 99.7 percent. Sources of inaccurate payments included incomplete documentation of a service, inappropriate coding, clerical errors, as well as provision of medically unnecessary services. In PAM’s final year, fiscal year 2004, the 27 participating states will include in their claims reviews payments made under SCHIP and verification of recipient eligibility, among other things. Beginning in fiscal year 2006, the PAM pilot will transition into the PERM initiative to produce both state-specific and national estimates of Medicaid program error rates. Although state responses to CMS’s pilot were generally positive, program integrity officials raised concerns about the cyclical nature of the permanent program. Officials in several states—including Illinois, Louisiana, and North Carolina—indicated concern that the 3-year cycle presents significant staffing challenges. 
They contend that it is impractical for a state to employ sufficient staff, with the necessary expertise, to perform these functions only once every 3 years. Officials in other states, such as New York and Washington, expressed concern that the measurement effort might result in diverting staff from ongoing, and potentially more productive, program integrity activities. In its April 2004 final report on the second year of the pilot, CMS identified high state staff turnover and limited availability of medical records as obstacles that kept some states from completing their pilots on time. In another effort to support states’ program integrity activities, CMS facilitates the sharing of health benefit and claims information between the Medicaid and Medicare programs. For example, it arranged for state Medicaid agency officials to gain access to confidential provider information contained in Medicare’s restricted fraud alerts (a warning against emerging schemes), provider suspension notices, and databases. One of the Medicare-Medicaid information-sharing activities is a data match pilot that received funding from several sources. The purpose of this state-operated pilot is to identify improper billing and utilization patterns by matching Medicare and Medicaid claims information on providers and beneficiaries. Such matching is important, as fraudulent schemes can cross program boundaries. CMS initiated the Medicare-Medicaid data match pilot in California in September 2001. CMS estimated that in its first year, the pilot achieved a 21-to-1 return on investment, with about $58 million in cost avoidance, savings, and overpayment recoupments to the Medicaid and Medicare programs. In addition, over 80 cases were opened against suspected fraudulent providers. For example, the pilot identified the following: One provider billed more than 24 hours a day. Although the Medicare claims alone were not implausible, once the Medicare and Medicaid dates of service were matched, the provider showed up as billing for more than a reasonable number of hours in a day. Several providers serving beneficiaries eligible for both programs purposely submitted flawed Medicare bills, received full payment from Medicaid based on the denied Medicare claims, then resubmitted corrected Medicare bills and were paid again. In assessing the results of the California pilot, CMS officials noted challenges that delayed implementation for about a year. These included time-consuming activities such as negotiating data-sharing agreements with the contractors that process Medicare claims and reconciling data formatting differences in Medicare and Medicaid claims. CMS officials believe that these challenges were largely due to the novel nature of the effort and that implementation should proceed more smoothly in other states. In fiscal year 2003, CMS expanded the data match pilot to six additional states: Florida, Illinois, New Jersey, North Carolina, Pennsylvania, and Texas. CMS also sponsors a Medicaid fraud and abuse technical assistance group (TAG), which provides a forum for states to discuss issues, solutions, resources, and experiences. TAG meets monthly by teleconference and convenes annually in one location. Each of four geographic areas— Midwest, Northeast, South, and West—has two TAG delegates from state Medicaid program integrity units who participate in the teleconferences. Any state may participate in the teleconferences and 18 do so regularly. 
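Returning to the Medicare-Medicaid data match pilot described above, the check that surfaced a provider billing more than 24 hours in a day can be sketched as follows. The claim records and hours figures are hypothetical, and the pilot’s actual matching of Medicare and Medicaid claims involved negotiated data-sharing agreements, format reconciliation, and far larger data sets.

    from collections import defaultdict

    def implausible_days(medicare_claims, medicaid_claims, max_hours=24):
        """Sum billed hours per provider per date across both programs and
        return provider-days whose combined total exceeds a plausible ceiling."""
        totals = defaultdict(float)
        for claim in list(medicare_claims) + list(medicaid_claims):
            totals[(claim["provider"], claim["date"])] += claim["hours"]
        return {key: hours for key, hours in totals.items() if hours > max_hours}

    # Hypothetical claims: each program's hours alone look plausible,
    # but the combined total for the provider-day does not.
    medicare = [{"provider": "P1", "date": "2002-03-05", "hours": 14.0}]
    medicaid = [{"provider": "P1", "date": "2002-03-05", "hours": 13.5}]
    print(implausible_days(medicare, medicaid))  # {('P1', '2002-03-05'): 27.5}

A corresponding check against resubmitted bills, comparing originally denied Medicare claims with later Medicaid and Medicare payments for the same service, would follow the same join-then-test structure.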
TAG delegates discuss concerns raised by the states in their geographic regions and convey information on agenda items to their states. For example, state officials told us that they have discussed issues such as new data systems and other fraud and abuse detection tools. TAG members also use this forum to alert one another to emerging schemes. In one instance, TAG members discussed a drug diversion operation involving serostim—a drug used to treat AIDS patients for degenerative weight loss—from a Pennsylvania mail-order pharmacy. Serostim—which costs about $5,000 for a month’s supply—was being sold to body builders to enhance muscle tissue. According to New York officials, over a 2-year period, the state’s Medicaid expenditures for serostim increased from $4 million to $50 million. Following this discovery, several states, including New York, instituted prior authorization policies for the drug. In addition, states use TAG to communicate and propose policy changes to CMS. For example, through TAG, the states proposed that CMS modify the federal 60-day repayment rule. This rule implements a statutory requirement that state Medicaid agencies refund the federal portion of any identified overpayments within 60 days of discovery, except in cases where providers or other entities have filed for bankruptcy or gone out of business. Some states participating in TAG contend that complying with the 60-day repayment rule discourages states from pursuing complex cases for which recoveries may prove difficult and instead gives them an incentive to focus on easy overpayment cases. CMS has supported and endorsed legislative proposals to amend the statute in the case of overpayments resulting from fraud or abusive practices, proposing that the federal share be returned 60 days after recovery versus 60 days after discovery. However, CMS’s efforts to change the policy have not been successful. CMS officials point to compliance reviews of the states’ program integrity activities as the agency’s principal means for exercising oversight. CMS conducts on-site reviews to assess whether state Medicaid program integrity efforts comply with federal requirements, such as those governing provider enrollment, claims review, utilization control, and coordination with each state’s Medicaid Fraud Control Unit. Such on-site reviews typically last 5 days and are announced 30 days in advance. If reviewers find states significantly out of compliance, they may revisit the states to verify that they have taken corrective action. However, teams conducting these reviews do not evaluate the effectiveness of state activities in reducing improper payments. Staffing and funding constraints have limited this oversight effort. From January 2000 through December 2003, CMS completed reviews of 29 states. At its current pace of conducting eight state compliance reviews each year, CMS would not begin a second round of nationwide reviews before fiscal year 2007. CMS officials explained that the agency can conduct only eight reviews per year, given the resources allocated for Medicaid program integrity. For fiscal year 2004, CMS allocated eight staff nationally—about four full-time equivalent (FTE) staff in headquarters and four FTEs distributed across the agency’s 10 regional offices—and an operating budget of $26,000 for overseeing the states’ Medicaid program integrity activities, including the cost of conducting compliance reviews. This level of funding represents a $14,000, or 35 percent, decline from the previous year.
At the peak of its funding in fiscal year 2002, CMS’s operating budget for these activities was about $80,000. According to agency officials, the size of the federal Medicaid program integrity group relative to its responsibilities has resulted in its use of Medicare’s program integrity resources to help implement pilot projects and conduct technical assistance activities. From the states’ perspective, compliance reviews have provided useful information for identifying needed areas of improvement and potential best practices. For example, Michigan officials told us that after CMS’s review, they took steps to strengthen their provider enrollment activities. In another state, CMS discovered numerous areas of noncompliance. The state agency’s provider enrollment processes did not require applicants to disclose prior criminal convictions or business ownership and control. The state agency also did not investigate potential instances of fraud and abuse identified by its SURS unit or beneficiary complaints, or make the required referrals to the state Medicaid Fraud Control Unit. As a result of these findings, CMS required the state to develop a corrective action plan. About a year later, the review team revisited the state and learned that it had begun to implement corrective actions. CMS has pointed to its compliance reviews of the states’ program integrity activities as providing the agency with information on the states’ strengths and vulnerabilities to improper payments. However, as we reported in February 2002, these structured site reviews focus on state compliance and do not evaluate the effectiveness of the states’ fraud and abuse prevention and detection activities for reducing improper payments. The varied and substantial cases of Medicaid fraud or abuse that have been uncovered around the country reaffirm the need for Medicaid agencies to safeguard program dollars. Such losses have prompted program integrity units and legislatures in many states to take active roles in prevention and detection efforts. In their attempts to limit improper payments, states have pursued a broad range of methods, such as tightened provider enrollment and advanced claims review techniques. As some states report identifying substantial cost savings, further enhancements in program integrity activities are likely to generate positive returns on such investments. At the same time, there may be a disparity between the level of CMS resources devoted to Medicaid program integrity and the program’s vulnerability to financial losses. On its current schedule for conducting state program integrity compliance reviews, CMS will not obtain a programwide picture of states’ prevention and detection activities more than once every 6 years. Moreover, because these reviews are limited in scope, CMS does not evaluate states’ effectiveness in addressing improper payments. In addition, findings from the payment accuracy pilot indicate a need for CMS to further enhance state efforts to prevent and detect payment errors. In written comments on a draft of this report, CMS officials took issue with our observation that the level of resources devoted to federal oversight of states’ program integrity activities may be inconsistent with the financial risks to the program. They pointed out that the agency’s program integrity work should be viewed as part of its broader financial management of state Medicaid programs. 
Officials noted that 65 financial management staff in CMS regional offices review Medicaid expenditures, conduct financial management reviews, provide technical assistance to states on financial policy issues, and analyze state cost allocation and administrative claiming plans. Officials also stated that the agency expects to hire 100 new Medicaid financial management staff this fiscal year and has contracted with HHS OIG to perform additional auditing. (See app. II.) We commend CMS for the actions it has begun to take to address its Medicaid financial management challenges. As we have reported in recent years, CMS had fallen short in providing the level of oversight required to ensure states’ Medicaid financial responsibility. When fully implemented, CMS’s efforts to increase the number of staff dedicated to reviewing the states’ financial management reports should help it strengthen the fiscal integrity of Medicaid’s state and federal partnership. However, financial management and program integrity, while related functions, are not interchangeable. Financial management focuses on the propriety of states’ claims for federal reimbursement, such as the matching, administrative, and disproportionate share funds that CMS provides the states. In contrast, program integrity—the focus of this report—addresses federal and state efforts to ensure the propriety of payments made to providers. Unlike the commitment to expand resources for Medicaid financial management activities, CMS has not indicated a similar commitment to enhancing its support and oversight of states’ program integrity efforts. CMS officials also provided technical comments, which we incorporated into the report where appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days after its date. At that time, we will send copies of this report to the Secretary of HHS, Administrator of CMS, appropriate congressional committees, and other interested parties. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. We will also make copies available to others upon request. If you or your staff have any questions about this report, please call me at (312) 220-7600. Another contact and key contributors to this report are listed in appendix III.
Appendix I: States’ Approaches to Medicaid Program Integrity
Key terms used in this appendix are defined as follows. A surety bond may protect the state against certain financial losses. A data warehouse stores information on claims, providers, and beneficiaries in an integrated database. Data mining is the analysis of large databases to identify unusual utilization patterns. Data matching and modeling are techniques that allow comparisons of providers within specialties to determine normative patterns in claims data so that aberrant patterns can be identified. Smart technology is software that analyzes patterns in claims data and feeds the information back into the system to identify new patterns. A drug formulary is a list of prescription medications approved for coverage.
In addition to the contact named above, Enchelle Bolden, Helen Chung, Hannah Fein, Shirin Hormozi, and Geri Redican made key contributions to this report. | During fiscal year 2002, Medicaid—a program jointly funded by the federal government and the states—provided health care coverage for about 51 million low-income Americans.
That year, Medicaid benefit payments reached approximately $244 billion, of which the federal share was about $139 billion. The program is administered by state Medicaid agencies with oversight provided by the Centers for Medicare & Medicaid Services (CMS) in the Department of Health and Human Services. Medicaid's size and diversity make it vulnerable to improper payments that can result from fraud, abuse, or clerical errors. States conduct program integrity activities to prevent, or detect and recover, improper payments. This report provides information on (1) the types of provider fraud and abuse problems that state Medicaid programs have identified, (2) approaches states take to ensure that Medicaid funds are paid appropriately, and (3) CMS's efforts to support and oversee state program integrity activities. To address these issues, we compiled an inventory of states' Medicaid program integrity activities, conducted site visits in eight states, and interviewed CMS's Medicaid program integrity staff. Various forms of fraud and abuse have resulted in substantial financial losses to states and the federal government. Fraudulent and abusive billing practices committed by providers include billing for services, drugs, equipment, or supplies not provided or not needed. Providers have also been found to bill for more expensive procedures than actually provided. In recent cases, 15 clinical laboratories in one state billed Medicaid $20 million for services that had not been ordered, an optical store falsely claimed $3 million for eyeglass replacements, and a medical supply company agreed to repay states nearly $50 million because of fraudulent marketing practices. States report that their Medicaid program integrity activities generated cost savings by applying certain measures to providers considered to be at high risk for inappropriate billing and by generally strengthening their program controls for all providers. Thirty-four of the 47 states that completed our inventory reported using one or more enrollment controls with their high-risk providers, such as on-site inspections of the applicant's facility, criminal background checks, or probationary or time-limited enrollment. States also reported using information technology to integrate databases containing provider, beneficiary, and claims information and conduct more efficient utilization reviews. For example, 34 states reported conducting targeted claims reviews to identify unusual patterns that might indicate provider abuse. In addition, states cited legislation that directed the use of certain preventive or detection controls or authorized enhanced enforcement powers as lending support to their Medicaid program integrity efforts. At the federal level, CMS is engaged in several initiatives designed to support states' program integrity efforts; however, its oversight of these state efforts is limited. CMS initiatives include two pilots, one to measure the accuracy of each state's Medicaid claims payments and another to identify aberrant provider billing by linking Medicaid and Medicare claims information. CMS also provides technical assistance to states by sponsoring monthly teleconferences where states can discuss emerging issues and propose policy changes. To monitor Medicaid program integrity activities, CMS teams conduct on-site reviews of states' compliance with federal requirements, such as referring certain cases to the state agency responsible for investigating Medicaid fraud. 
In fiscal year 2004, CMS allocated $26,000 and eight staff positions nationally for overseeing the states' Medicaid program integrity activities, including the cost of compliance reviews. With this level of resources, CMS aims to review 8 states each year until all 50 states and the District of Columbia have been covered. From January 2000 through December 2003, CMS conducted reviews of 29 states and, at its current pace, would not begin a second round of reviews before fiscal year 2007. This level of effort suggests that CMS's oversight of the states' Medicaid program integrity efforts may be disproportionately small relative to the risk of serious financial loss. |
As identified and described in figure 1, we reported in October 2016 that agencies have frequently used five open innovation strategies to collaborate with citizens and other external parties, and encourage their participation in agency initiatives. As we described in our October 2016 report, agencies can use these open innovation strategies singularly, or in combination as part of a larger open innovation initiative, to achieve a number of different purposes. Agencies can use them to efficiently engage a broad range of citizens and external stakeholders in developing new ideas, solutions to specific problems, or new products ranging from software applications to physical devices. For example, DOE’s Wave Energy Prize competition was designed to achieve a ground-breaking advancement in technology that produces electricity by capturing energy from ocean waves. DOE’s goals for the competition were to engage a diverse collection of developers to create more efficient devices that would double the energy captured from ocean waves. According to the Wave Energy Prize website, the competition, which was concluded in November 2016, produced a significant advancement in this technology. Ultimately, 4 of 9 finalist teams—selected from 92 that registered to participate—developed devices that surpassed DOE’s goal, while the winning team achieved a five-fold improvement. Agencies can also use these strategies to enhance collaboration among citizens and external stakeholders or organizations interested in an issue, and leverage the resources, knowledge, and expertise of citizens and stakeholders to supplement that which is available within the agency. These contributions enhance the agency’s capacity and its ability to achieve goals that would be more difficult to reach without this additional capacity or expertise. In another example highlighted in our October 2016 report, the Federal Highway Administration’s (FHWA) Every Day Counts ideation initiative helps states implement innovations that improve the efficiency and quality of their highway transportation and construction projects. Every two years, FHWA engages a range of stakeholders to identify innovative technologies and practices that merit more widespread deployment. Once innovations are identified, FHWA creates deployment teams comprised of experts from FHWA, state transportation agencies, and industry to help states and other stakeholders implement these innovations. According to an FHWA report, this open innovation initiative has had significant and measurable effects in participating states. In one case, FHWA reported that deploying Accelerated Bridge Construction as an Every Day Counts innovation has allowed states to reduce the time it takes to plan and construct bridges by years, significantly reducing traffic delays, road closures, and often project costs. In our October 2016 report, we also identified seven practices and 18 related key actions, described in figure 2, that federal agencies can use to help effectively design, implement, and assess their open innovation initiatives. To do so, we analyzed and synthesized suggested practices from relevant federal guidance and literature, including public and business administration journals, and publications from research organizations, as well as interviews with experts and agency officials with experience implementing open innovation initiatives. 
In that report, we also illustrated how the application of these practices helped agencies effectively implement open innovation initiatives in a way that achieved intended results. Three agencies with government-wide responsibilities—OMB, OSTP, and GSA—have taken steps to support and encourage the use of open innovation strategies by federal agencies. OMB is the largest component of the Executive Office of the President and is responsible for helping agencies across the federal government implement the commitments and priorities of the President. Among other things, it develops policies and provides direction on the use of Internet-based technologies that make it easier for citizens to interact with the federal government. OSTP is also a component of the Executive Office of the President. Among other things, it has responsibility for leading interagency efforts to develop and implement sound science and technology policies. GSA is responsible for helping federal agencies obtain the facilities, products, and services that they need to serve the public. Among other things, GSA builds, provides, and shares technology, platforms, and processes that support initiatives to invite participation from stakeholders and make data available to the public. OMB, OSTP, and GSA developed government-wide policies and implementation guidance to encourage agencies to use open innovation strategies to, among other things, advance their missions, improve programs and services, and inspire new ideas to address specific challenges. According to staff from these agencies, policies and guidance do this by: clarifying the legal authorities available to use specific strategies; highlighting the benefits and results that agencies can produce using these strategies; and suggesting actions for agency staff to take when designing and implementing an open innovation initiative. Table 1 lists the government-wide policies and guidance that we, in consultation with OMB, OSTP, and GSA staff, identified as key for implementing each open innovation strategy. As table 1 shows, the focus of the guidance varies across the different types of open innovation strategies. For crowdsourcing and citizen science, as well as prize competitions and challenges, toolkits provide step-by-step guidance for agency staff to follow when implementing those types of initiatives. OMB, OSTP, and GSA staff identified the U.S. Public Participation Playbook as key guidance for a variety of approaches to engage the public, including two open innovation strategies: ideation and open dialogues. They also identified the Open Data Policy and Project Open Data as the key policy and source of guidance, respectively, that cover open data collaboration initiatives. They told us that these and other resources have a broader focus, generally to help agencies inventory, manage, and release their open data. However, OMB, OSTP, and GSA staff have not yet developed more detailed, step-by-step guidance for implementing specific initiatives where agencies use events or websites to engage and collaborate with outside stakeholders using open data. As one example of policy and guidance for an open innovation strategy, in September 2015, the Director of OSTP issued a memorandum encouraging agencies to use crowdsourcing and citizen science to enhance scientific research and address problems by drawing on the voluntary participation of the public.
The memorandum also directs agencies to improve their ability to use crowdsourcing and citizen science by identifying internal agency coordinators responsible for seeking opportunities to use this strategy to meet agency goals. Crowdsourcing and citizen science coordinators from four of the selected agencies—DOE, HHS, EPA, and NASA—told us that having this clear statement of support has helped enhance awareness of and interest in the strategy. The memorandum was supplemented by the Crowdsourcing and Citizen Science Toolkit, which provides a step-by-step list of actions agencies can take to carry out a specific initiative, as shown in figure 3. These officials from DOE, EPA, HHS, and NASA told us that the toolkit is an important educational resource for agency staff, helping those interested in using crowdsourcing and citizen science understand what the strategy can be used to achieve and the full range of actions they can take to develop and implement an initiative. An official from GSA’s Technology Transformation Service (TTS) told us GSA is planning updates to the toolkit to reflect the authorities provided by, and requirements of, the American Innovation and Competitiveness Act (AICA) for crowdsourcing and citizen science initiatives. In addition to developing relevant policies and implementation guidance, staff from OMB, OSTP, and GSA told us they also provide ongoing support for the use of open innovation strategies across the federal government by: answering questions from agency staff, sharing lessons learned, and providing advice and assistance; hosting training sessions and other events for agency staff to highlight how agencies can use open innovation strategies and how agencies have had success using them; and matching agency staff in need of assistance with mentors, advisors, or partners in other agencies. Table 2 provides additional information on the types of support they provide. One example of staff supporting the use of open innovation strategies is GSA’s Challenge.gov program management team. According to an official from GSA’s TTS, this team is the primary point of contact for staff from other federal agencies, fielding their questions about the site or other aspects of managing a prize competition or challenge. To help familiarize agency staff with how to manage a prize competition or challenge and use the Challenge.gov platform, the team conducts training sessions and has developed on-demand videos and webinars that are available through the website. In addition, to provide tailored assistance, the team created the Challenge and Prize Mentorship Program, which matches staff seeking support with experienced practitioners from other agencies who can assist with all aspects of planning and implementing an initiative using this strategy. OMB, OSTP, and GSA staff also support communities of practice for various open innovation strategies. According to staff from these agencies, these communities provide venues to bring experienced and knowledgeable agency staff together so that they can learn from one another. Through regularly scheduled meetings and e-mail lists, the communities allow members to share experiences and lessons learned, seek and provide advice and assistance, and ensure members are up to date on relevant issues, such as the development and release of new resources or upcoming initiatives.
OMB, OSTP, and GSA staff also have leveraged the expertise of community members to develop toolkits and case studies, which capture leading practices and lessons learned to ensure they are readily available to inform future initiatives. Collectively, these communities, which are described in table 3, help staff across the federal government communicate and collaborate with one another. The federal Challenges and Prizes Community of Practice further illustrates the support such communities can provide. According to a GSA official, this community was created in 2010 to bring together staff from across the federal government interested in that open innovation strategy. According to information on the Challenge.gov website, the community is composed of federal employees representing a wide range of agency perspectives and experience levels. Staff from OSTP's Technology and Innovation Division and GSA's TTS told us that they have worked with volunteers from the community to develop resources, including the federal Challenges and Prizes Toolkit, to try to meet the diverse needs of its members. The official from GSA's TTS also told us that they plan to work with members of the community to revise the toolkit to reflect the updated authorities and requirements for prize competitions and challenges in AICA. This official further stated that the Challenge.gov program team provides ongoing support to this community by maintaining its e-mail list, which allows for information sharing between members, and working with members to identify topics and arrange speakers for the community's quarterly meetings. For instance, the Challenge.gov team helped organize the March 2017 meeting, which focused on how AICA updates agency authorities and requirements for prize competitions and challenges. According to staff from OMB, OSTP, and GSA, their agencies developed several websites, described in table 4, that support the use of open innovation strategies by: making relevant guidance and information easy to access in a single location; providing agency staff access to additional resources, such as applications, templates, and documents from other agencies, that they can replicate to save time and effort; and providing online platforms for agencies to reach the public with information on their open innovation initiatives. As examples of such websites, OMB, OSTP, and GSA developed Project Open Data and Data.gov to help agencies manage and release their open data and meet requirements under OMB's Open Data Policy. According to officials from DOE, EPA, HHS, HUD, and NASA, Project Open Data has made it easier for them to find the guidance they need by making it accessible in one place. Project Open Data also encourages agencies to hold open data events to engage with data users, and to use these opportunities to expand awareness about their open data, and collect feedback and ideas. The website provides an overview of the main types of open data community events that federal agencies hold, as well as instructions and templates to help agencies organize, publicize, and carry out various types of events. To increase awareness about the open data events that agencies are holding, and to help agencies increase participation in these events, GSA has also added an "Events" page to Data.gov. As shown in figure 4, this provides agencies with an additional online platform to inform potential participants about open data events they will be hosting. 
We determined that key government-wide guidance developed by OMB, OSTP, and GSA to support the implementation of various open innovation strategies reflects practices for effective implementation to differing extents. Several factors led to these variances, including differing scopes and methodologies used in their development and the dates when they were issued. We identified key guidance for the implementation of initiatives using each open innovation strategy in consultation with OMB, OSTP, and GSA staff. As noted above, staff from these agencies identified the U.S. Public Participation Playbook as key guidance for both ideation initiatives and open dialogues. Therefore, we present our assessment of guidance for those two strategies together below. OMB, OSTP, and GSA staff also identified the Open Data Policy and Project Open Data as key sources of guidance for the implementation of open data collaboration initiatives. These sources suggest some actions for agencies to take when implementing such initiatives. However, as noted earlier, they have a broader focus to help agencies inventory, manage, and release their open data. As such, our assessment found that this guidance generally did not reflect practices for effective implementation of individual open data collaboration initiatives. Our October 2016 report identified two key actions agencies should take when attempting to select the most appropriate strategy or strategies for their initiatives: 1. Clearly articulate the purpose(s) they hope to achieve by engaging the public. As we found in that report, agencies have used open innovation strategies for a number of purposes, including to develop solutions to specific problems and to leverage the expertise of external stakeholders to enhance an agency's ability to achieve a goal. 2. Consider the agency's capability to implement a strategy. Considerations should include whether agency leadership supports the use of a strategy, and whether they have legal authorities, financial and technological resources, and staff available to support an initiative. As shown in figure 5, we determined that guidance for all but one of the strategies—open data collaboration—fully reflects these key actions. For example, the Challenges and Prizes Toolkit encourages agencies to clarify their purpose in engaging the public, including by developing a detailed understanding of the specific problem they want to address through the competition. In addition, when considering a prize competition or challenge, the toolkit recommends that agency staff: secure the approval of agency leadership to move forward; work with legal counsel to identify the most appropriate authority under which to conduct the competition, and to address any other potential legal issues; estimate the budget and resource needs of a competition; and ensure the availability of staff to monitor or run the challenge throughout its life cycle. Guidance for open data collaboration initiatives does not address the first key action—defining the purpose—and partially reflects the second key action. Although it encourages agencies to consider agency capacity and applicable laws, regulations, and policies, and the availability of resources, it does not address leadership support. To guide agencies in designing and implementing initiatives, and to provide those involved with a clear understanding of what they are working to achieve, we identified three key actions that agencies should take: 1. Define specific and measurable goals for their initiatives. 
2. Identify performance measures to assess progress toward those goals. 3. Align the goals of their open innovation initiatives with the agency’s broader mission. This final action helps to demonstrate the relevance and value of an initiative to others in the agency, and reinforces the connection between the agency’s goals and the day-to-day actions of those carrying out the initiative. As shown in figure 6, our assessment shows that the guidance for all but one of the strategies fully reflects these three key actions. For example, the Federal Crowdsourcing and Citizen Science Toolkit encourages agencies to define their goals for an initiative while considering how they will measure and evaluate these outcomes. In addition, the toolkit encourages agencies to identify the specific measures that they will use to track the initiative’s outputs and activities, such as the number of samples collected or training sessions held, and to determine whether the initiative is achieving its goals. Lastly, it states that those managing initiatives should ensure that the initiative is aligned with their agency’s mission, and be able to specify how it will help the agency meet its goals. As also shown in figure 6, we determined that guidance for the implementation of open data collaboration initiatives partially addresses the first key action. Specifically, the guidance identifies illustrative goals for various types of open data events. For example, it states that the goal of a “hackathon” is to build relationships with a community of developers and designers, and to see immediate tools and prototypes built using open data. However, as written, these illustrative goals are not measurable, nor does the guidance directly encourage agencies to develop such goals for their own specific initiatives. Furthermore, the guidance does not reflect the other two key actions. To leverage the experience, insights, and expertise of those interested or engaged in the area to be addressed by an initiative, agencies should: 1. Identify and engage with external stakeholders. These are individuals or organizations that share an interest in the issue being addressed and may already be active in related efforts. For a federal agency, external stakeholders can include representatives of relevant nonprofit organizations and foundations, community or citizens’ groups, universities and academic institutions, the private sector, members of Congress and their staffs, other federal agencies, and state and local governments. 2. Look for opportunities to partner with other groups and organizations. Partners are organizations and individuals that play a direct role in designing and implementing an initiative. They provide staff capacity, resources, administrative and logistical support, assistance with communications and community building, or ongoing advice and expertise. In considering potential partners, agencies should look for groups and organizations that would be interested in, or could benefit from, the results of an open innovation initiative. As shown in figure 7, we determined that guidance for all of the open innovation strategies fully reflects both key actions. For example, guidance for implementing open data collaboration efforts in OMB’s Open Data Policy encourages agencies to engage with various stakeholders, including entrepreneurs and innovators in the private and nonprofit sectors. 
This engagement could then lead to these stakeholders participating in specific initiatives to use agency data to build products, applications, and services. Guidance on the Project Open Data website also suggests agencies use open data collaboration events to bring together various stakeholders, such as entrepreneurs, technology leaders, and policy experts, to explore available data and discuss new ideas for tools that the private sector could create using agency data. Project Open Data also encourages agencies to invite and partner with other government entities and private companies when developing and carrying out these events. To ensure that tasks and time frames are clear for all involved in implementing and managing an open innovation initiative, agencies should document roles and responsibilities and develop implementation plans for those initiatives. The plans should clearly identify specific tasks, the parties responsible for completing them, and the time frames for doing so. They should also outline when and how the agency will reach out to various participant groups, and how data will be collected and evaluated to determine results. As shown in figure 8, we determined that guidance for crowdsourcing and citizen science initiatives, and prize competitions and challenges, fully reflects these key actions. For example, the Challenges and Prizes Toolkit encourages agencies to create an implementation plan that clearly outlines the roles and responsibilities for those involved in an initiative. As the toolkit emphasizes, managing a competition can involve a wide range of government and contract staff with varying responsibilities, from project management to providing subject matter expertise and technical, communications, and legal support. Because of this, it is important to be clear about the role of each team member, the duties each is assigned, and how his or her work fits into the timeline for the competition. As part of this, the toolkit also states that the plan should specify the procedures that will be used to collect relevant data during the course of the challenge. Lastly, the toolkit encourages agencies to develop a communications plan that defines the audiences they want to reach with information on the initiative; the websites, news outlets, social media, and other outlets it will use to reach them; and the messages they will use to reach potential participants and encourage them to participate. In contrast, as also shown in figure 8, we found that guidance for open data collaboration initiatives does not reflect either of these key actions, and guidance for open dialogue and ideation initiatives partially reflects these actions. Specifically, the guidance for open dialogue and ideation initiatives encourages agencies to document roles and responsibilities, and to develop a plan that specifies the tasks and time frames for recruiting participants. However, the guidance does not discuss planning for data collection. To reach the right potential participants, motivate them to participate, and keep partners and participants engaged throughout the implementation of an initiative, we previously identified four key actions agencies should take. They should: 1. Use multiple outlets and venues to reach potential participants. In their outreach, agencies should use the initiative’s website, social media, press releases, journals, newsletters, and professional conferences and networks to reach potential participants. 2. 
Craft announcements in a way that motivates people to participate. In doing so, agencies should address the interests of potential participants and explain why it is important for them to participate. 3. Engage with participants to provide answers to questions and any necessary assistance. While an initiative is ongoing, agencies can use websites, question-and-answer sessions, e-mails, and other forms of communication to engage with participants and provide ongoing support and assistance. 4. Hold regular check-ins with those involved in implementation. Agencies should hold regular meetings to help ensure that those working to implement an initiative are aware of the status of efforts and have an opportunity to raise and discuss any concerns. As shown in figure 9, our assessment found that guidance for crowdsourcing and citizen science fully reflects all of these key actions, while guidance for the other strategies reflects some, but not all, of these key actions. For example, the Crowdsourcing and Citizen Science Toolkit encourages agencies to find the best platforms for reaching communities with information about their initiatives, and to reach out and communicate with potential participants using media and messages that respond to their interests. The toolkit also encourages agencies to let those participating in an initiative know how they can engage with the agency to provide information and feedback, and states that the agency should pay attention to shifts in participant needs and interests over time. Lastly, the toolkit instructs agency staff to hold regular meetings with an initiative's implementation team so that everyone can understand how the project is progressing, and discuss new developments and concerns. As also shown in figure 9, for the other strategies, we determined that guidance fully reflected actively engaging participants, but none of them reflected holding regular check-ins with those involved in implementation. Guidance for ideation and open dialogue initiatives, and prize competitions and challenges reflects two additional key actions—using multiple outlets and crafting announcements to participants' interests—while the guidance for open data collaboration initiatives does not. After an initiative has concluded, or at regular intervals if it is a long-standing or continuous effort, agencies should take three steps to assess and report results and identify potential improvements: 1. Assess the data collected during implementation to determine whether the initiative met its goals. 2. Analyze feedback from partners and participants. Agencies should identify lessons learned about what went well and what would need to be adjusted or improved for similar initiatives in the future. 3. Report publicly on results achieved and lessons learned. Doing so can demonstrate the value of an initiative and sustain a dialogue within the community of interested organizations and individuals. As shown in figure 10, we determined that guidance for crowdsourcing and citizen science initiatives and prize competitions and challenges fully reflected these key actions. For example, the Challenges and Prizes Toolkit states that agencies should assess the data they have collected to determine how well the competition achieved its goals, and to identify other results and outcomes it produced, such as quantifiable improvements to existing solutions and technologies. 
The toolkit also states that agencies should conduct an after-action assessment to capture feedback, lessons learned, and other institutional knowledge so the agency can improve its challenges in the future. Specifically, it suggests that agencies consider the following questions when conducting this assessment: What worked well? What would you have done differently in challenge design looking back? How might agency clearance and coordination go more smoothly next time? What could have been improved in judging, communications, and operations? Did the evaluation process result in the selection of the best submissions? What were any unintended consequences, both positive and negative? Lastly, the toolkit reminds agencies to complete required public reporting, sharing results, lessons learned, and success stories, which can be critical to improving how challenges are designed and implemented. As figure 10 also illustrates, our assessment found that guidance for open data collaboration initiatives did not reflect any of these key actions, and guidance for ideation and open dialogue initiatives reflected most of them. Specifically, that guidance encourages agencies to collect and analyze data to assess goal achievement and results, conduct an after-action review, and report publicly on results. It does not, however, encourage agencies to report publicly on lessons learned through their experience. Given the time and resources that agencies may invest to build or enhance communities of partners and participants for open innovation initiatives, our October 2016 report identified two key actions agencies should take to sustain these connections over time: 1. Acknowledge and, as appropriate, reward the efforts and achievements of partners and participants. 2. Seek ways to maintain communication with members of the community. Doing so can keep them informed of future initiatives and other opportunities, and facilitate communication within the community. As figure 11 shows, we determined that none of the guidance fully reflects the first action. The guidance for crowdsourcing and citizen science, and prize competitions and challenges, both encourage agencies to acknowledge the contributions of participants, and reward participants with monetary and nonmonetary incentives (as appropriate). However, guidance for these strategies does not also encourage agencies to acknowledge the contributions of partner organizations, which can provide critical resources, expertise, and capacity for open innovation initiatives. The guidance for ideation, open dialogues, and open data collaboration initiatives did not reflect this key action. As also shown in figure 11, our assessment found that guidance for most open innovation strategies fully reflects the second action; however, guidance for open data collaboration does not. For example, the Federal Crowdsourcing and Citizen Science Toolkit encourages agencies to continue actively engaging partners and participants, and help direct participants to other initiatives that might interest them. The toolkit also encourages agencies to create opportunities for participants to socialize and communicate with each other by supporting discussion forums, and identifying group leaders that can help carry forward a discussion among members of the community. 
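The assessments summarized in figures 5 through 11 apply a consistent three-level rating to each key action: guidance fully reflects a key action when it suggests all of the steps we identified, partially reflects it when it suggests some but not all steps, and does not reflect it when it suggests none (see the scope and methodology discussion later in this report). For illustration only, the brief Python sketch below encodes that rating rule; the function name and the sample data are hypothetical and are not drawn from GAO or agency guidance.

```python
# Illustrative sketch only: encodes the three-level rating rule described in
# this report's scope and methodology. The names and sample data below are
# hypothetical and do not come from any GAO or agency guidance document.

def rate_key_action(steps_in_guidance, steps_in_key_action):
    """Return the rating of one key action against a guidance document."""
    matched = steps_in_key_action & steps_in_guidance
    if matched == steps_in_key_action:
        return "fully reflected"
    elif matched:
        return "partially reflected"
    return "not reflected"

# Hypothetical example: a key action with two component steps, of which the
# guidance being assessed suggests only one.
key_action = {"define specific goals", "make goals measurable"}
guidance = {"define specific goals"}

print(rate_key_action(guidance, key_action))  # prints "partially reflected"
```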
In instances where guidance does not fully reflect the practices and key actions identified in our October 2016 report, agency staff may not be aware of certain steps they should take to better ensure the success of their open innovation initiatives. In part, the various guidance resources developed by OMB, OSTP, and GSA do not fully reflect some practices and key actions because those resources almost all pre-date our report. We also used different methodologies and sources in pulling together our practices and key actions than they did in developing their guidance. As noted earlier, we developed our practices by analyzing and synthesizing suggested practices and key actions from a wide range of relevant literature, as well as interviews with nongovernmental experts and agency officials. Our scope also was to identify practices applicable to implementing individual initiatives using any open innovation strategy. The guidance developed by OMB, OSTP, and GSA, by contrast, is generally strategy specific and based on the experiences of and lessons learned from individuals involved in the relevant communities of practice. For example, as was highlighted earlier, OSTP and GSA staff worked with volunteers from the Challenges and Prizes Community of Practice to develop the Challenges and Prizes Toolkit. Other factors contributed to the guidance for open data collaboration not fully reflecting our practices and key actions for effectively implementing open innovation initiatives. As was previously described, OMB, OSTP, and GSA developed various resources, including websites and guidance, to help agencies inventory, manage, and release their open data. According to staff from these agencies, the focus of their efforts and resources generally has been assisting agencies with the management and release of open data, while providing some support for agencies interested in mobilizing and collaborating with participants to use open data (i.e., open data collaboration), which we identify as an open innovation strategy. The existing guidance does suggest some actions for agencies to take when implementing open data collaboration initiatives. In addition, staff from OMB’s Office of the Chief Information Officer and GSA’s TTS shared several ways in which they and members of the Open Data Working Group provide support for staff interested in carrying out open data collaboration initiatives. For example, they told us that when agency staff ask for advice on how to conduct such initiatives, they often match the individual making the request with experienced staff in another agency, or rely upon members of the Open Data Working Group to answer their questions. An official from GSA’s TTS stated that because members of this community meet regularly, there are frequent opportunities to share best practices and lessons learned to inform the planning of any current or future agency open data collaboration initiatives. In addition, the official told us that agency staff can also use the open data community’s e-mail list to seek and receive assistance from fellow members. Although OMB, OSTP, and GSA staff identified various resources that are available to support agency staff interested in carrying out open data collaboration initiatives, those resources do not include detailed and consistent, step-by-step guidance for the implementation of such initiatives. The agencies we selected for this review—DOE, HHS, HUD, DOT, EPA, and NASA—have put in place various resources to support the use of open innovation strategies. 
These resources—policies and implementation guidance, supporting staff and organizations, and websites—complement what exists at the government-wide level. The selected agencies have generally developed, or are developing, resources for the open innovation strategies they use frequently, to provide staff with tailored guidance and support. This helps ensure staff carry out initiatives in a manner that is informed by the agency’s previous experience and that is consistent with agency procedures. In some instances, agency officials told us that they have not developed certain agency-level resources, generally for one or more of the following reasons: they found government-wide resources to be sufficient; they have not used a strategy, or use it infrequently, limiting the need for agency-specific resources; or they have not yet had sufficient experience using a strategy to be able to craft policies or other resources informed by experiences and lessons learned. Officials from the selected agencies told us that agency-level policies and guidance help to raise awareness among agency staff of the value of using open innovation strategies, and how these strategies can be an effective tool for engaging the public to generate new ideas and solutions. In instances where agency leaders have approved and issued these policies, this also helps demonstrate to agency staff that leadership supports using these strategies. The policies and guidance at these agencies (examples provided in table 5) also clarify agency-specific steps that staff should take when implementing an initiative, which, according to agency officials, helps ensure that staff take actions necessary to meet requirements and successfully carry out open innovation initiatives. As illustrated in table 5, NASA’s Policy Directive on Challenges, Prize Competitions, and Crowdsourcing Activities, which was signed by the NASA Administrator and issued in February 2014, encourages agency staff to use these open innovation strategies. The directive states that it is NASA’s policy to encourage the use of challenge activities to obtain solutions and stimulate innovation. According to an official from NASA’s Center of Excellence for Collaborative Innovation (CoECI), the directive was created to supplement government-wide policies and guidance. It defines the roles that certain agency officials, such as the Associate Administrator for the Space Technology Mission Directorate, are to play in designing and implementing challenges. It also directs staff to develop and maintain agency-level best practices and implementation guidance for challenges. In keeping with this policy, in 2014, CoECI, which provides advice and support for NASA teams to design, implement, and evaluate challenges conducted through the NASA Tournament Lab, developed a Challenge Coordinator Toolkit. The toolkit provides a detailed checklist for CoECI challenge coordinators to follow when supporting teams that are conducting those challenges. According to a CoECI official, staff with varying levels of experience working on challenges serve as challenge coordinators. The toolkit is intended to ensure that coordinators consistently follow a set of specific procedures for managing all phases of a challenge being conducted using the NASA Tournament Lab, from completing necessary documentation before a challenge is launched to holding a close-out interview to capture feedback and lessons learned. See figure 12 for an overview of the steps highlighted in the toolkit. 
According to officials at the selected agencies, their agencies have created staff positions and internal organizations to serve as a source of information and guidance, sharing agency-specific and government-wide resources with staff who are conducting open innovation initiatives. In some instances, agencies have dedicated staff responsible for leading efforts to use a certain strategy in their agency and providing internal support to other staff interested in using the strategy. In other instances, agency officials told us they also have staff who, given their interest and experience using a strategy, have taken on responsibilities to support others in their agency, but are not devoted full time to these efforts. Some agencies also have brought together multiple staff to create internal organizations responsible for supporting the use of open innovation strategies. These staff and organizations—selected examples of which are highlighted in tables 6 and 7, respectively—can provide tailored advice and support based on their experience with these strategies. According to agency officials, this can involve helping to ensure staff meet requirements and take other actions to successfully propose, design, and implement an open innovation initiative. As noted in table 6, DOT's Chief Data Officer (CDO) told us that he provides technical support for the department's open data efforts. As part of his responsibilities, he works with staff to improve how the agency collects, manages, and publishes data. He also has coordinated and supported several open data events that have allowed DOT staff to collaborate with the public, including private sector companies using transportation data, state and local stakeholders, academic researchers, and those working on new technologies. He told us that these events have served to help the agency better understand how people are using transportation data and what improvements they would like to see made. For example, in 2015, the CDO worked with staff from throughout DOT to organize a Transportation "Datapalooza." According to the CDO, the event allowed DOT to both highlight its own data initiatives and learn about the applications and products that private companies, developers, and technologists are developing with DOT data. It also gave DOT staff an opportunity to speak directly with data users, identifying areas where they might work collaboratively to expand the use of the data and improve quality. According to the EPA memorandum that established it in July 2015, EPA's Challenge Review Team (ChaRT), mentioned above in table 7, is intended to help ensure that agency prize competitions and challenges meet applicable legal requirements, and have an adequate scientific foundation, financial support, and a clear communications plan. As shown in figure 13, ChaRT brings together officials from several EPA offices who, according to an official from EPA's Office of Research and Development, need to be involved in prize competitions and challenges, creating an efficient "one-stop" approach to reviewing and approving proposals. According to the official from EPA's Office of Research and Development, the officials who serve on ChaRT provide legal, financial, and scientific expertise and support to EPA teams developing proposals for prize competition or challenge initiatives. 
As part of the team’s review processes, it requires EPA staff to complete a ChaRT review form to ensure that (1) new initiatives will address agency needs, (2) leadership is aware of and supports the challenge, and (3) teams have completed required steps and addressed any concerns before they can proceed. According to the memorandum that established it, ChaRT members must concur before any prize competition or challenge can be announced. In addition to dedicating staff and organizations, according to officials from selected agencies, some of the agencies also created communities of practice (COP) for those staff interested or involved in open innovation initiatives, particularly citizen science. These include NASA’s Citizen Science COP; EPA’s Citizen Science COP and recently-created Prize and Challenge COP; and within HHS, citizen science working groups at the Centers for Disease Control and Prevention and National Institutes of Health (NIH). Like government-wide communities, those COPs and working groups at the agency level can provide a venue for sharing experiences and lessons learned from designing and implementing open innovation initiatives, or can be used to develop guidance for agency staff. For example, according to NIH officials, the agency’s Citizen Science Working Group was established in 2012 to address interest among NIH staff about how the agency could use crowdsourcing and citizen science methods. NIH officials told us the working group brings together, as of March 2017, approximately 60 staff from 14 of the 27 NIH institutes and centers to discuss various topics and share their experiences with and knowledge of implementing these initiatives. The group also hosts presentations by outside speakers. Agencies selected for this review have developed websites that, like the government-wide websites described above, can provide a central, publicly available location for stakeholders and participants to access information, including agency data that can be used in open data collaboration initiatives and details of specific projects in which they can volunteer to participate. However, officials from DOE, EPA, HHS, and NASA told us that websites developed by agencies, unlike the government-wide websites, are often designed to reach specific audiences and stakeholders that may be likely to participate in the agencies’ open innovation initiatives and in some cases provide an online forum for these audiences to collaborate. Table 8 provides examples of websites that agencies have developed. For example, DOE’s Open Energy Data website, as shown in figure 14, provides energy data users and developers with access to DOE’s catalog of open data sets that they can use to conduct research and develop applications. In addition, it provides visitors with information on how to engage with DOE to provide feedback on the department’s open data. Open Energy Data also highlights and links to Open Energy Information (OpenEI), a website sponsored by DOE and other partner organizations that allows users to share datasets and collaborate on energy data initiatives. HUD’s Switchboard, an online ideation platform that was also highlighted in our October 2016 report, allows employees, stakeholders, and other members of the public to offer ideas for improving HUD’s processes, programs, and administration. According to a HUD official, the platform was originally created in 2009 to collect input for an update to the agency’s strategic plan. 
Over time, however, HUD expanded the purpose of the platform to make it a more general forum for the public and external stakeholders, as well as HUD employees. Now Switchboard is intended to ensure there is an avenue for anyone to reach HUD with any ideas or feedback that could improve the department. Beyond that, the website allows visitors to offer comments on other submissions, and vote in favor of those ideas that they support. It also helps designated HUD staff collect, review, and respond to the ideas submitted, after which they will work to determine whether they can be implemented. Figure 15 illustrates different aspects of Switchboard's capabilities. OMB, OSTP, GSA, and selected agencies have developed various resources to support the use of open innovation strategies. Of these, government-wide guidance is particularly important, as it helps agency staff understand the full range of actions they should take when designing, implementing, and evaluating an open innovation initiative. However, key government-wide implementation guidance, in particular for open data collaboration, does not always fully reflect demonstrated good practices and key actions. Better incorporating those practices and key actions would help ensure agency staff are aware of, and can take, the full range of steps to effectively design, implement, and assess their open innovation initiatives. Consistently applying these practices can help agencies ensure that their initiatives successfully achieve intended results. To help ensure federal agencies effectively design, implement, and assess open innovation initiatives in line with the practices and key actions identified in our past report, we make 22 recommendations for GSA, OMB, and OSTP to enhance relevant implementation guidance. We recommend that the Director of the Office of Management and Budget, the Director of the Office of Science and Technology Policy, and the Administrator of the General Services Administration enhance key guidance for open data collaboration initiatives to fully reflect the following 15 key actions: Clearly define the purpose of engaging the public; Consider the agency's capability to implement a strategy, including leadership support, legal authority, the availability of resources, and capacity; Define specific and measurable goals for the initiative; Identify performance measures to assess progress; Align goals of the initiative with the agency's broader mission and goals; Document the roles and responsibilities for all involved in the initiative; Develop a plan that identifies specific implementation tasks and time frames, including those for participant outreach and data collection; Use multiple outlets and venues to announce the initiative; Craft announcements to respond to the interests and motivations of potential participants; Hold regular check-ins for those involved in the implementation of the initiative; Collect and analyze data to assess goal achievement and results of the initiative; Conduct an after-action review to identify lessons learned and potential improvements; Report on results and lessons learned publicly; Acknowledge and, where appropriate, reward the efforts and achievements of partners and participants; and Seek to maintain communication with, and promote communication among, members of the community. 
We recommend that the Administrator of the General Services Administration enhance key guidance for ideation and open dialogue initiatives to fully reflect the following four key actions: Develop a plan that identifies specific implementation tasks and time frames, including those for participant outreach and data collection; Hold regular check-ins for those involved in the implementation of the initiative; Report on results and lessons learned publicly; and Acknowledge the efforts and achievements of partners that have contributed to the implementation of an initiative. We recommend that the Director of the Office of Science and Technology Policy and Administrator of the General Services Administration enhance key guidance for crowdsourcing and citizen science initiatives to fully reflect the key action to acknowledge the efforts and achievements of partners that have contributed to implementing an initiative. We recommend that the Director of the Office of Management and Budget and the Administrator of the General Services Administration enhance key guidance for prize competitions and challenges to fully reflect the following two key actions: Hold regular check-ins with those involved in the implementation of an initiative; and Acknowledge the efforts and achievements of partners who contributed to the implementation of an initiative. We provided a draft of the report to the Director of the Office of Management and Budget, the Acting Director of the Office of Science and Technology Policy, the Acting Administrator of the General Services Administration (GSA), the Secretary of Energy, the Secretary of Health and Human Services, the Secretary of Housing and Urban Development, the Secretary of Transportation, the Administrator of the Environmental Protection Agency, and the Acting Administrator of the National Aeronautics and Space Administration for comment. In its comments, reproduced in appendix II, GSA agreed with the recommendations in this report. Staff from OMB's Office of the Federal Chief Information Officer and Office of General Counsel provided oral comments on May 15, 2017, stating that OMB generally agreed with the recommendations in this report. In comments provided by email, OSTP's General Counsel stated that OSTP neither agreed nor disagreed with the recommendations in this report. She stated that, given their past and ongoing responsibilities related to open innovation in the federal government, OMB and GSA are best positioned to address these recommendations. She further stated that OSTP may support OMB and GSA in these efforts to the extent that OSTP has appropriate staff in the future. EPA, NASA, and OMB provided technical comments, which we incorporated as appropriate. DOE, DOT, HHS, and HUD informed us that they had no comments. We are sending copies of this report to interested congressional committees, the heads of the agencies identified above and other interested parties. This report will also be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6806 or mihmj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of our report. Key contributors to this report are listed in appendix III. 
As part of the federal performance management framework originally put into place by the Government Performance and Results Act of 1993 (GPRA), and updated and expanded by the GPRA Modernization Act of 2010 (GPRAMA), agencies are to identify the strategies and resources they will use to achieve their goals. GPRAMA also includes a provision for us to periodically review how implementing the act’s requirements is affecting agency performance. This report is part of our response to that mandate, and builds on our October 2016 report that described how agencies are using open innovation strategies, and identified practices to ensure they are implemented effectively. Specifically, this report (1) identifies key government-wide resources the Office of Management and Budget (OMB), the Office of Science and Technology Policy (OSTP), and the General Services Administration (GSA) put in place to support the use of open innovation strategies; (2) examines the extent to which key government-wide guidance reflects practices for effectively implementing these strategies; and (3) identifies resources that selected agencies have developed to support the use of open innovation strategies. To identify the various government-wide resources that OMB, OSTP, and GSA have developed, we reviewed relevant policy and guidance documents related to each open innovation strategy (see table 9 below). We also reviewed information available on relevant websites, including Citizenscience.gov, Challenge.gov, Digitalgov.gov, Data.gov, and Project Open Data, and corroborated information collected from these websites through interviews with relevant agency staff. Lastly, we interviewed staff from these agencies involved in efforts to support and encourage the use of various open innovation strategies across the federal government. In these interviews, we asked staff to identify key government-wide policies and guidance for each type of strategy, and to describe their roles and responsibilities in supporting and encouraging agency use of those strategies. We also asked about their collaboration with staff in other agencies, among other things. To determine the extent to which government-wide policies and guidance reflect practices and key actions for effectively implementing open innovation strategies, we first, in consultation with OMB, OSTP, and GSA staff, identified those that were considered key for each type of strategy. These key policies and guidance are listed in table 9. For each strategy type, we then compared key guidance contents to the seven practices and 18 related key actions, listed in figure 16, that we identified in October 2016. We identified those practices and key actions by analyzing and synthesizing suggested practices from relevant federal guidance and literature, including public and business administration journals, and publications from research organizations, as well as interviews with experts and agency officials with experience implementing open innovation initiatives. We then analyzed the content of the implementation guidance for each strategy to determine the extent to which it reflects the practices and key actions we previously identified. First, two analysts reviewed each source of guidance to identify all excerpts suggesting actions in line with those we identified. Next, a third analyst separately reviewed the guidance documents to verify the accuracy of the initial determinations, or identify those areas in need of additional discussion. 
The analysts involved in the first and second stages of this analysis then made final determinations about whether the actions suggested in the guidance reflected those we identified. We considered guidance to fully reflect a key action when it suggested steps in line with those outlined in our report. If guidance suggested some but not all steps, we considered it to partially reflect the key action. Finally, when guidance did not suggest any steps in line with a key action, we considered guidance to not reflect it. Lastly, to identify the resources that selected agencies have put in place, we reviewed agency policies and guidance, and websites related to the use and implementation of open innovation strategies from six agencies: the Departments of Energy (DOE), Health and Human Services (HHS), Housing and Urban Development (HUD), and Transportation (DOT); the Environmental Protection Agency (EPA); and the National Aeronautics and Space Administration (NASA). We selected these agencies based on various criteria, including the number and variety of open innovation strategies outlined in their individual agency Open Government Plans. These selections were also in line with suggestions from knowledgeable staff at OMB, OSTP, and GSA familiar with agencies that have actively used such strategies. In interviews with officials from these agencies, we asked them to identify agency-specific policies, procedures, or other guidance that has been developed to aid staff carrying out open innovation initiatives. Through this consultation, we were able to identify relevant policy documents and guidance developed by these agencies. For instance, we identified EPA’s memorandum establishing the agency’s Challenge Review Team, which is responsible for approving proposals for the agency’s prize competitions and challenges. Similarly, we were able to use this consultation to identify the Federal Highway Administration’s Public Involvement Techniques for Transportation Decision-Making guide, which provides staff with guidance on how to use various open dialogue approaches. We were also able to use these interviews to identify relevant websites the agencies have developed that support open innovation efforts. For instance, we identified the openNASA website, which provides the public with access to NASA’s datasets, code, and other tools, as well as information on opportunities to collaborate with others on the use of NASA’s data. Similarly, we also used this consultation to identify the National Institutes of Health’s (NIH) Biomedical Citizen Science Hub website, which is an online space for researchers and stakeholders interested in the use of citizen science in biomedicine to collaborate. In our interviews with officials supporting the use of various open innovation strategies at these six agencies, we asked them to describe, among other things: the steps staff would go through to design, implement, and evaluate a specific type of open innovation strategy; the support available to staff throughout that process; and the ways resources at the government-wide level have helped support the use of open innovation strategies in their agency. We conducted this performance audit from February 2016 to June 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Benjamin T. Licht (Assistant Director) and Adam Miles supervised the development of this report. Steven Putansu, Lauren Shaman, Erik Shive, Wesley Sholtes, and Andrew J. Stephens made significant contributions to this report. John Hussey and Donna Miller also made key contributions. James Cook, Eric Gorman, Danielle Novak, and Jason Vassilicos verified the information in the report. | To address the complex and crosscutting challenges facing the federal government, agencies need to effectively engage and collaborate with those in the private, nonprofit, and academic sectors; other levels of government; and citizens. Agencies are increasingly using open innovation strategies for these purposes. The GPRA Modernization Act of 2010 requires agencies to identify strategies and resources they will use to achieve goals. The act also requires GAO to periodically review how implementation of the act's requirements is affecting agency performance. This report identifies the open innovation resources developed by GSA, OMB, OSTP, and six selected agencies, and examines the extent to which key guidance reflects practices for effective implementation. To address these objectives, GAO identified various resources by reviewing relevant policies, guidance, and websites, and interviewing staff from each agency. GAO selected the six agencies based on several factors, including the number and type of open innovation initiatives outlined in their agency Open Government Plans. GAO also compared guidance to practices and key actions for effective implementation. Open innovation involves using various tools and approaches to harness the ideas, expertise, and resources of those outside an organization to address an issue or achieve specific goals. Agencies have frequently used several open innovation strategies—crowdsourcing and citizen science, ideation, open data collaboration, open dialogues, and prize competitions and challenges—to engage the public. Staff from the General Services Administration (GSA), Office of Management and Budget (OMB), and Office of Science and Technology Policy (OSTP) developed resources to support agency use of these strategies: to encourage use, clarify legal authorities, and suggest actions for designing and implementing an open innovation initiative; staff to advise and assist agency staff implementing initiatives and open innovation-related communities of practice; and websites to improve access to relevant information and potential participants. Six selected agencies—the Departments of Energy, Health and Human Services, Housing and Urban Development, and Transportation; the Environmental Protection Agency; and the National Aeronautics and Space Administration—also developed resources for those strategies they frequently use. These resources complement those at the government-wide level, providing agency staff with tailored guidance and support to help ensure they carry out initiatives consistent with agency procedures. For the open innovation strategies identified above, GAO determined that key government-wide guidance developed by GSA, OMB, and OSTP reflect to differing extents practices GAO previously identified for effectively implementing specific initiatives (see table). Several factors led to these variances, including differing scopes and methodologies used in their development, and when they were issued. 
Better incorporating these practices could help ensure agency staff are aware of, and are able to take, steps to effectively design, implement, and assess their initiatives. GAO recommends GSA, OMB, and OSTP enhance key guidance for each open innovation strategy to fully reflect practices for effective implementation. GSA and OMB generally agreed with these recommendations. OSTP neither agreed nor disagreed with the recommendations. |
According to DOE, 603 metric tons of highly enriched uranium and plutonium are at risk of nuclear material theft in Russia. This material, located at civilian research centers, naval fuel storage sites, and Russia’s nuclear weapons laboratories, can be used directly in a nuclear weapon without further enrichment or reprocessing. The material is considered to be highly attractive to theft because it (1) is not very radioactive and therefore relatively safe to handle and (2) can easily be carried by one or two people in portable containers or as components from dismantled weapons. The dissolution of the Soviet Union in 1991 and the subsequent social, political, and economic changes in Russia weakened the existing Soviet-era nuclear security systems. These systems placed a heavy emphasis on internal surveillance of nuclear workers and citizens and severe penalties for violations of nuclear security. The decline in economic conditions, late payment of wages to nuclear workers, and the rise of a strong criminal element increased the risk that employees or criminal elements in Russia would attempt to steal nuclear material for economic gain. Furthermore, Russian nuclear facilities lacked modern equipment that could quickly detect, delay, and respond to attempted thefts of nuclear material. Over the last 7 years, DOE has worked cooperatively with Russia to install modern nuclear security systems consisting of three components: Physical protection systems, such as fences around the buildings that contain nuclear material; metal doors protecting the rooms where material is stored; and video surveillance systems that monitor the storage rooms. Material control systems, such as seals attached to nuclear material containers that indicate whether material may have been stolen from the containers and badge systems that only allow authorized personnel into areas containing nuclear material. Material accounting systems, such as inventories of nuclear material and computerized databases that enable sites to track the amount and type of nuclear material contained in specific buildings. DOE’s Guidelines for Material Protection, Control, and Accounting Upgrades at Russian Facilities provide U.S. project teams with criteria for designing and installing security systems. The criteria were designed to achieve the greatest reduction to the risk of nuclear material theft within the program’s projected budget. While the guidelines are based on DOE’s physical security and material control and accounting requirements, and the International Atomic Energy Agency’s recommendations for physical protection, they are not as stringent as U.S. and international standards used to protect material at similar kinds of sites. According to the guidelines, installing security systems that use multiple components reduces the risk of theft by minimizing the reliance on any one component to detect and delay attempted thefts. Locating the components close to the material, such as around storage vaults and work areas, rather than at a site’s perimeter also reduces risk by minimizing the chance that a thief can bypass security systems and steal material. The guidelines also establish priorities for installing security systems on the basis of how easily the nuclear material being protected could be converted to nuclear weapons. Material that is more readily converted to nuclear weapons receives more extensive security systems than material that poses less of a proliferation risk. 
To maximize the amount of material that can be protected within the program's budget, DOE is also placing a priority on countering lower-level threats of theft, such as those from nonviolent individual employees or a small group of criminals, rather than higher-level threats, such as those from violent employees or terrorists equipped with explosives. DOE's Technical Survey Team reviews project documentation and meets with project team members to ensure that the installed systems meet DOE's guidelines for reducing the risk of nuclear material theft in Russia. The Team comprises eight national laboratory personnel with expertise in physical protection systems and material control and accounting for nuclear materials. The Technical Survey Team's reviews include (1) an estimate of the original risk of theft at the site and how the installed security systems will reduce it; (2) the extent to which project activities have reduced the risk of theft at the site, on the basis of completed systems or other risk-reduction activities; and (3) the extent to which the security systems are balanced with appropriate physical security and material control and accounting equipment and procedures. The Team also reviews the project work plans for each site at the beginning of the fiscal year to ensure that project teams are installing systems that are effective and are of the least cost. DOE installed completed and partially completed security systems in 115 buildings with about 32 percent of the 603 metric tons of weapons-usable nuclear material. We found that the systems that were installed are reducing the risk of nuclear material theft in Russia. DOE is not installing security systems in 104 buildings because Russia's Ministry of Atomic Energy (MINATOM), citing Russian national security concerns, has restricted access to buildings containing several hundred metric tons of nuclear material. DOE currently does not have a system in place to periodically measure the effectiveness of the systems to ensure that they continue to detect, delay, and respond to attempts to steal nuclear material. As of February 2001, DOE had installed completed and partially completed security systems in 115 buildings with about 192 metric tons, or about 32 percent, of the 603 metric tons of weapons-usable nuclear material. DOE installed completed systems in 81 buildings protecting about 86 metric tons, or about 14 percent, of the nuclear material. DOE has also installed partially completed security systems known as rapid upgrades in 34 additional buildings protecting about 106 metric tons, or about 18 percent, of the nuclear material. According to DOE, rapid upgrades consist of such things as bricking up windows in storage buildings; installing strengthened doors, locks, and nuclear container seals; establishing controlled access areas around nuclear material; and implementing procedures requiring that two people be present when nuclear material is handled. By installing rapid upgrades, DOE helps Russian sites establish basic control over nuclear material while U.S. project teams finish installing the security system. DOE officials consider a system to be completed when it includes such components as electronic sensors, motion detectors, and closed circuit television systems to detect intruders; central alarm stations, where guards can monitor cameras and alarms; and computerized material accounting systems. According to DOE, the program also has work under way on an additional 130 metric tons of nuclear material. 
Table 1 shows the number of buildings and types of sites where completed nuclear security systems have been installed, where rapid upgrades have been installed, where work has started but rapid upgrades have not been completed, and where work has not yet started. Our assessment that the installed systems are reducing the risk of nuclear material theft is based on the Technical Survey Team’s reviews of the security improvements at Russian sites, our visits to nine sites, and our discussions with DOE and Russian officials responsible for installing the systems. From January 1999 through September 2000, the Technical Survey Team reviewed projects at 30 of the 40 sites with nuclear material in Russia. Of the 30 sites reviewed, the Team found that the security systems installed or being installed for 22 sites are reducing the risk of theft. Specifically, the systems increased the site’s ability to detect, delay, and respond to an attempted theft or otherwise strengthened control over their nuclear materials at all times. To evaluate the projects, the Team used DOE’s criteria and determined (1) whether the project teams installed security systems on the basis of how easily the nuclear material being protected could be converted to nuclear weapons, (2) whether the systems were installed close to the nuclear material rather than at the sites’ perimeter, and (3) whether multiple components were installed to minimize reliance on any one component to prevent theft. The following are examples where the Technical Survey Team found that the systems as installed are reducing the risk of nuclear material theft: At the Mayak Production Association, a major producer of plutonium for Russia’s nuclear weapons program, DOE installed 1-ton interlocking concrete blocks over trenches containing over 5,000 containers of plutonium. (See fig. 1.) As of February 2001, the blocks were protecting over 15 metric tons of plutonium. Each container has a computerized bar code and tamper-resistant seal to help the site track its location and to show if any attempts have been made to open the container. Each block provides a barrier to delay a thief from gaining access to the material before being detected. In addition, the site’s ability to detect and respond to an attempted theft is reinforced with additional sensors, surveillance cameras, alarms, and communications systems. According to the Technical Survey Team, the blocks are effective against an adversary using sophisticated methods. At Navy Fuel Storage Sites 49 and 34 (located in Murmansk and Vladivostok, respectively), DOE helped the Russian Navy construct storage complexes to consolidate tens of tons of nuclear reactor fuel that were located in poorly protected sites in the Northern and Pacific Fleets. (Navy Site 49 is shown in fig. 2.) DOE, working with the Russian Navy, strengthened the walls and ceilings of the nuclear storage buildings and installed portal monitors for nuclear material, which scan people and vehicles entering and leaving facilities to ensure that they have not taken nuclear material from storage locations, video surveillance systems, alarms, and fences to increase the ability to detect a theft. In addition, DOE improved the guard forces’ ability to respond to an attempted theft by providing them with helmets, bulletproof vests, strengthened barriers that protect against gunfire, and a radio communication system. According to the Technical Survey Team, the systems have significantly reduced the risk of nuclear material theft at these sites. 
At the Institute of Physics and Power Engineering at Obninsk, DOE bricked up windows at several buildings that contain several tons of nuclear material and installed high-security vault doors and locks and access control systems. According to the Technical Survey Team, these measures reduce the risk of theft. The project team also developed an inventory strategy that reduced the time it takes to inventory items and encouraged the facility to place nuclear material that it seldom uses in sealed containers. According to the Team, these security improvements are consistent with the guidelines issued by the program. At six of the eight remaining sites, the Technical Survey Team’s reports indicated that activities undertaken to install security systems had achieved little or no risk reduction so far, while at the two remaining sites, it was too soon to tell if the systems were reducing risk. At two of the six sites (the Petersburg Nuclear Physics Institute and the Bochvar Institute), the systems that were installed did not meet the criteria for reducing risk because they were installed at the perimeter of the sites rather than close to the material. DOE’s project teams are currently taking actions to correct the problems. At two other sites—Sarov (also known as Arzamas-16, the primary nuclear weapons design laboratory in Russia) and Elektrostal (a MINATOM facility that fabricates reactor fuel rods of highly enriched uranium for the Russian Navy)—project teams did not have sufficient access to buildings to install systems in accordance with the guidelines. At Sarov, the project team gave Sarov personnel security system components to install in some of the buildings where the project team did not have physical access. However, according to the Technical Survey Team, while incremental improvements to security have occurred at Sarov, the risk of nuclear material theft remains high. At Elektrostal, DOE project teams were limited to providing security improvements only for low enriched uranium, which poses a low risk of proliferation if stolen. Because of the project team’s lack of access to buildings with highly enriched uranium, the program has decided not to enter into any new contracts at the site until access issues are resolved. At Tomsk-7, the team did not verify the type of material it was protecting and installed systems around material that, according to the Technical Survey Team, presented little proliferation risk. At the Lytkarino Research Institute of Scientific Instruments, the strengthened doors installed as part of the site’s rapid upgrades were ineffective, and according to the Team, needed to be replaced. In order to observe how the nuclear security systems are reducing the risk of theft in Russia, we visited nine nuclear sites in Russia where DOE installed systems. During our visits, we toured buildings where the installation of nuclear security systems was complete as well as buildings where work was ongoing or had not been started. We also discussed how the nuclear security systems were working with the Russian site officials and U.S. project team members who accompanied us on the tours. We saw site personnel demonstrate how they use the security systems, and we observed the multiple systems designed to detect or delay an outsider or employee attempting to steal material. The officials at the sites that we visited also showed us nuclear material storage rooms as well as rooms where employees work with the material. 
We observed the following systems and concluded that they were reducing risk: Storage vaults equipped with strengthened doors, locks, video surveillance systems, and alarms that can detect and delay thieves as they attempt to steal nuclear material. Central alarm stations where guards monitored the video surveillance systems. The guards were equipped with communications equipment to respond to alarms. Nuclear material containers equipped with computerized bar codes and tamper-resistant seals that allow site personnel to perform quick inventories of the material and determine whether containers were tampered with. Access and exit procedures that ensure that only authorized personnel are allowed into areas with nuclear materials. Nuclear material portal monitors that scan people and vehicles entering and leaving facilities to ensure that they have not taken nuclear material from storage locations. However, at three sites, we also observed some problems that appeared to decrease the effectiveness of the new systems. For example, one site left a gate to its central storage facility open and unattended during the day. (See fig. 3.) According to a site official, the gate is left open to allow employees to enter and leave the facility without having to use the combination locks on the gate. When the gate is open, the only other controlled access point is at the perimeter of the site. At another site that we toured, the guards did not respond to metal detectors that were set off when we entered the site, nuclear material portal monitors were not working, and alarm systems had exposed cabling that could allow an adversary to cut the cable and disable the alarm easily. At the third site, DOE had provided heavy metal containers that could be bolted to the floor to make it more difficult for an individual to gain access to the material. But some of the containers were empty, and instead, the site stored material in old containers that did not offer as much protection. In addition, this site did not have access controls, such as metal detectors or nuclear material portal monitors at locations where nuclear material is stored, and the guards did not check the identification of people entering the storage areas. More information on the sites that we visited can be found in appendix III. As of February 2001, DOE was not installing systems in 104 buildings because the U.S. project teams did not have physical access to the buildings. These buildings, mostly located at Russian nuclear weapons laboratories, contain hundreds of metric tons of nuclear material. According to DOE officials, physical access is needed to (1) confirm the type of material to be protected, (2) design systems that provide adequate security for the material to be protected, (3) ensure that equipment is installed properly, and (4) ensure that the sites operate the systems properly and use equipment for the intended purpose. MINATOM is reluctant to grant DOE project teams physical access to the buildings because of Russian national security concerns and Russian laws on the protection of state secrets. For example, rather than allow project teams into buildings where they can determine what security systems are needed, some sites have allowed the project teams only to view the site perimeters. 
Consequently, the project teams do not obtain enough information on the buildings—for example, information on the type of material and how easy it would be to convert the material into a nuclear weapon—which determines the type of security systems that DOE would install. Because it lacked physical access, in September 1999, DOE suspended new work at six of the nuclear weapons laboratories—Sarov, Snezhinsk (also known as Chelyabinsk-70), and the four nuclear weapons assembly and disassembly sites. Table 2 shows the status of DOE’s physical access to buildings by program sector. In January 2000, DOE issued new guidance to project teams on access to sites. Under the new guidance, physical access is still the preferred means to identify nuclear material that needs protection and to design and install security systems. However, if the Russian site officials do not grant physical access to the project team, DOE officials may pursue alternative means of providing assurances if the alternatives are acceptable to site officials and DOE approves of the alternative. According to the guidance, alternative means of providing assurances may include a combination of photographs and videotapes of areas before and after the installation of security systems, a visual inspection by a single member of the project team, and written certifications by site directors. Once DOE approves the alternate means for providing assurances, it is incorporated into the access provisions that become part of the contract with the site for installing security systems. According to a DOE official, DOE pays only for work performed under the contract once it receives the assurances obtained as stipulated in the access provisions of the contract. DOE officials are currently testing this approach in pilot programs with Sarov and Snezhinsk for work at sensitive buildings at the sites but it has not yet reached any such agreements under the new access guidance. DOE has also reached a draft agreement with MINATOM to provide program personnel with greater access to sensitive MINATOM sites. This agreement is undergoing interagency review in the executive branch. According to DOE, while some of the more sensitive areas at MINATOM’s nuclear facilities may remain inaccessible to program personnel, this agreement will allow the program to further expand its work once it is concluded. DOE has not established a means to systematically measure the effectiveness of the security systems that it has installed at Russian nuclear sites. The Technical Survey Team’s and our observations provide only a snapshot of how effectively the installed systems are reducing the risk of nuclear material theft in Russia. The new security systems’ ability to reduce the risk of theft also depends on whether the site personnel operate the systems on a continuing basis; follow administrative procedures associated with controlling access to material; maintain systems such as alarms, sensors, and television surveillance cameras; and test equipment and procedures periodically. In 1997, DOE asked Lawrence Livermore National Laboratory to develop measures to determine the systems’ effectiveness. Lawrence Livermore ultimately developed a measurement system that looked at 30 elements that make up an effective security system, such as access controls, intrusion detection, the testing of electronic security and alarm systems, and the functioning of the guard forces. 
The measurement system was designed to provide a baseline to measure progress; identify weaknesses in installed systems; and monitor, on a continuing basis, the functioning of the systems. However, according to a DOE official, this measurement system was not adopted because it was too complex and time-intensive to implement. DOE is currently collecting from individual sites information that would be useful in measuring the new systems' effectiveness. Project teams make visits to sites and observe systems that have been installed. At certain sites, DOE has contracts with the Russian sites to collect information on the functioning of equipment such as nuclear material portal monitors, which can indicate how often the system has been operating and whether any problems have caused it to malfunction or be turned off. In addition, before installing security systems, DOE and Russian site officials conduct vulnerability assessments, which assess the ability of the existing nuclear security systems at the sites to prevent nuclear material theft. DOE officials also conduct joint visits to the sites with Gosatomnadzor (GAN)—the Federal Nuclear Radiation Safety Authority—and MINATOM officials to observe informal functional testing of such systems as alarms and sensors, and to discuss the operations of the systems with site personnel. DOE is providing sites in Russia with assistance to operate and maintain the new security systems after they are installed. DOE also has projects under way with MINATOM and GAN to develop nuclear material security regulations and enforcement, establish nuclear material security training centers, and install security improvements for trains and trucks that transport nuclear material between and within sites. While DOE has made progress on these projects, DOE does not expect to complete them before 2020. The Department initially planned to assist each site for up to 3 years after the installation of the security systems, but it currently anticipates that some sites will require assistance for longer periods because of poor economic conditions, while other sites may require less assistance. DOE is assisting Russian sites with the long-term operations and maintenance of new security systems after the complete systems are installed. DOE refers to this as operational assistance; it includes the following: Warranties, maintenance, and spare parts that provide the sites with the ability to repair and replace system elements. Training of site personnel on how to operate and maintain equipment. Writing of procedures that instruct site personnel on how to control access to nuclear material, track nuclear material inventories and transfers made among buildings, and otherwise operate the installed systems. According to DOE officials, operational assistance is necessary because the Russian sites where DOE helped install nuclear security systems lack the financial resources, adequately trained staff, and the knowledge of procedures to operate and maintain the systems effectively. For example, many of the sites cannot afford the warranties, parts, or technical support necessary to ensure that the new systems are fully operational. At six of the nine sites we visited, Russian officials stated that without assistance, operating the systems would be difficult. 
Russian and DOE officials said that while sites would still attempt to operate the equipment if assistance were no longer available, the level of operation and maintenance would be reduced, leaving material more vulnerable to theft. In addition to providing operational assistance for sites with completed security systems, DOE officials are modifying the design and installation of security systems at sites where work is ongoing to minimize the amount of operational assistance that these sites will require once their systems are complete. For example, project teams are designing systems that use equipment produced in Russia rather than foreign-made equipment because Russian equipment may be easier for the sites to service and replacement parts may be more readily available. In addition, when designing security systems, project teams are considering how the sites will be able to integrate the systems into the sites’ activities, for example, by considering how many people enter and exit the sites each day when deciding where to place nuclear material portal monitors. In addition to operational assistance to sites, DOE is assisting Russia with developing regulations and enforcement activities for nuclear material security, developing a national inventory of nuclear material, training personnel on nuclear material security, and improving the security of nuclear material while in transit. The two primary recipients of this assistance, which DOE refers to as national infrastructure assistance, are MINATOM and GAN. DOE is assisting both organizations with writing regulations and developing inspection systems for sites under their control. Currently, about half the necessary nuclear material security regulations have been developed, and DOE anticipates it will be several more years before all the necessary regulations are in place and adopted. Additionally, DOE is supporting GAN’s inspection and enforcement role by training GAN inspectors on how to carry out their responsibilities, providing equipment that the inspectors use to take measurements of the nuclear material when they go to sites, and conducting joint site visits with DOE project teams to ensure that the inspectors understand their roles and responsibilities. DOE is providing MINATOM with assistance to develop a national nuclear material inventory, which is required under Russia’s new regulations. This requirement is an important element in strengthening nuclear material security in Russia. By requiring sites to make inventory information available to a national database on a periodic basis, the Russian government can improve its ability to track the location, type, and quantity of material at its nuclear facilities and detect possible thefts. Currently, 20 percent of the sites with weapons-usable nuclear material in Russia are reporting inventory information to the national database, and DOE officials expect that it will be at least 3 more years before all sites are reporting some level of data. In addition to regulatory and enforcement activities, DOE is also supporting the development of nuclear material training centers in Russia. For example, DOE is supporting two centers that train personnel on how to operate and maintain the systems. The Russian Methodological Training Center specializes in material control and accounting training, and the Interdepartmental Special Training Center specializes in physical protection training. 
DOE is also supporting a 2-year graduate program in nuclear material security at the Moscow Engineering Physics Institute for site managers and nuclear security officials. DOE is also providing physical protection systems for the trucks and rail cars used in transporting nuclear material. The trucks and rail cars can handle large bulletproof containers equipped with security locks used to carry nuclear material while in transit. The containers are difficult to steal because they are heavy and require cranes for loading on and off the trucks and rail cars. DOE is also supporting other national efforts, such as the provision of materials to be used at sites to calibrate equipment. DOE plans to assist every site to ensure the long-term operation of nuclear security systems after their installation. DOE has limited information on how much assistance each site requires because it has not conducted a programwide assessment of the cost of operating and maintaining the systems and the sites’ ability to cover these costs. Furthermore, DOE only recently began providing completed sites with operational assistance and has limited experience in gauging how much assistance these sites or others will need and for how long. DOE officials initially estimated that sites would require operational assistance for up to 3 years after the new security systems’ installation. However, on the basis of the experience at the sites where the installation of security systems is complete, DOE officials now anticipate that some sites will require assistance for longer periods of time. This shift to support the systems for a longer period than originally anticipated is due to several factors, including (1) the poor economic conditions at some sites, (2) the sites’ need for technical assistance to operate some of the installed equipment, and (3) the low priority that some sites attach to nuclear material security. To determine the amount and type of assistance that is needed, DOE officials are surveying six of the completed civilian sites with regard to their need for spare parts, warranties, procedures, training, and operational funding. On the basis of the results of the survey and discussions with the sites, DOE will determine what type of assistance the sites need to ensure that the systems are properly operated. However, DOE officials have not surveyed other sites to determine what their current security system costs are and whether they have the financial and technical resources to maintain the newly improved systems. Some of these sites where DOE is still installing systems are larger and in better financial condition than the six sites in the study. Because larger sites may have more resources and greater potential to generate revenue, the level of assistance will differ from that required at smaller sites with more limited resources and income potential. DOE estimates that it will complete the Material Protection, Control, and Accounting program in 2020 at a total cost of about $2.2 billion. However, DOE officials said that the cost estimate and time frame are uncertain because DOE faces challenges in implementing the program. For example, DOE’s initiative to consolidate the number of buildings and sites that contain nuclear material could reduce the cost of completing the program, but the initiative is encountering obstacles because MINATOM has not identified which buildings and sites it plans to close. 
DOE estimated in 1995 that it would spend $400 million through fiscal year 2002 to finish installing the nuclear material security systems. Since 1995, the scope of the Material Protection, Control, and Accounting program has expanded. In response to our March 2000 recommendation to develop a new cost estimate and time frame for completing all the elements of the expanded program, the Department now estimates that it will complete the installation of security systems in 2011 and continue to provide assistance through 2020 at a total cost of $2.2 billion. The 1995 estimate included the cost to install upgrades at buildings in Russia and other newly independent states of the former Soviet Union. The current estimate includes the following: $823.1 million to complete the installation of nuclear material security systems in 288 buildings in Russia by fiscal year 2011. This includes $74.9 million to complete Navy sites by fiscal year 2004, $212.7 million to complete civilian sites by fiscal 2008, and $535.5 million to complete the nuclear weapons laboratories by fiscal 2011. $711.8 million to support the long-term operation and maintenance of the systems through fiscal year 2020, including operational assistance to sites as well as assistance to the federal agencies that regulate and enforce nuclear material security. $387.2 million through fiscal year 2010 on an initiative to reduce the number of buildings and sites that contain nuclear material by consolidating Russia's nuclear material into fewer buildings and converting some of the material into a form that cannot be used for weapons. $241.3 million through fiscal year 2020 for program management, which includes the cost of the program's financial management system, compliance with export controls, contract management, travel coordination, administrative and secretarial support, and the Technical Survey Team. The difference between the 1995 estimate and the current estimate is based on changes in DOE's assumptions about the scope of the nuclear material security problem in Russia, in particular, a threefold increase in the number of buildings in Russia where DOE is installing security systems. In addition, DOE officials' initial assumption that Russia would reach a level of economic stability by 2000 to support the long-term operation and maintenance of the security systems did not materialize. DOE officials found that the economic decline culminating in the August 1998 collapse of the Russian economy adversely affected the ability of Russian sites to commit the necessary resources to fully sustain the security systems. Similarly, DOE officials found that Russia needs assistance beyond installing security systems, such as assistance with developing nuclear security regulations and enforcement capabilities. Consequently, DOE officials now assume that Russia will achieve the economic and political stability to operate and maintain the nuclear material security systems by 2015 and that DOE will gradually phase out assistance from 2015 through 2020. Finally, the limited access to sensitive buildings that MINATOM has given to DOE's project teams has caused delays in the plan to complete the installation of security systems by fiscal year 2002. In developing the time frames for completing the program by 2020, DOE officials took into account several factors that limit how quickly it would be able to install security systems. 
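As a quick check, the sketch below sums the four cost components listed above; the amounts, in millions of dollars, are taken from the current estimate and total about $2.2 billion, matching DOE's reported figure.

```python
# Quick check that the component estimates above account for DOE's
# reported $2.2 billion total (amounts in millions of dollars, from the report).
components = {
    "install security systems (through FY 2011)": 823.1,
    "long-term operation and maintenance (through FY 2020)": 711.8,
    "material consolidation and conversion (through FY 2010)": 387.2,
    "program management (through FY 2020)": 241.3,
}
total = sum(components.values())
print(f"${total:,.1f} million, or about ${total / 1000:.1f} billion")
# -> $2,163.4 million, or about $2.2 billion
```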
In particular, DOE’s time frame estimates take into account Russia’s short construction season due to weather conditions, the sites’ ability to provide the personnel to install the systems, and the time needed to negotiate access to sensitive sites. DOE officials also assumed that the portion of the Department’s budget devoted to improving security at the 40 nuclear sites would increase from about $118 million in the fiscal year 2001 budget to $155 million in the fiscal 2005 budget. According to a DOE official, if the program’s funding were to remain at current levels, it will take at least 4 additional years to install security systems at Russian sites (from 2011 to 2015). Figure 4 shows DOE’s yearly spending estimates for fiscal years 2001 through 2020. DOE officials expressed uncertainty about the cost estimate and time frame for completing the program because of a number of issues, including the lack of access to sensitive sites and DOE’s limited experience in some types of assistance that it is providing. DOE officials said that the greatest uncertainty in the cost estimate and time frame for completing the installation of security systems stems from the lack of access to sensitive sites, in particular, the nuclear weapons laboratories. In contrast, DOE officials have the most confidence in the cost estimates for sites where its project teams have good access for designing and installing the systems, such as most civilian and Russian Navy sites. The lack of access creates uncertainty because project teams do not know how many buildings at the nuclear weapons laboratories require security systems or when they will be able to start and complete the installation of security systems. The number of buildings is a major factor in the cost of improving security at a site because each building requires that the project team design and install a unique security system. Some of the nuclear weapons laboratories may have more buildings than DOE officials have assumed, and others may have fewer. DOE officials are also uncertain of the cost estimate for installing security systems because project teams have less experience in installing and developing cost estimates for security systems at the large and complex buildings in the nuclear weapons laboratories that enrich uranium or reprocess plutonium for use in weapons. Although DOE has installed security systems for buildings where Russian civilian sites work with nuclear material, the buildings where the weapons laboratories work with nuclear material are much larger. Therefore, DOE cannot assume that the cost of installing security systems at buildings in the weapons laboratories is about the same as it is at civilian sites. Another source of uncertainty in the program’s cost estimate for completing the program stems from DOE’s limited experience in providing operational assistance to sites and assistance to Russia’s regulatory and enforcement agencies. On the basis of its limited experience in providing a handful of small completed civilian sites with operational assistance, DOE officials used generic assumptions about how much assistance it would provide at each site after installing nuclear security systems rather than developing individual estimates for each of the sites. At most sites, DOE officials anticipate that the Department will provide operational assistance, at gradually declining levels, through 2020. 
Similarly, DOE officials regard their assistance to Russia's nuclear regulatory and enforcement agencies as a long-term effort to continue through 2020, but DOE has not yet completely determined what the assistance will consist of beyond its plans for the next few years. DOE plans to update its cost estimate and time frame for completing the program annually. DOE officials said that they would develop more confidence in their estimates as they gain more experience in the areas where there is currently more uncertainty. For example, DOE officials expect to complete the installation of security systems at two sensitive uranium-processing sites where project teams have physical access in fiscal year 2001. After completing these two sites, DOE will have a better basis to estimate the costs of installing systems at large processing buildings in the nuclear weapons laboratories. Similarly, DOE is just beginning to implement a pilot project to negotiate alternatives to physical access at sensitive buildings at two nuclear weapons laboratories. The outcome of the pilot project will help DOE officials make better assumptions about the process of gaining access to buildings in the rest of the nuclear weapons laboratories. DOE is in the process of developing a strategic plan for the program that ties together the program's goals, priorities, and strategies for reducing the risk of theft in Russia with the program's costs and time frames for completing the program. Such a plan could provide DOE managers with guidance as they adjust the implementation of the program to take into account changes in time frames for installing systems and the amount of access DOE project teams may have to buildings. According to a DOE official, the plan, when completed in April 2001, will tie together the cost estimate and time frame for completing the program with a revised version of the Guidelines for Material Protection, Control, and Accounting Upgrades at Russian Facilities which, among other things, sets out the program's goals, priorities, and strategies for installing security systems that reduce the risk of theft at Russian sites. Under the Material Consolidation and Conversion initiative, one of DOE's strategies for completing the program is to reduce the number of buildings and sites that contain nuclear material and need security systems. DOE's cost estimate and time frame for completing the program sets a goal of closing 50 buildings and five sites by 2010. Under the initiative, the reduction would take place by consolidating nuclear material into fewer buildings and sites and converting 24 metric tons of highly enriched uranium, or about 3 percent of the estimated 603 metric tons of weapons-usable nuclear material in Russia, into low enriched uranium that cannot be used for weapons. DOE estimates that the Material Consolidation and Conversion initiative will cost $387.2 million through fiscal year 2010. According to DOE, about three-quarters of the material to be converted will be uranium enriched to 85 percent in the isotope U-235. DOE officials told us that by converting this material, risk will be reduced for material that is some of the most attractive to theft in Russia. The initiative would also reduce program costs by eliminating the need to install security systems at buildings slated for closure or, if the systems are already installed, to provide assistance for their operation and maintenance. In addition, the initiative would completely eliminate the risk of theft at the buildings and sites that no longer contain nuclear material. However, the initiative has had limited success since its inception in 1999. 
In particular, the Material Consolidation and Conversion initiative has not resulted in the complete removal of weapons-usable nuclear material from any buildings or sites. The program has had more success at removing materials from buildings at sites that are not in the initiative. DOE has helped Russian facilities consolidate materials into fewer buildings at the State Research Institute, Scientific Industrial Association; the Institute of Physics and Power Engineering; Dmitrovgrad; Novosibirsk; and several of the Russian Navy's nuclear fuel storage sites. At one site participating in the initiative, DOE had expected two buildings to be emptied of highly enriched uranium that would be provided for conversion to low enriched uranium. However, both of the buildings still contained nuclear material when we visited the site in October 2000, and site officials told us that they do not plan to provide material for conversion under the initiative for the next 2 to 3 years. We also met with officials at the State Research Institute, Scientific Industrial Association (also known as Luch)—one of the two sites that is converting highly enriched uranium to low enriched uranium. These officials told us that they are encountering difficulties in obtaining highly enriched uranium for conversion because Russian sites believe they will receive more money and support from DOE by retaining their weapons-usable nuclear material. As of December 2000, the initiative resulted in the conversion of about 1.6 metric tons of highly enriched uranium. DOE officials have also successfully negotiated verification measures with both of the sites that are converting the material to provide assurances that the sites actually convert highly enriched uranium to low enriched uranium that cannot be used for weapons. However, DOE's initiative has not yet resulted in the closure of any buildings or sites; therefore, DOE officials are not sure of the extent to which the initiative will result in an overall cost savings to the program. Furthermore, while material conversion is reducing the proliferation risk for the material converted to low enriched uranium, it is not reducing the risk of theft at the buildings and sites that are contributing the highly enriched uranium because those buildings and sites still contain weapons-usable nuclear material and still require nuclear security systems. Given the lack of progress in closing buildings and sites, DOE officials said that they are reevaluating whether to continue with material conversion. DOE officials said that the initiative's primary goal is to reduce the risk of nuclear material theft and that they favor continuing the material conversion even if it does not result in the closure of any buildings or sites because the risk of theft for the material that is converted would still be eliminated. DOE is improving the security of 192 metric tons of weapons-usable nuclear material in Russia by installing modern security systems that detect, delay, and respond to attempts to steal nuclear material. These systems, while not as stringent as those installed in the United States, are designed to reduce the risk of nuclear material theft at Russian sites. While Russia and the United States have worked cooperatively to reduce the risk of theft in Russia, Russian officials' concerns about divulging national security information continue to impede DOE's efforts to install systems for several hundred metric tons of nuclear material at sensitive Russian sites. 
Continued progress in reducing the risk of nuclear material theft in Russia hinges on DOE’s ability to gain access to Russia’s sensitive sites and reach agreement with MINATOM to reduce the number of sites and buildings where nuclear material is located. Achieving these two goals would improve security for large amounts of nuclear material and reduce program costs. Regarding the systems that are already installed, DOE currently does not have a means to periodically monitor the systems to ensure that they are operating properly on a continuing basis. Such a mechanism would provide DOE officials with increased confidence that the security systems are reducing the risk of nuclear material theft. The fact that DOE is developing a strategic plan that ties together the program’s goals, priorities, and strategies for reducing the risk of theft in Russia with the cost and time frames estimate is a positive step forward. Such a plan will provide DOE managers with guidance as they adjust the implementation of the program to take into account the changes in the time frames for installing systems and the amount of access that DOE project teams may have to buildings. We believe that the plan developed by DOE should provide an estimate of how much sustainability assistance is required on the basis of an analysis of the costs to operate and maintain the systems and the sites’ ability to cover these costs. In addition, the plan should provide options for completing the program on the basis of the progress made on gaining access to sensitive sites and the closure of buildings and sites. In order to assist DOE in its mission of promoting nuclear nonproliferation and reducing the danger from weapons of mass destruction, we recommend that the Administrator of the National Nuclear Security Administration develop a system, in cooperation with the Russian government, to monitor, on a long-term basis, the security systems installed at the Russian sites to ensure that they continue to detect, delay, and respond to attempts to steal nuclear material and include in the strategic plan being developed by DOE (1) an estimate of how much sustainability assistance is required on the basis of an analysis of the costs to operate and maintain the systems and the sites’ ability to cover these costs and (2) options for completing the program on the basis of the progress made in gaining access to sensitive sites and on the closure of buildings and sites. In commenting on a draft of our report, DOE generally agreed with our findings and concurred with our recommendations. In its comments, DOE stated that in addition to the amount of nuclear material that received the completed and partially completed security systems cited in the report, the program has work under way on an additional 130 metric tons of nuclear material. We incorporated this fact into the report where appropriate. DOE also stated that it has work under way to improve security at 42 nuclear weapon sites that contain about 260 metric tons of material. As discussed in our report, the scope of our work includes DOE’s assistance to improve the security of weapons-usable material controlled by Russia’s civilian authorities, nuclear weapons laboratories, and the naval nuclear fuel storage facilities. Appendix I discusses the status of DOE’s nuclear weapons security work, and we have added the fact that the 42 sites contain about 260 metric tons of nuclear material into appendix I where appropriate. 
DOE also noted in its comments that it has recently reached a draft agreement with MINATOM to provide DOE personnel with greater access to sensitive MINATOM sites. This agreement is undergoing interagency review with the executive branch. According to the Department, while some of the more sensitive areas at MINATOM’s nuclear sites may remain inaccessible to program personnel, this agreement will allow the program to expand its work once it is concluded. We incorporated this information into the report where appropriate. The scope of our review includes DOE’s assistance to improve the security of weapons-usable nuclear material controlled by Russia’s civilian authorities, nuclear weapons laboratories, and Navy nuclear fuel storage facilities. We reviewed DOE’s program to (1) install nuclear security systems at sites; (2) assist sites with the long-term operation of the installed systems; (3) support the development of regulations and the enforcement of nuclear material security, nuclear material security training centers, and security improvements to trains and trucks used to transport nuclear material between and within sites; and (4) reduce the number of buildings and sites that contain nuclear material through consolidation and conversion. To meet our objectives, we analyzed DOE’s program documents, including the Technical Survey Team’s assessments of the status of nuclear security efforts at sites and their compliance with DOE’s guidance. At the nine sites we visited in Russia, we observed nuclear security systems and spoke with Russian officials responsible for working with DOE project teams to install and operate the systems. We also met with MINATOM and GAN officials to discuss the overall status of cooperation to improve nuclear material security in Russia. In addition, we met with DOE project teams to discuss their efforts to improve nuclear material security. We analyzed information from DOE on the number of buildings where the installation of nuclear material security systems is complete, the number where systems are currently being installed, and the number of buildings where work has yet to be initiated. We met with DOE officials in charge of managing the program to discuss DOE’s policy on access to sensitive Russian sites and how DOE measures the effectiveness of the nuclear security systems. We analyzed DOE’s assistance to sites to support the operation of the nuclear material security systems and assistance to the federal agencies that regulate and enforce nuclear security by reviewing program documents, meeting with DOE officials, and discussing the need for long- term support with Russian officials. We analyzed DOE’s cost estimate and time frame for completing the program, including the estimate for completing the installation of nuclear security systems and helping sites operate the systems after their installation. We met with DOE officials to discuss the methodology for developing the cost estimate and time frame and their assumptions about key factors influencing the estimate. We reviewed the status of the Material Consolidation and Conversion initiative by analyzing DOE documents; meeting with DOE officials responsible for the initiative; and discussing the initiative with MINATOM, GAN, and Russian site officials. We obtained the program’s budget, obligation, and expenditure data through fiscal year 2000 from DOE. 
We did not independently verify the quality or accuracy of the financial data that program managers and laboratory personnel provided, but we compared the data with DOE's Program Management Information System and found that the two sources matched. We interviewed officials from DOE's Office of International Materials Protection and Emergency Cooperation and from the national laboratories, including Brookhaven, Lawrence Livermore, Los Alamos, Oak Ridge, Pacific Northwest, and Sandia. We conducted our review from April 2000 through February 2001 in accordance with generally accepted government auditing standards. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies of this report to the Honorable Spencer Abraham, Secretary of Energy; the Honorable Colin L. Powell, Secretary of State; the Honorable Donald H. Rumsfeld, Secretary of Defense; the Honorable Mitchell E. Daniels, Director, Office of Management and Budget; and interested congressional committees. We will make copies available to others on request. In 1999, at the request of the Russian Navy, the Department of Energy (DOE) began installing security systems to protect the Russian Navy's nuclear weapons. This work is being done under the Department's Material Protection, Control, and Accounting program. U.S. officials are concerned about the security of nuclear weapons in Russia. Although there have been no known incidents, concerns exist that a Russian nuclear warhead could be lost or stolen. Under the program, DOE is installing security components, such as fences, strengthened vault doors, sensors for the fences and doors, access control systems, strengthened guard towers, video surveillance equipment, and radio communication equipment for the response forces at 42 Russian naval sites where nuclear weapons are stored. According to DOE, the 42 sites contain about 260 metric tons of nuclear material. DOE officials estimate that this work will cost about $474.8 million—$336.8 million for the installation of security systems at the 42 sites by the end of fiscal year 2004, and $138.0 million for long-term operational assistance for the 42 sites through fiscal 2020. As of January 2001, DOE had begun installing the systems at 41 of the 42 sites. DOE installs the systems in two phases. During the first phase, DOE (1) installs security components that are intended to quickly improve the sites' ability to protect their weapons, such as fences, vehicle barriers, strengthened doors, and mechanical locks, (2) bricks up windows at storage buildings, and (3) strengthens the guard towers on site. In phase two, DOE installs additional components, such as communication systems, interior and exterior detection and assessment systems, and access-delay systems, which provide greater protection for the weapons. As of January 2001, DOE had completed the first phase of security improvements at 19 sites and the second phase improvements at 1 site. The Russian Navy has provided the project teams with limited access to the sites. According to a DOE official, project team members have been granted physical access to seven sites. For the other sites where DOE has done work, the Russian Navy has allowed team members to view the sites from a distance, for example, allowing them to drive by a site, park at the site to view it, or walk up to the site's perimeter. 
DOE obtains confirmation that the equipment has been installed and is being used as intended through photographs of the site after the work is complete, during site visits by project team personnel, and through written certification by the Russian Navy. The cost of the first phase of security improvements is approximately $475,000 for each site, while the cost for the more comprehensive improvements is estimated to be about $8 million per site. In its cost estimate for the Russian Navy’s nuclear weapons sites, DOE officials also anticipate that each site will require about $300,000 per year in long-term operational assistance after the systems are installed, with the amount required diminishing over time. DOE, however, does not know how many years of long-term operational assistance will be required. While DOE estimates that it will complete the installation of security systems at the 42 known sites by the end of 2004, the Russian Navy has indicated that it would also like improved security systems installed at other locations, which could expand the program further. As of January 2001, however, the Navy had not specifically identified additional sites. Northern Fleet Storage Facility (Site 49) is located within the Russian Federation Naval Base at Severomorsk, about 9 miles northeast of Murmansk on the Kola Peninsula. Site 49 is the primary land-based storage facility for reactor fuel assemblies used by the Russian Northern Fleet naval vessels and holds tens of metric tons of weapons-usable nuclear materials. DOE helped install nuclear security systems and provided assistance to expand the storage bunker for the reactor fuel assemblies, which allowed the Northern Fleet to consolidate all of its fresh nuclear fuel at the site. DOE began work to improve the nuclear security at Site 49 in May 1996 and completed the installation of security systems in September 1999. The Krylov Shipbuilding Institute is located in St. Petersburg and employs over 3,000 scientists and support staff. The Institute’s nuclear facility has a research reactor and three critical assemblies containing hundreds of kilograms of weapons-usable nuclear material. DOE began installing physical protection and material control and accounting systems at the site in April 1997 and completed the work in November 1998. The Kurchatov Institute is located in Moscow, about 10 miles from the Kremlin. Founded in 1943 as the Soviet Union’s first nuclear weapons research site, the Institute is an independent laboratory under the direct authority of the Russian government. The Institute’s research activities include the design and development of nuclear reactors for the Russian Navy, for the Russian icebreaker fleet, and for space applications. The Institute operates 6 research reactors and 14 critical assemblies, and has three storage facilities containing several metric tons of nuclear material. DOE began installing security systems at the Institute in August 1994. The Petersburg Nuclear Physics Institute is located in the town of Gatchina, about 30 miles south of St. Petersburg. The Institute is operated by the Russian Academy of Sciences and has one operating nuclear research reactor, one reactor under construction, one critical assembly, and a vault to store reactor fuel with hundreds of kilograms of nuclear material. DOE installed the new security systems at the site from February 1996 to May 1998. 
The Institute of Physics and Power Engineering is operated by Russia's Ministry of Atomic Energy and is located in the city of Obninsk, about 66 miles southwest of Moscow. The Institute is involved in the research and development of nuclear power reactors and employs about 5,000 people. The Institute possesses several metric tons of weapons-usable nuclear material. DOE began installing security systems at the Institute in September 1994 and is installing nuclear security systems in 11 buildings as well as in the central alarm station. DOE's project team also worked with the site to reduce the number of buildings that contain weapons-usable nuclear material from 22 to 7. The A.A. Bochvar All-Russian Scientific Research Institute of Inorganic Materials is located in northwest Moscow and is adjacent to the Kurchatov Institute. The Bochvar Institute was established in 1945 and conducted research for the Soviet Union's nuclear weapons program. The Institute, operated by Russia's Ministry of Atomic Energy, currently conducts research on nuclear fuel, including mixed-oxide fuel in support of Russia's plutonium disposition program, and employs about 1,300 people. Bochvar has several hundred kilograms of weapons-usable nuclear material on site. DOE began work at Bochvar in December 1997 but was limited by the site to installing material control and accounting systems until 1999, when the site agreed that DOE could begin installing physical protection systems. The State Research Institute, Scientific Industrial Association (also known as Luch) is located about 22 miles south of Moscow. Luch is operated by Russia's Ministry of Atomic Energy and is involved in developing space and mobile reactors, including the TOPAZ reactor used in Russian satellites. DOE started work at Luch in late 1995 and is installing nuclear security systems in five buildings containing nuclear material and in a central alarm station. Luch, which has several metric tons of weapons-usable nuclear material on site, has consolidated the number of buildings where the material is located from 28 to 4. DOE is also contracting with Luch to convert highly enriched uranium to low enriched uranium under the Material Protection, Control, and Accounting program's Material Consolidation and Conversion initiative. The Lytkarino Research Institute of Scientific Instruments is located about 31 miles southeast of Moscow and is operated by the Ministry of Atomic Energy. The Institute is the primary organization in Russia for radiation resistance testing of materials, electronics, and electronic systems. DOE has worked with the Institute since September 1997 to install nuclear security systems in three buildings, including two containing nuclear materials and one central alarm station. The Institute contains hundreds of kilograms of weapons-usable material and participates in the program's Material Consolidation and Conversion initiative. The Moscow Engineering Physics Institute is a large university located in southeast Moscow. The Institute specializes in nuclear physics research and training and operates a research reactor using highly enriched uranium. The Institute has a small quantity of weapons-usable nuclear material on site. DOE worked with the Institute to install physical protection and material control and accounting systems in three buildings containing nuclear material and a central alarm station. DOE also supported the development of a graduate degree program in nuclear material security at the Institute. 
DOE began installing security systems at the site in February 1996 and completed the work in June 1998. From fiscal year 1993 through fiscal 2000, DOE spent $557.9 million on the Material Protection, Control, and Accounting program in Russia. As figure 5 shows, DOE spent $351.8 million, or 63 percent of the $557.9 million, on installing nuclear security systems at Russia's civilian sites, nuclear weapons laboratories, Navy nuclear fuel sites, and Navy nuclear weapons sites. DOE spent the remainder of the $557.9 million on operational and national infrastructure assistance, the Material Consolidation and Conversion initiative, and program management. For fiscal year 2000, DOE received an appropriation of $150 million for the program. The amount available for nuclear security assistance to Russia was reduced to $140.5 million by a general reduction of about $4.8 million to reduce the amount that DOE national laboratory personnel spend on travel and the number of national laboratory personnel on temporary assignment to the Washington, D.C., metropolitan area; a rescission of about $0.6 million as part of an omnibus appropriations act; a reprogramming of about $3 million to allow DOE to hire more federal managers for the program; and DOE's allocation of $1.2 million for International Emergency Cooperation, a related program that is included in the 20-year plan for completing the Material Protection, Control, and Accounting program but that is a separate program for assisting other countries in cases of nuclear accidents, nuclear smuggling, or terrorist incidents. DOE also had a carryover of $85.5 million from fiscal year 1999, which brought the program's total fiscal year 2000 budget to $226 million. As of September 30, 2000, DOE had spent $138.7 million of its fiscal year 2000 budget, and it carried over $87.3 million into the program's fiscal 2001 budget. DOE's national laboratories obligated $59.4 million of the $87.3 million as of the end of fiscal year 2000. DOE had plans for the national laboratories to use the remaining $27.9 million to implement specific nuclear security projects, but the laboratories had not yet obligated these funds as of the end of the fiscal year.
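As a quick check, the sketch below reconciles the fiscal year 2000 funding figures cited above. The amounts, in millions of dollars, are taken from the report; small differences reflect rounding in the reported figures.

```python
# Arithmetic reconciliation of the program's fiscal year 2000 funding
# (millions of dollars, from the report); differences of about $0.1 million
# reflect rounding in the reported figures.
appropriation = 150.0
reductions = 4.8 + 0.6 + 3.0 + 1.2   # general reduction, rescission,
                                     # reprogramming, emergency-cooperation allocation
print(round(appropriation - reductions, 1))  # 140.4 (reported as about $140.5 million)

total_budget = 140.5 + 85.5          # available funds plus FY 1999 carryover
print(total_budget)                  # 226.0

carryover_to_fy2001 = total_budget - 138.7   # less amount spent by Sept. 30, 2000
print(round(carryover_to_fy2001, 1))         # 87.3

unobligated = carryover_to_fy2001 - 59.4     # less amount obligated by the laboratories
print(round(unobligated, 1))                 # 27.9
```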
The strategic plan developed by DOE should provide an estimate of how much sustainability assistance is required on the basis of an analysis of the costs to operate and maintain the systems and the sites' ability to cover these costs. In addition, the plan should provide options for completing the program on the basis of the progress made on gaining access to sensitive sites and the closure of buildings and sites. |
Joint STARS is a joint Air Force and Army wide-area surveillance and target attack radar system designed to detect, track, classify, and support the attack of moving and stationary ground targets. This $11 billion major defense acquisition program consists of air and ground segments— refurbished 707 aircraft (designated the E-8) equipped with radar, operation and control, data processing, and communications subsystems, together with ground stations equipped with communications and data processing subsystems. Low-rate initial production (LRIP) of the Joint STARS aircraft began in fiscal year 1993. In line with 10 U.S.C. 2399, DOD’s final decision to proceed beyond LRIP first required the DOD Director of Operational Test and Evaluation (DOT&E) to submit a report to Congress, referred to as the Beyond LRIP report, stating whether (1) the test and evaluation performed was adequate and (2) testing demonstrated that the system is effective and suitable for combat, that is, operationally effective and suitable. The Joint STARS aircraft was scheduled to begin its initial operational test and evaluation—referred to as the Joint STARS multi-service operational test and evaluation—in November 1995. That testing was delayed and then changed because of the deployment of Joint STARS assets to the European theater to support Operation Joint Endeavor in Bosnia. The Air Force Operational Test and Evaluation Center (AFOTEC) and the U.S. Army Operational Test and Evaluation Command conducted a combined development and operational test of Joint STARS from July through September 1995 and an operational evaluation of the system during Operation Joint Endeavor from January through March 1996. Two Air Force Joint STARS aircraft and 13 Army Joint STARS ground station modules were deployed to support Operation Joint Endeavor and operationally evaluated from January through March 1996. After analyzing the data from the combined development and operational test and the operational evaluation performed during Operation Joint Endeavor, AFOTEC issued its Joint STARS multi-service operational test and evaluation final report on June 14, 1996. DOT&E staff analyzed the same and additional data and the Director issued his Beyond LRIP report to Congress on September 20, 1996. On September 25, 1996, the Under Secretary of Defense for Acquisition and Technology signed an acquisition decision memorandum approving the Joint STARS program’s entry into full-rate production with a total planned quantity of 19 aircraft. LRIP of the Joint STARS aircraft began in fiscal year 1993. By statute, 10 U.S.C. 2399, the “Secretary of Defense shall provide that a major defense acquisition program may not proceed beyond low-rate initial production until initial operational test and evaluation of the program is completed.” Operational test and evaluation is the primary means of assessing weapon system performance in a combat-representative environment. It is defined as the (1) field test, conducted under realistic combat conditions, to determine an item’s effectiveness and suitability for use in combat by typical military users and (2) evaluation of the results of such a test. If used effectively, operational test and evaluation is a key internal control measure to ensure that decisionmakers have objective information available on a weapon system’s performance, thereby minimizing risks of procuring costly and ineffective systems. 
Joint STARS was moved from low-rate to full-rate production even though (1) it performed poorly during both the combined development and operational test and the operational evaluation in Bosnia, (2) excessive contractor effort was needed to support Operation Joint Endeavor, (3) the suitability and sustainability of the system are questionable since it uses refurbished 25- to 30-year-old airframes, and (4) the operational software is considered significantly immature.

In DOT&E’s Beyond LRIP report, the DOT&E stated that Joint STARS had only demonstrated effectiveness for operations other than war. The report indicated that of the three critical operational issues used to judge effectiveness, only one had been demonstrated as met “. . . with limitations.” Those critical operational issues related to (1) performance of the tactical battlefield surveillance mission, that is, surveillance—“met with limitations”; (2) support of the execution of attacks against detected targets, that is, target attack support; and (3) the provision of information to support battlefield management and target selection, that is, battle management. The effectiveness critical operational issues were judged based on seven supporting measures. In its report to Congress, DOT&E listed four of those measures of effectiveness as “not met” during the system’s combined development and operational test and did not list any as having been demonstrated during the Operation Joint Endeavor operational evaluation.

Regarding the system’s suitability, the Director reported: “In the current configuration, the [Joint STARS] aircraft has not demonstrated the ability to operate at the required maximum altitude; adequate tactics, techniques, or procedures to integrate [Joint STARS] into operational theaters have not been developed; [Joint STARS] exceeded the break rate and failed the mission reliability rate during [Operation Joint Endeavor]. During [Operation Joint Endeavor], [Joint STARS] did not achieve the effective time-on-station requirement.” He concluded that without corrective actions, “[Joint STARS] would not be suitable in higher intensity conflict” and later in the report judged that the system “as tested is unsuitable.”

Analysis of DOT&E’s Beyond LRIP report indicates that not only did Joint STARS have disappointing test results but also that extensive follow-on operational testing of Joint STARS is needed. In its Beyond LRIP report, DOT&E presented a table that reported its findings of the combined development and operational test and the Joint STARS Operation Joint Endeavor operational evaluation and indicates where further testing is required. Our analysis of that table indicates that at most only 25 of 71 test criteria could be judged met. DOT&E considers 18 of those 25 to require no further testing, that is, DOT&E judges them clearly met. However, our analysis also indicates that 19 test criteria were clearly not met and that as many as 26 might not have been met. Twenty-seven of the criteria could not be determined in either the combined development and operational test or the Operation Joint Endeavor operational evaluation. Of the 71 Joint STARS operational test and evaluation criteria listed, DOT&E indicates that 53, or about 75 percent, require further testing. (These counts are tallied in the sketch following this passage.)

In addition to the above, DOT&E also noted that there were several operational features present during the Joint STARS Operation Joint Endeavor deployment that were essential to its mission accomplishment but were not included in the recent production decision. It provided two specific examples—satellite communications and a deployable ground support station. 
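The criteria counts above fit together as follows; this short sketch, in the same illustrative spirit, tallies the figures stated in the text (the grouping is our reading of the discussion, not a reproduction of DOT&E's table).

```python
# Tally of the 71 Joint STARS operational test and evaluation criteria
# discussed above, using only the counts stated in the text.

total_criteria = 71
at_most_judged_met = 25   # criteria that could be judged met, at most
clearly_met = 18          # the subset of those 25 that DOT&E judged clearly met
clearly_not_met = 19
undetermined = 27         # could not be determined in either test

# The three mutually exclusive groups account for all 71 criteria.
assert at_most_judged_met + clearly_not_met + undetermined == total_criteria

# "As many as 26 might not have been met": the 19 clear failures plus the
# 7 criteria judged met only tentatively (25 - 18).
might_not_have_been_met = clearly_not_met + (at_most_judged_met - clearly_met)
assert might_not_have_been_met == 26

# Criteria requiring further testing: everything except the 18 clearly met.
require_further_testing = total_criteria - clearly_met
assert require_further_testing == 53
share = require_further_testing / total_criteria
print(f"{require_further_testing} of {total_criteria} criteria ({share:.0%}) require further testing")
```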
DOT&E believes these features “will be a necessary part of the production decision to achieve a capable [Joint STARS] system.” It also noted the need for other features—moving target indicator clutter suppression, communications improvements, terrain masking tools for ground station module operators, and linkage to operational theater intelligence networks. Since at least two of the features present during Operation Joint Endeavor that were “essential” to its mission accomplishment have already been developed and may be needed “to achieve a capable Joint STARS system,” those features should also be tested during the planned Joint STARS follow-on test and evaluation.

The Joint STARS multi-service operational test and evaluation plan states that the test “must yield the most credible and objective results possible” and that “[a]ll facets of the test effort must operate under the rules that support total objectivity and prevents improper data manipulation.” The test plan also states that interim contractor support “will be limited to perform ground maintenance only; no in-flight support.” Regarding the Army’s ground station modules, it states that “the Army maintenance concept does not call for [contractor support] at any level . . . .”

AFOTEC reported: “Approximately 80 contractors were deployed to support the E-8C. However, three or four systems engineers flew on each flight to ensure they could provide system stability and troubleshooting expertise during missions. Additionally, three or four software engineers were on the ground full time, researching and developing fixes to software problems identified during the deployment.” AFOTEC also reported that “Each of the [ground station modules] had one contractor representative on site and on call with additional help available as necessary. Five contractor representatives remained at [Rhein-Main Air Base] and functioned as a depot.” The AFOTEC report stated that the “test director agreed to contractor participation in the [operational evaluation] to a greater extent [than] permitted under US Public Law, Title 10, Section 2399.”

When we formally expressed our concerns about the significant contractor involvement in Operation Joint Endeavor, DOD did not directly acknowledge that contractors were utilized beyond the constraints of the law governing operational test and evaluation. It stated that “were this solely an [operational test], contractors would not have been utilized beyond the constraints of 10 U.S.C. §2399,” and noted that the contractors were involved in the Joint STARS operation to support the mission. It further stated that employing Joint STARS in Operation Joint Endeavor “allowed the system to be operated and tested at a greater operational tempo than the system would have undergone in traditional testing.” DOD also stated that “because of the developmental nature of the aircraft, we needed to have more contractor personnel involved than we would otherwise have had.”

It is understandable that DOD wanted to provide the best support possible in Operation Joint Endeavor. However, such significant contractor use neither supports a conclusion that the system is operationally effective or suitable for combat, nor is it indicative of a level of system maturity that justifies full-rate production. Joint STARS’ failure to meet its maintainability criteria during an operation less demanding than combat, even with such significant contractor involvement beyond that planned for in combat, also raises the question of the Air Force’s ability to develop a cost-effective maintenance plan for the system. This issue is recognized in the Under Secretary’s acquisition decision memorandum approving Joint STARS full-rate production. 
In that memorandum, the Under Secretary called for the Air Force to fully examine Joint STARS affordability, sustainability, and life-cycle costs, including the scope of contractor support. In its Beyond LRIP report, DOT&E stated: “If it is determined that the system will be operated at rates similar to AWACS [Airborne Warning And Control System], it is questionable whether the [Joint STARS aircraft] can be sustained over time. Airframe problems have already been experienced on the existing [Joint STARS airframes], including a hydraulics failure and a cracked strut in the fuselage between the wings.” In discussing the Joint STARS aircraft engines, DOT&E noted that they “are 1950s technology and may not be reliable” and cited AFOTEC’s reporting that engine failures were among the principal reasons that the aircraft failed to meet the break rate criteria during Operation Joint Endeavor. The report also stated that the aircraft “. . . would face operational challenges taking off from five runways in Korea, each approximately 9,000 feet long. Operations out of Korea would likely require taking off with less fuel and subsequent aerial refueling or shortening the time on station.”

Another area of Joint STARS suitability concern is the system’s growth potential. DOT&E has reported that it is not clear that the remanufactured 707 platforms will be capable of incorporating all of the planned upgrades, noting that the airframe limits the system’s growth potential both in weight and volume. It reported that as the current mission equipment already fills much of the fuselage, there is little room for expansion. DOT&E also noted that increasing the payload weight would require longer takeoff runways or taking off with less fuel, thus increasing the aerial refueling requirement or decreasing mission duration. DOT&E also noted that the system’s current computers limited its growth potential due to their having very little reserve processor time or memory. It stated that the Air Force requires that no more than 50 percent of central processor unit cycles or memory be utilized by a new system. DOT&E reported that “None of the E-8C computer subsystems meet these requirements.” It provided an example of the problem, stating that “the memory reserve of the operator workstations still does not meet the requirement, even after being increased from 128 megabytes to 512 megabytes just prior to [Operation Joint Endeavor].” This assessment is another indicator of the program’s elevated risk. As DOT&E noted, “Future software enhancements and modifications may require significant hardware upgrades. . . .”

The AFOTEC report specifically pointed to the lack of maturity in Joint STARS software. For example, AFOTEC reported that “during Joint STARS [testing], software deficiencies were noted on every E-8C subsystem;” the software “does not adequately support [the] operator in executing the mission;” and “Joint STARS software does not show the expected maturity trends of a system at the end of development.” In discussing Joint STARS software maturity, DOD advised us that the AFOTEC report judged the system overall operationally effective and suitable. Specifically, in reference to software problems, DOD stated that “the majority of software faults that occurred during Operation Joint Endeavor were resolved while airborne in less than 10 minutes.” However, both AFOTEC and DOT&E had some critical concerns regarding how Joint STARS software functioned. 
For example, according to AFOTEC, the “Joint STARS software is immature and significantly impedes the system’s reliability and effectiveness,” and according to DOT&E, “Immature software was clearly a problem during [Operation Joint Endeavor]. . .” Among the problems reported were that “. . .the prime contractor had to be called in to assist and correct 69 software-specific problems during the 41 E-8C missions . . . an average of 1.4 critical failures per flight. . .”; that “Communications control was lost on 69 percent of the flights”; and that “The system management and control processor failed and had to be manually reset on half of the flights.”

DOD has stated that the Air Force “plans several actions to mature the software and provide the required support resources” and that “an interim software release in April 1997 will correct some software deficiencies identified during the operational evaluation.” DOD also noted that software updates will be loaded each year thereafter and that software changes are easily incorporated. How easily these software changes are incorporated remains to be seen because much of this software, according to AFOTEC and DOT&E, is poorly documented. For example, AFOTEC has reported that there are 395 deficiency reports open against the Joint STARS program, 318 of which are software related. DOT&E also stated that the more than 750,000 lines of Joint STARS software code are “poorly documented” and later commented that “Software problems with the communications and navigation systems were never fully corrected, even after extensive efforts by the system contractor.” These facts in combination with DOD’s comments raise the serious question as to which software deficiencies are to be addressed in the planned April software update.

There is an opportunity not currently under consideration that could reduce the Joint STARS program cost and result in an improved system. Since the Joint STARS was approved for LRIP, the procurement cost objective of the Air Force’s share of the Joint STARS has increased by about $1 billion. Program costs escalated from approximately $5.2 billion to approximately $6.2 billion in then-year dollars. A DOD official informed us that of the $1 billion cost growth, $760 million is attributed to the increased cost to buy, refurbish, and modify the used 707 airframes to receive the Joint STARS electronics. The remaining cost growth is attributed to other support requirements and growth in required spare parts. At least as early as 1992, the Boeing Company proposed putting Joint STARS on newer Boeing 767-200 Extended Range aircraft, but this proposal was not accepted as cost-effective. According to the 1996 Boeing price list, the commercial version of this aircraft can be bought for between $82 million and $93 million depending on options chosen (this is flyaway cost—the cost of a plane ready to be flown in its intended use). Furthermore, the flyaway cost of a commercial Boeing 757, which a Boeing representative informed us is in many respects more comparable to the 707s being used, is listed at between $61 million and $68 million. The actual cost of procuring either of these aircraft could be lowered by volume discounts and by omitting commercial amenities that would not be required. On the other hand, these aircraft would require modifications to receive Joint STARS equipment, which would raise their cost. DOD informed us that the cost of procuring, refurbishing, and modifying the current 707 aircraft to receive Joint STARS equipment is now estimated to be about $110 million per airframe. (A simple side-by-side view of these figures appears in the sketch below.) 
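To put the airframe cost figures above side by side, the following illustrative sketch compares them. The flyaway price ranges and the roughly $110 million per-airframe cost of the current 707 approach come from the text; the report gives no estimate of what modifying a new aircraft for Joint STARS would cost, so the sketch treats that as the unknown and simply computes the modification budget at which a new airframe would match the current per-airframe cost.

```python
# Illustrative per-airframe cost comparison, in millions of then-year dollars.
# Flyaway price ranges and the ~$110 million figure for a refurbished 707 come
# from the report text; modification costs for new aircraft are not given in
# the report and are treated here as the unknown to solve for.

CURRENT_707_COST = 110.0  # procure, refurbish, and modify a used 707

FLYAWAY_PRICES = {
    "Boeing 767-200ER": (82.0, 93.0),
    "Boeing 757": (61.0, 68.0),
}

def breakeven_modification_budget(price_range):
    """Modification spending at which a new airframe equals the 707 cost."""
    low, high = price_range
    return CURRENT_707_COST - high, CURRENT_707_COST - low

for aircraft, price_range in FLYAWAY_PRICES.items():
    lo, hi = breakeven_modification_budget(price_range)
    print(f"{aircraft}: matches the $110M per-airframe cost if modifications "
          f"run between ${lo:.0f}M and ${hi:.0f}M")
```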
The cost of procuring and preparing new aircraft might be comparable to or even less than the current cost. In addition, the Air Force would acquire a new platform that could have (1) greater room for growth (both volume and weight), (2) takeoff capability from a shorter runway, (3) greater time-on-station capability, (4) significantly improved fuel efficiency, (5) extended aircraft life over the 707 currently used, and (6) reduced operational and support cost.

In commenting on a draft of this report, DOD stated that it considered alternatives to the current air platform, both before LRIP started and at the full-rate production decision point. It also stated that the cost of moving the Joint STARS mission to an alternative platform would outweigh the benefits. We note, however, that at a meeting with DOD and service officials to discuss that draft, we asked about the reported DOD and service analyses. One Air Force official stated that the Air Force’s platform choice was not revisited prior to the full-rate production decision. None of the other 13 DOD and service officials present objected to that statement. Furthermore, when we asked for copies of the air platform analyses that were done in support of either the low-rate or the full-rate production decision, DOD was unable to supply those analyses. Finally, DOD officials have informed us that a Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance Mission Assessment has been performed that indicates that the Air Force could acquire a more effective system while saving $3 billion through the year 2010 by moving the Joint STARS mission to either a business jet or an unmanned aerial vehicle following the procurement of the twelfth current-version Joint STARS aircraft.

We have previously informed DOD of our concerns about the decision to move to full-rate production in spite of the numerous testing deficiencies reported by both AFOTEC and DOT&E. DOD responded that in making the decision to move to full-rate production, it “considered the test reports (both the services’ and the Director, Operational Test and Evaluation’s), the plans to address the deficiencies identified during developmental and operational testing, cost estimates, operational requirements, and other program information.” Although DOD believes that “none of the deficiencies identified are of a scope that warrants interrupting production,” the production decision memorandum clearly reflects a recognition that this program carries significant risk. In his memorandum, the Under Secretary of Defense for Acquisition and Technology directed (1) an update of the Joint STARS Test and Evaluation Master Plan to “address multi-service [operational test and evaluation] deficiencies (regression testing);” (2) acceleration of the objective and threshold dates for the planned Follow-on Operational Test and Evaluation; and (3) the Air Force to “fully examine [Joint STARS] affordability, sustainability, and life cycle costs including the scope of contractor use for field-level system support.”

While the full-rate production decision was under consideration, the Assistant to the President for National Security was promoting the sale of the system to NATO. In an August 10, 1996, memorandum to the Secretaries of State, Defense, and Commerce and to the Chairman of the Joint Chiefs of Staff, he stated: “I am writing to be sure you know that the President is personally committed to [Joint STARS], has engaged Chancellor Kohl on this issue and will continue his personal involvement with key allies to ensure our goal is achieved. 
I would ask that you underscore your personal support for our collective efforts on behalf of [Joint STARS] when you meet with your NATO and European counterparts.” Notwithstanding DOD’s September 1996 commitment to full-rate Joint STARS production, a DOD official informed us that the NATO armament directors in their November 1996 meeting delayed for 1 year any decision on designating Joint STARS as NATO’s common system or pursuing an alternate system to be developed. In the process of moving the Joint STARS program forward into full-rate production, DOD produced a Beyond LRIP report for Congress and thus moved past a key congressional reporting requirement that serves as an important risk management mechanism. The Beyond LRIP report to Congress that is required before major defense acquisition programs can proceed into full-rate production serves to inform Congress of the adequacy of the operational testing done on the system and to provide it with a determination of whether the system has demonstrated effectiveness and suitability. Having issued this report, DOT&E is under no further obligation to report to Congress at the Beyond LRIP report level of detail on the adequacy of the operational testing or on whether the system has demonstrated effectiveness and suitability for combat. However, DOD plans follow-on test and evaluation of the system to address the deficiencies identified during the system’s earlier testing. On September 20, 1996, DOT&E sent to Congress a Joint STARS “Beyond LRIP” report that (1) clearly indicates that further operational testing is needed, (2) could only declare effectiveness for operations other than war, and (3) stated that Joint STARS is unsuitable as tested. On September 25, 1996, DOD approved the full-rate production of Joint STARS. In the acquisition memorandum approving Joint STARS full-rate production, the Under Secretary of Defense for Acquisition and Technology called for an accelerated follow-on operational test and evaluation of Joint STARS that is to address the deficiencies identified in the initial operational test and evaluation DOT&E reported on in the Beyond LRIP report to Congress. The planned follow-on operational test and evaluation will provide an opportunity to judge the Joint STARS program’s progress in resolving the issues identified in earlier testing. Notwithstanding any concurrent efforts to have Joint STARS designated as a NATO common system, Joint STARS test performance and the clearly unresolved questions about its operational suitability and affordability should have, in our opinion, caused DOD to delay the full-rate production decision until (1) the system had, through the planned follow-on operational test and evaluation, demonstrated operational effectiveness and suitability; (2) the Air Force had completed an updated analysis of alternatives for the Joint STARS to address the identified aircraft suitability and cost issues; and (3) the Air Force had developed an analysis to determine whether a cost-effective maintenance concept could be designed for the system. Furthermore, as they were judged “essential” to mission accomplishment and needed “to achieve a capable Joint STARS system,” the satellite communications and deployable ground support station features (present, but untested, during Operation Joint Endeavor) should also be tested during the planned Joint STARS follow-on operational test and evaluation. Concerns of the magnitude discussed in this report are not indicative of a system ready for full-rate production. 
The program should have continued under LRIP until the issues identified by AFOTEC and DOT&E were resolved and the system was shown to be effective and suitable for combat. Furthermore, the recent cost growth related to refurbishing and modifying the old airframes being used for Joint STARS and questions regarding the suitability of those platforms indicate an opportunity to reduce the program’s cost and improve the systems acquired. We believe, therefore, that an updated study of the cost-effectiveness of placing Joint STARS on new, more capable aircraft is warranted. We recommend that the Secretary of Defense direct the Air Force to perform an analysis of possible alternatives to the current Joint STARS air platform, to include placing this system on a new airframe.

Because of (1) DOD’s decision to commit to full-rate production in the face of the test results discussed in this report and (2) its subsequent decision to do additional tests while in production to address previous test deficiencies, we are convinced that DOD plans to proceed with the program. However, if Congress agrees that there is unnecessarily high risk in this program and believes the risk should be reduced, it may wish to require that (1) the Air Force obtain DOT&E approval of a revised test and evaluation master plan (and all plans for the tests called for in that master plan) for follow-on operational testing to include adequate coverage of gaps left by prior testing and include testing of any added features considered part of the standard production configuration and that DOT&E considers key system components; (2) DOT&E provide a follow-on test and evaluation report to Congress evaluating the adequacy of all testing performed to judge operational effectiveness and suitability for combat and a definitive statement stating whether the system has demonstrated operational effectiveness and suitability; and (3) DOD develop and provide Congress an analysis of alternatives report on the Joint STARS air platform that considers the suitability of the current platform and other cost-effective alternatives, and the life-cycle costs of the current platform and best alternatives.

In commenting on a draft of this report, DOD disagreed with our recommendation that the Air Force be directed to perform an analysis of possible alternatives to the current Joint STARS air platform. It also disagreed with our suggestion that Congress may wish to require DOD to develop and provide Congress a report on that analysis. DOD stated that alternative platforms were considered prior to both the start of LRIP and the full-rate production decision. DOD stated that based on (1) the fact that over half the fleet is already in the remanufacturing process or delivered to the user; (2) the large nonrecurring costs that would be associated with moving the Joint STARS mission to a different platform; (3) the additional cost to operate and maintain a split fleet of Joint STARS airframes; and (4) the expected 4-year gap in deliveries that such a strategy would force, the costs of moving the Joint STARS mission to a different platform would outweigh the benefits. DOD’s comment about having previously considered alternative platforms is inconsistent with the information we developed during our review and with Air Force comments provided at our exit conference. In an effort to reconcile this inconsistency, we requested copies of the prior analyses of alternative platforms, but DOD was not able to provide them. 
DOD’s statement that the costs of moving the Joint STARS mission to another platform would outweigh the benefits contradicts Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance Mission Assessment briefings given the Quadrennial Defense Review. Those briefings recommend (1) limiting Joint STARS production to 12 aircraft, (2) moving the Joint STARS mission to either corporate jets or Unmanned Aerial Vehicles, and (3) phasing out Joint STARS 707 variants as quickly as the new platform acquisitions will allow. According to those briefings, implementation of this recommendation could result in a more effective system and save over $3 billion through fiscal year 2010. We believe that the issue clearly warrants further consideration. Furthermore, given DOD’s resistance to the concept, we are more convinced of the merits of our suggestion that Congress might wish to require a report on such an analysis. In commenting on our draft report, DOD also indicated that congressional direction was unneeded on our suggestions that Congress might wish to require (1) DOT&E approval of a revised test and evaluation master plan for the planned Joint STARS follow-on operational test and evaluation and (2) DOT&E to provide Congress with a follow-on operational test and evaluation report on the adequacy of Joint STARS testing and stating whether Joint STARS has demonstrated operational effectiveness and suitability. DOD stated that congressional direction on the first point was unneeded because the Joint STARS full-rate production decision memorandum required that the test and evaluation master plan be updated for Office of the Secretary of Defense approval and current DOD policy is that DOT&E will review, approve, and report on oversight systems in follow-on operational test and evaluation. DOD also stated that congressional direction on the second point is unneeded because DOT&E has retained Joint STARS on its list of programs for oversight and is to report on the system in its annual report to Congress as appropriate. DOD’s response did not directly address our point since, as DOD pointed out, the acquisition decision memorandum that approved full-rate production required Office of the Secretary of Defense approval, not DOT&E approval, of the follow-on operational test and evaluation master plan. During the course of our review, DOD officials informed us that there was significant disagreement between the Air Force and DOT&E as to what follow-on testing was needed. It was indicated that the issue would probably have to be resolved at higher levels within the department, an indication of greater flexibility than DOD implies. Furthermore, while DOD stated there were some improvements and enhancements “that could benefit the warfighter” and acknowledged that those features were not tested, it did not respond to our comments that DOT&E judged those features “essential” to mission accomplishment or commit to their operational test and evaluation. Given these facts, we have not only maintained our suggestion that Congress may wish to require the Air Force to obtain DOT&E approval of a revised test and evaluation master plan, but also strengthened it to include DOT&E approval of supporting test plans. In its response to our suggestion that Congress may wish to require that DOT&E provide it a detailed, follow-on test and evaluation report, DOD states congressional direction is unnecessary as DOT&E will report on the system, among many others, in its annual report to Congress. 
DOD’s comment fails to recognize, however, that we are suggesting that, given the already reported test results, Congress may wish to have a more detailed report outlining the adequacy of, and the system’s performance during, follow-on operational testing to help in its oversight and provide it assurance that the system’s problems have been substantially resolved. Given that (1) Congress felt such reporting to be beneficial enough to require it before a system can proceed beyond LRIP and (2) the fact that DOT&E, in the required report provided for Joint STARS, could not certify effectiveness for war and found the system unsuitable as tested, we continue to believe that Congress may wish to require a similar report based on the follow-on operational test and evaluation planned. DOD’s comments are reprinted in their entirety in appendix I, along with our evaluation.

To determine whether Joint STARS test performance indicates a maturity justifying full-rate production, we interviewed officials and reviewed documents in Washington, D.C., from the DOD Office of the Director of Operational Test and Evaluation and the Joint STARS Integrated Product Team. We reviewed the Air Force Operational Test and Evaluation Center’s multi-service operational test and evaluation plan and its final report on that testing and the DOD Director of Operational Test and Evaluation’s Beyond LRIP report. To determine whether DOD considered and resolved important cost and performance issues prior to making its full-rate production decision, we reviewed Joint STARS program budget documents and program-related memoranda issued by the Under Secretary of Defense for Acquisition and Technology. To determine whether it is possible that a more useful operational test and evaluation report can be provided to Congress, we reviewed the statute governing operational testing and evaluation, examined DOT&E’s Beyond LRIP report, and considered other relevant program information. We considered and incorporated where appropriate DOD’s response to our September 20, 1996, letter of inquiry and its response to a draft of this report. We conducted this review from October 1996 through April 1997 in accordance with generally accepted government auditing standards.

We are sending copies of this letter to other appropriate congressional committees; the Director, Office of Management and Budget; and the Secretaries of Defense, the Army, and the Air Force. Copies will also be made available to others upon request. If you or your staff have any questions, please contact me, Mr. Charles F. Rey, Assistant Director, or Mr. Bruce Thomas, Evaluator-in-Charge, at (202) 512-4841.

The following are GAO’s comments on the Department of Defense’s (DOD) letter dated March 31, 1997. 1. We have not suggested or recommended that Joint STARS production be interrupted. We have, however, suggested actions that we believe (1) will help reduce the program’s risk; (2) could result in the acquisition of a more effective, less costly system; and (3) could help decisionmakers ensure that the Joint STARS program continues to make progress. 2. The report has been modified in light of DOD’s comments. 3. DOD’s indication that other factors were considered in deciding to proceed to full-rate production is a signal that DOD and the Air Force are willing to accept a high level of risk even when the Director, Operational Test and Evaluation (DOT&E) has concluded that the system was unsuitable as tested and operational effectiveness for war remains to be demonstrated. 
We believe, given the system’s test performance as reported by both the Air Force Operational Test and Evaluation Center (AFOTEC) and DOT&E and the program’s procurement cost growth of $1 billion between the low-rate and full-rate production decision points, that an informed full-rate production decision required the following information: (1) an approved test and evaluation master plan for follow-on operational testing and specific plans for the tests called for in that master plan, (2) the results of the already ongoing study of ways to reduce the program’s cost, and (3) an analysis of alternatives to the current platform. DOD did not have these items in hand when it made its decision. We must also note that DOD implies that our recommendations would require a break in production. This is inaccurate. As we stated in the body of our report, the program could have continued under low-rate initial production (LRIP) until operational effectiveness and suitability for combat were demonstrated and plans to address identified deficiencies and reduce program costs were completed.

4. In its report on the Joint STARS multi-service operational test and evaluation, AFOTEC stated that “Joint STARS software is immature and significantly impedes the system’s reliability and effectiveness.” We do not believe that, given the software-intensive nature of the system, this statement supports a conclusion that the system could be judged operationally effective.

5. We must note that follow-on operational test and evaluation of the system was planned before the full-rate production decision. The full-rate production decision called for acceleration of that testing and for that testing to address deficiencies identified in the earlier tests. Joint STARS could have continued under LRIP pending a demonstration of operational effectiveness and suitability.

6. This speaks to the number of aircraft missions planned and the number for which an aircraft was provided. It does not address the quality or quantity of the support provided during those missions. Furthermore, DOD’s comment refers to the same operation that is reported on in both the Air Force and DOT&E reports and in this report.

7. U.S.-based contractor support was utilized during the first Operation Joint Endeavor deployment. It is also our understanding that during the second Operation Joint Endeavor deployment the Air Force may have utilized a “reach-back” maintenance concept in which U.S.-stationed contractor staff were providing field support through satellite communications. Moreover, DOD and Air Force officials told us that at least at the beginning of the second Operation Joint Endeavor deployment, contractor staff were flying on the deployed aircraft. This clearly raises the question of what the overall level of contractor support was for both the first and second deployments.

“As already discussed, extensive efforts by the system contractor were required to achieve the demonstrated availability for the E-8C aircraft. Even with those efforts the system was not able to meet the user criteria for several measures directly related to the maintenance concept in place during [Operation Joint Endeavor]—a concept that involved considerably more contractor support than previously envisioned.”

11. As we noted in the body of our report, Joint STARS failed to meet test criteria during an operation less demanding than combat, even with such significant contractor involvement beyond that planned for in combat. 
In discussing operational tempo in its Beyond LRIP report, DOT&E stated that if the system is operated at rates similar to the Airborne Warning and Control System, “it is questionable whether the [Joint STARS aircraft] can be sustained over time.” DOD commented that an unbiased assessment of Joint STARS’ ability to maintain the required operational tempo could not be made and that this would be tested during the follow-on operational test and evaluation. We believe that an informed full-rate production decision requires knowledge of a system’s ability to satisfy the operational tempo expected of it. DOD made its Joint STARS full-rate production decision without this knowledge.

12. We understand that Joint STARS, like most systems, has limitations that need to be planned around. At issue here is a question of how great those limitations are and whether they are acceptable. DOD states that “the user is satisfied that the system meets requirements.” However, we must note that the Air Force’s own Operational Test and Evaluation Center reported that the “two critical suitability [measures of performance, sortie generation rate and mission reliability rate], were affected by [Operation Joint Endeavor] contingency requirements and system stability problems.” AFOTEC’s report stated: “The high failure rate of aging aircraft components affected [mission reliability] as critical failures were statistically determined to affect over 30 percent of the sorties flown. Analysis revealed the elevated critical failure rate was steady and showed no potential for improvement. Technical data and software immaturity affected the maintainability of the aircraft, and contractor involvement further compromised clear insight into the Air Force technicians’ ability to repair the system.” AFOTEC also reported on Joint STARS performance relative to 15 supporting suitability criteria. It stated “Eight did not meet users’ criteria. One was not tested. Only one . . . met the users’ criteria. The remaining five are reported using narrative results.”

13. DOD discusses only the weight growth of funded activities, leaving open the question of whether there are future, but currently unfunded, improvements planned that will add weight growth. Air Force officials told us that the Airborne Warning and Control System had experienced weight growth over the life of its program. That growth was attributed to the system’s being given added tasks over time. We believe it reasonable to expect that the Joint STARS program experience might track that of the Airborne Warning and Control System program, that is, be given added tasks and face weight growth as a result. Also, regarding Joint STARS room for growth, DOD previously advised us that Joint STARS currently has about 455,000 cubic inches of space available. We must note that this equates to a cube measuring less than 7 feet on a side and that in commenting on the system’s space limitation, DOT&E stated “There is little room available for additional people or operator workstations.”

14. As we stated in the body of our report, how easily these software changes are incorporated remains to be seen.

15. We requested and DOD provided additional information on this point. DOD’s subsequent response indicates that this DOD comment was in error. In its subsequent response, DOD stated that the follow-on test and evaluation was accelerated “to reflect desire for earlier [testing] to evaluate fixes to deficiencies.” We believe this statement reflects a recognition of increased program risk.

16. 
The acquisition decision memorandum approving Joint STARS production clearly indicates that the Skantze study mentioned was not completed at that time. We believe that the full-rate production decision should have been made with the Skantze study in hand. Furthermore, we do not understand why DOD felt the need to direct the Air Force to fund and implement a plan that is to save it money, but felt no need to direct the Air Force to examine alternative platforms that at least one other DOD panel had stated would not only save $3 billion but also provide greater effectiveness.

17. We believe that not only should DOT&E approval of the Joint STARS Test and Evaluation Master Plan be required, but also of all supporting test plans. We have changed the language of this matter for congressional consideration accordingly.

18. We are suggesting that Congress may wish to request a more detailed report, one at the Beyond LRIP report level of detail, a level of detail not provided in DOT&E’s annual report. Given that DOT&E could only state effectiveness for operations other than war—could not state a belief as to whether the system would be effective in two of the three critical operational roles it is expected to perform in war—and found the system unsuitable as tested, we believe that such a report would help Congress maintain program oversight. DOD’s comment of “other reports as appropriate” leaves the matter in DOD’s hands to decide if Congress would benefit from such a report. | GAO reviewed the Department of Defense's (DOD) recent decision to commit to the full-rate production of the Joint Surveillance Target Attack Radar System (Joint STARS), focusing on whether: (1) the system had demonstrated a level of maturity through testing to justify a full-rate production commitment; (2) DOD considered and resolved important cost and performance issues prior to making its decision; and (3) there are future actions that could reduce program risk. 
GAO noted that: (1) Joint STARS' performance during its combined development and operational test and the operational evaluation done in Bosnia do not support a decision to commit the system to full-rate production; (2) the system's operational effectiveness and suitability were not demonstrated during the operational testing; (3) DOD's decision to move Joint STARS into full-rate production was premature and raised the program's level of risk; (4) the program could have continued under low-rate initial production (LRIP) until operational effectiveness and suitability for combat were demonstrated and plans to address identified deficiencies and reduce program costs were completed; (5) DOD decided in favor of Joint STARS full-rate production without the benefit of that information; (6) during the period that the full-rate production decision was being considered, the Assistant to the President for National Security was promoting the sale of the system to the North Atlantic Treaty Organization (NATO); (7) in an August 10, 1996, memorandum to the Secretaries of State, Defense, and Commerce and to the Chairman of the Joint Chiefs of Staff, the Assistant to the President stated that: "We have been working through various military, diplomatic, and political channels to secure NATO support for a fall 1996 decision in principle by the Conference of Armament Directors...to designate (Joint STARS) as NATO's common system"; (8) a DOD official informed GAO that in November 1996, the NATO armament directors delayed their decision on Joint STARS for 1 year; (9) before DOD approved the full-rate production of Joint STARS, the Director of Operational Test and Evaluation (DOT&E) provided Congress with a Joint STARS "Beyond LRIP" report; (10) the report clearly indicates that further operational testing is needed, DOT&E could only declare effectiveness for operations other than war, and the system was unsuitable as tested; (11) DOD plans follow-on test and evaluation to address the deficiencies identified during the earlier testing; (12) there is an opportunity not currently under consideration that could reduce the Joint STARS' program cost and result in an improved system; (13) since the Joint STARS was approved for LRIP, the procurement cost objective of the Air Force's share of the Joint STARS has increased by about $1 billion, primarily due to the greater effort and more resources needed to refurbish the 25- to 30-year-old Boeing 707 airframes than previously anticipated; and (14) it may now be more cost effective for the Air Force to buy the Boeing 767-200 Extended Range aircraft or some other new, more capable aircraft. |
Yellowstone National Park is at the center of about 20 million acres of publicly and privately owned land, overlapping three states—Idaho, Montana, and Wyoming. This area is commonly called the greater Yellowstone area or ecosystem and is home to numerous species of wildlife, including the largest concentration of free-roaming bison in the United States. Bison are considered an essential component of this ecosystem because they contribute to the biological, ecological, cultural, and aesthetic purposes of the park. However, because the bison are naturally migratory animals, they seasonally attempt to migrate out of the park in search of suitable winter range. The rate of exposure to brucellosis in Yellowstone bison is currently estimated at about 50 percent. Transmission of brucellosis from bison to cattle has been documented under experimental conditions, but not in the wild. Scientists and researchers disagree about the factors that influence the risk of wild bison transmitting brucellosis to domestic cattle and are unable to quantify the risk. Consequently, the IBMP partner agencies are working to identify risk factors that affect the likelihood of transmission, such as the persistence of the brucellosis-causing bacteria in the environment and the proximity of bison to cattle, and are attempting to limit these risk factors using various management actions.

The National Park Service first proposed a program to control bison at the boundary of Yellowstone National Park in response to livestock industry concerns over the potential transmission of brucellosis to cattle in 1968. Over the next two decades, concerns continued over bison leaving the park boundaries, particularly after Montana’s livestock industry was certified brucellosis-free in 1985. In July 1990, the National Park Service, Forest Service, and Montana’s Department of Fish, Wildlife and Parks formed an interagency team to examine various alternatives for the long-term management of the Yellowstone bison herd. Later, the interagency team was expanded to include USDA’s Animal and Plant Health Inspection Service and the Montana Department of Livestock. In 1998, USDA and Interior jointly released a draft environmental impact statement (EIS) analyzing several proposed alternatives for long-term bison management and issued a final EIS in August 2000. In December 2000, the interagency team agreed upon federal and state records of decision detailing the long-term management approach for the Yellowstone bison herd, commonly referred to as the IBMP. The stated purpose of the plan is to “maintain a wild, free-ranging population of bison and address the risk of brucellosis transmission to protect the economic interest and viability of the livestock industry in Montana.”

Although managing the risk of brucellosis transmission from bison to cattle is at the heart of the IBMP, the plan does not seek to eliminate brucellosis in bison. The plan instead aims to create and maintain a spatial and temporal separation between bison and cattle sufficient to minimize the risk of brucellosis transmission. In addition, the plan allows for the partner agencies to make adaptive management changes as better information becomes available through scientific research and operational experience. Under step one of the plan, bison are generally restricted to areas within or just beyond the park’s northern and western boundaries. Bison attempting to leave the park are herded back to the park. 
When attempts to herd the bison back to the park are repeatedly unsuccessful, the bison are captured or lethally removed. Generally, captured bison are tested for brucellosis exposure. Those that test positive are sent to slaughter, and eligible bison—calves and yearlings that test negative for brucellosis exposure—are vaccinated. Regardless of vaccination eligibility, partner agency officials may take a variety of actions with captured bison that test negative, including temporarily holding them in the capture facility for release back into the park or removing them for research.

In order to progress to step two, cattle can no longer graze in the winter on certain private lands north of Yellowstone National Park and west of the Yellowstone River. Step two, which the partner agencies expected to reach by the winter of 2002/2003, would use the same management methods on bison attempting to leave the park as in step one, with one exception—a limited number of bison, up to a maximum of 100, that test negative for brucellosis exposure would be allowed to roam in specific areas outside the park. Finally, step three would allow a limited number of untested bison, up to a maximum of 100, to roam in specific areas outside the park when certain conditions are met. These conditions include determining an adequate temporal separation period, gaining sufficient experience in managing bison in the bison management areas, and initiating an effective vaccination program using a remote delivery system for eligible bison inside the park. The partner agencies anticipated reaching this step on the northern boundary in the winter of 2005/2006 and the western boundary in the winter of 2003/2004.

In 1997, as part of a larger land conservation effort in the greater Yellowstone area, the Forest Service partnered with the Rocky Mountain Elk Foundation—a nonprofit organization dedicated to ensuring the future of elk, other wildlife, and their habitat—to develop a Royal Teton Ranch (RTR) land conservation project. The ranch is owned by and serves as the international headquarters for the Church Universal and Triumphant, Inc. (the Church)—a multi-faceted spiritual organization. It is adjacent to the northern boundary of Yellowstone National Park and is almost completely surrounded by Gallatin National Forest lands. The overall purpose of the conservation project was to preserve critical wildlife migration and winter range habitat for a variety of species, protect geothermal resources, and improve recreational access. The project included several acquisitions from the Church, including the purchase of land and a wildlife conservation easement, a land-for-land exchange, and other special provisions such as a long-term right of first refusal for the Rocky Mountain Elk Foundation to purchase remaining RTR lands. The project was funded using fiscal years 1998 and 1999 Land and Water Conservation Fund appropriations totaling $13 million.

Implementation of the IBMP remains in step one because cattle continue to graze on RTR lands north of Yellowstone National Park and west of the Yellowstone River. All Forest Service cattle grazing allotments on its lands near the park are held vacant, and neither these lands nor those acquired from the Church are occupied by cattle. The one remaining step to achieve the condition of cattle no longer grazing in this area is for the partner agencies to acquire livestock grazing rights on the remaining private RTR lands. 
Until cattle no longer graze on these lands, no bison will be allowed to roam beyond the park’s northern border, and the agencies will not be able to proceed further under the IBMP. Interior attempted, unsuccessfully, to acquire livestock grazing rights on the remaining RTR lands in August 1999. The Church and Interior had signed an agreement giving Interior the option to purchase the livestock grazing rights, contingent upon a federally approved appraisal of the value of the grazing rights and fair compensation to the Church for forfeiture of this right. The appraisal was completed and submitted for federal review in November 1999. In a March 2000 letter to the Church, Interior stated that the federal process for reviewing the appraisal was incomplete and terminated the option to purchase the rights. As a result, the Church continues to exercise its right to graze cattle on the RTR lands adjacent to the north boundary of the park, and the agencies continue operating under step one of the IBMP. More recently, the Montana Department of Fish, Wildlife and Parks has re-engaged Church officials in discussions regarding a lease arrangement for Church-owned livestock grazing rights on the private RTR lands. Given the confidential and evolving nature of these negotiations, specific details about funding sources or the provisions being discussed, including the length of the lease and other potential conditions related to bison management, are not yet available.

Although the agencies continue to operate under step one of the plan, they reported several accomplishments in their September 2005 Status Review of Adaptive Management Elements for 2000-2005. These accomplishments included updating interagency field operating procedures, vacating national forest cattle allotments within the bison management areas, and conducting initial scientific studies regarding the persistence of the brucellosis-causing bacteria in the environment.

The lands and conservation easement acquired by the federal government through the RTR land conservation project were intended to provide critical habitat for a variety of wildlife species including bighorn sheep, antelope, elk, mule deer, bison, grizzly bear, and Yellowstone cutthroat trout; however, the value of this acquisition for the Yellowstone bison herd is minimal because bison access to these lands remains limited. The Forest Service viewed the land conservation project as a logical extension of past wildlife habitat acquisitions in the northern Yellowstone region. While the Forest Service recognized bison as one of the migrating species that might use the habitat and noted that these acquisitions could improve the flexibility of future bison management, the project was not principally directed at addressing bison management issues. Through the RTR land conservation project, the federal government acquired from the Church a total of 5,263 acres of land and a 1,508-acre conservation easement using $13 million in Land and Water Conservation Fund appropriations. As funding became available and as detailed agreements could be reached with the Church, the following two phases were completed. In Phase I, the Forest Service used $6.5 million of its fiscal year 1999 Land and Water Conservation Fund appropriation to purchase Church-owned lands totaling 3,107 acres in June and December 1998 and February 1999. 
Of these lands, 2,316 acres were RTR lands, 640 acres were lands that provided strategic public access to other Gallatin National Forest lands, and 151 acres were an in-holding in the Absaroka Beartooth Wilderness area. In Phase II, BLM provided $6.3 million of its fiscal year 1998 Land and Water Conservation Fund appropriations for the purchase of an additional 2,156 acres of RTR lands and a 1,508-acre conservation easement on the Devil’s Slide area of the RTR property in August 1999. In a December 1998 letter to the Secretary of the Interior from the Chairs and Ranking Minority Members of the House and Senate Committees on Appropriations, certain conditions were placed on the use of these funds. The letter stated that “the funds for phase two should only be allocated by the agencies when the records of decision for the ‘Environmental Impact Statement for the Interagency Bison Management Plan for the State of Montana and Yellowstone National Park’ are signed and implemented.” The letter also stated that the Forest Service and Interior were to continue to consult with and gain the written approval of the governor of Montana regarding the terms of the conservation easement. Under the easement, numerous development activities, including the construction of commercial facilities and roads, are prohibited. However, the Church specifically retained the right to graze domestic cattle in accordance with a grazing management plan that was to be reviewed and approved by the Church and the Forest Service. The Church’s grazing management plan was completed in December 2002, and the Forest Service determined in February 2003 that it was consistent with the terms of the conservation easement. The Church currently grazes cattle throughout the year on portions of its remaining 6,000 acres; however, as stipulated in the conservation easement and incorporated in the grazing management plan, no livestock can use any of the 1,508 acres covered by the easement between October 15 and June 1 of each calendar year, the time of year that bison would typically be migrating through the area. While purchased for wildlife habitat, geothermal resources, and recreational access purposes, the federally acquired lands and conservation easement have been of limited benefit to the Yellowstone bison. As previously noted, under the IBMP, until cattle no longer graze on private RTR lands north of the park and west of the Yellowstone River, no bison are allowed to migrate onto these private lands and the partner agencies are responsible for assuring that the bison remain within the park boundary. Mr. Chairman, this concludes my prepared statement. Because we are in the very early stages of our work, we have no conclusions to offer at this time regarding these bison management issues. We will continue our review and plan to issue a report near the end of this year. I would be pleased to answer any questions that you or other Members of the Subcommittee may have at this time. For further information on this testimony, please contact me at (202) 512-3841 or nazzaror@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. David P. Bixler, Assistant Director; Sandra Kerr; Diane Lund; and Jamie Meuwissen made key contributions to this statement. Wildlife Management: Negotiations on a Long-Term Plan for Managing Yellowstone Bison Still Ongoing. GAO/RCED-00-7. Washington, D.C.: November 1999.
Wildlife Management: Issues Concerning the Management of Bison and Elk Herds in Yellowstone National Park. GAO/T-RCED-97-200. Washington, D.C.: July 1997. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

Yellowstone National Park, in northwest Wyoming, is home to a herd of about 3,600 free-roaming bison. Some of these bison routinely attempt to migrate from the park in the winter. Livestock owners and public officials in states bordering the park have concerns about the bison leaving the park because many are infected with brucellosis--a contagious bacterial disease that some fear could be transmitted to cattle, thus potentially threatening the economic health of the states' livestock industry. Other interested groups believe that the bison should be allowed to roam freely both within and outside the park. In an effort to address these concerns, five federal and Montana state agencies agreed to an Interagency Bison Management Plan (IBMP) in December 2000 that includes three main steps to "maintain a wild, free-ranging population of bison and address the risk of brucellosis transmission to protect the economic interest and viability of the livestock industry in Montana." This testimony discusses GAO's preliminary observations on the progress that has been made in implementing the IBMP and the extent to which bison have access to lands and an easement acquired for $13 million in federal funds. It is based on GAO's visit to the greater Yellowstone area, interviews with federal and state officials and other interested stakeholders, and review of related documents. More than 6 years after approving the IBMP, the five federal and state partnering agencies--the federal Department of the Interior's National Park Service and Department of Agriculture's Animal and Plant Health Inspection Service and Forest Service, and the state of Montana's Departments of Livestock and of Fish, Wildlife and Parks--remain in step one of the three-step plan primarily because cattle continue to graze on certain private lands. A key condition for the partner agencies to progress further under the plan requires that cattle no longer graze in the winter on certain private lands north of Yellowstone National Park and west of the Yellowstone River to minimize the risk of brucellosis transmission from bison to cattle; the agencies anticipated meeting this condition by the winter of 2002/2003. Until this condition is met, bison will not be allowed to roam beyond the park's northern border in this area. While a prior attempt to acquire grazing rights on these private lands was unsuccessful, Montana's Department of Fish, Wildlife and Parks is currently negotiating with the private land owner to acquire grazing rights on these lands. Yellowstone bison have limited access to the lands and conservation easement that federal agencies acquired north of the park.
In 1998 and 1999, as part of a larger conservation effort to provide habitat for a variety of wildlife species, protect geothermal resources, and improve recreational access, federal agencies spent nearly $13 million to acquire 5,263 acres and a conservation easement on 1,508 acres of private lands north of the park's border--lands towards which bison frequently attempt to migrate in the winter. The conservation easement prohibits development, such as the construction of commercial facilities and roads, on the private land; cattle grazing rights were retained by the land owner. The Yellowstone bison's access to these lands will remain limited until cattle no longer graze on the easement and certain other private lands in the area.
The refuge system comprises 538 refuges, 37 wetland management districts (an administrative system of thousands of Waterfowl Production Areas and conservation easements, primarily in the north central United States), and 50 coordination areas. The Fish and Wildlife Service (FWS) owns the surface lands and, in some cases, the mineral rights of National Wildlife Refuges and Waterfowl Production Areas, while conservation easements and coordination areas are owned or managed by others. Day-to-day management of wildlife refuges is the responsibility of local refuge managers, subject to the direction of seven regional refuge chiefs and the Chief of the National Wildlife Refuge System (see fig. 1 for a map of FWS regions). Of FWS’s nearly $1.3 billion budget in fiscal year 2002, about $319 million was devoted to the operations and maintenance of the refuge system. In fiscal year 2002, $99.13 million from the Land and Water Conservation Fund was used for the acquisition of additional refuge lands. Over the years, we and others have examined the effects on the refuge system of secondary activities, such as recreation, military activities, and oil and gas activities—which include oil and gas exploration, drilling and production, and transport. Exploring for oil and gas involves seismic mapping of the subsurface topography. Seismic mapping, regardless of the technology employed, requires surface disturbance, often involving small dynamite charges placed in a series of holes, typically in patterned grids. If seismic mapping reveals potential oil or gas deposits, exploratory drilling begins. Oil and gas drilling and production often require constructing, operating, and maintaining industrial infrastructure, including a network of access roads and canals, local pipelines to connect well sites to production facilities and dispose of drilling wastes, and gravel pads to house the drilling and other equipment. In addition, production may require storage tanks, separating facilities, and gas compressors. Finally, transporting oil and gas to production facilities or to users requires transit pipelines. Typically buried, these pipelines range in size, with some as large as 30 inches in diameter. Pumping stations and storage tanks may also be needed for pipeline operations. Under the National Wildlife Refuge System Administration Act of 1966, as amended, FWS is responsible for regulating all activities on refuges. The act requires FWS to determine the compatibility of activities with the purposes of the particular refuge and the mission of the refuge system and not allow those activities deemed incompatible. However, FWS does not apply the compatibility requirement to the exercise of private mineral rights on refuges. Department of the Interior regulations also prohibit leasing federal minerals underlying refuges outside of Alaska, except in cases where federal minerals are being drained by operations on property adjacent to the refuge. Nevertheless, the activities of private mineral owners on refuges are subject to a variety of legal restrictions, including FWS regulations. A variety of federal laws affect how private mineral rights owners conduct their activities.
For example, the Endangered Species Act of 1973 prohibits the “take” of any endangered or threatened species and provides for penalties for violations of the act; the Migratory Bird Treaty Act prohibits killing, hunting, possessing, or selling migratory birds, except in accordance with a permit; and the Clean Water Act prohibits discharging oil or other toxic substances into waters of the United States and imposes liability for removal costs and damages resulting from a discharge. Also, FWS regulations require that oil and gas activities be performed in a way that minimizes the risk of damage to the land and wildlife and the disturbance to the operation of the refuge. The regulations also require that land affected be reclaimed after operations have ceased. Whether FWS has authority to impose permitting requirements on private oil and gas activities is discussed later in this report. At least 155 of the 575 refuges of the National Wildlife Refuge System have some past or present oil and gas activities—exploration, drilling and production, or transit pipelines. Many of these activities are concentrated around the Gulf Coast of Louisiana and Texas. We found that oil and gas exploration has occurred at 44 refuges since 1994. We also determined that there are 4,406 wells on 105 refuges, though only 41 percent of these wells, located on 36 refuges, are active; the other wells are either plugged and abandoned or temporarily idle. Active wells on refuge lands produce roughly 1.1 percent and 0.4 percent of domestically produced oil and gas from onshore wells, with an approximate value of $880 million based on 2001 prices. In addition, active oil and gas transmission pipelines cross at least 107 refuges. Bordering refuges, another 4,795 wells reside within one-half mile outside refuge boundaries, in some cases on lands that FWS may acquire in the future. About one-quarter, or 155, of the 575 refuges (538 refuges and 37 wetland management districts) that constitute the National Wildlife Refuge System have past or present oil and gas activities—exploration, drilling and production, transit pipelines, or some combination of these (see table 1). Since 1994, FWS records show that 44 refuges have had some type of oil and gas exploration activities—geologic study, survey, or seismic work. More than one-half of these exploratory activities occurred in the southeastern and southwestern regions of the United States. We also identified 105 refuges with inactive or active oil and gas wells and 107 refuges with transit pipelines. Exploration or drilling and production activities occurred at 120 of the 155 refuges. In total, we identified 4,406 oil and gas wells within 105 refuges. The number of wells per refuge ranged from 1 dry hole drilled at Willapa Bay National Wildlife Refuge (NWR) in Washington to 1,120 wells at Upper Ouachita NWR in Louisiana. Although refuges with oil and gas wells are present in every FWS region, they are more heavily concentrated in the Gulf Coast of the United States (see fig. 2). More than one-half of the wells (2,512) are located on refuges in FWS Region 4, and a majority of these are in Louisiana. Wells are also concentrated among a minority of the system’s units. For example, five refuges contain 57 percent of all the wells in the system, as shown in table 2. About 4 out of 10 wells located on refuges are actively producing.
Of the 4,406 wells, 1,806, or 41 percent, were known to be actively producing oil or gas or disposing of produced water during the most recent reporting period as of January 2003. Of the 105 refuges with oil and gas wells, 36 refuges have actively producing wells. The remaining 2,600 wells did not produce oil, gas, or water during the last 12 months; many of these were plugged and abandoned or were dry holes. Gas wells were the most common type of well as indicated in table 3. Active wells on refuge lands produced a total of 23.7 million barrels of oil and 88,171 million cubic feet of natural gas during the most recent 12 months as of January 2003—about 1.1 percent of the 2.117 billion barrels of oil and 0.4 percent of the 24,532,514 million cubic feet of natural gas produced during 2001 (see table 4). The 1,806 active oil and gas wells on refuge lands were roughly 1 percent of the approximately 148,750 active onshore oil and gas wells in the United States in 2001. The value of all refuge-based production, based on 2001 average prices, was over $880 million. However, in addition to levels of production and oil and gas prices, the net benefit of oil and gas activities depends on a number of factors, including the size of the investment in infrastructure and any adverse effects on the environment, recreation, and tourism. At least 273 miles of transit pipeline from 49 different oil and gas pipelines cross 28 of the 138 refuges for which data are available. These pipelines are almost exclusively buried and generally require right-of-way permits from FWS. The pipelines vary in size, up to 30 inches in diameter, and carry a variety of products, including crude oil, refined petroleum products, and high-pressure natural gas (see table 5). While pipelines cannot be constructed across refuge lands unless FWS determines that the pipelines are compatible with the purposes of the refuge and issues a right-of-way permit, some pipelines were constructed before FWS acquired the property. These pipelines did not undergo a compatibility determination and may not have received a right-of-way permit. Transit pipelines may also have associated storage facilities and pumping stations, such as those we toured at Delta NWR in Louisiana (see fig. 3), but data are not available to identify how many of these are on refuges. A total of 4,795 wells and 84 transit pipelines reside just outside refuges, within one-half mile of refuge boundaries. The 4,795 wells bound 123 refuges, 33 of which do not have any resident oil and gas wells. The 84 pipelines are 186 miles long and border 42 different refuges. While FWS does not own the land outside refuge boundaries, lands surrounding refuges may be designated for future acquisition. For example, at Deep Fork NWR in Oklahoma, 606 wells are within one-half mile outside current boundaries, and some of this land is within approved boundaries for future acquisition (see fig. 4). The overall environmental effects of oil and gas activities on refuge resources are unknown because FWS has conducted few cumulative assessments and has no comprehensive data. Available information indicates that refuge wildlife and habitat have been harmed to varying degrees by spills of oil, gas, brine, and industrial materials as well as through the construction, operation, and maintenance of the infrastructure necessary to produce oil and gas. Routine oil and gas activities can contaminate a refuge and reduce the quantity and quality of habitat available for wildlife.
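The production shares reported in this section follow directly from the figures cited above; as a quick check of the arithmetic (the natural gas ratio is about 0.36 percent, which rounds to the reported 0.4 percent):

\[
\frac{1{,}806}{4{,}406} \approx 41\%, \qquad
\frac{23.7\ \text{million bbl}}{2{,}117\ \text{million bbl}} \approx 1.1\%, \qquad
\frac{88{,}171\ \text{MMcf}}{24{,}532{,}514\ \text{MMcf}} \approx 0.4\%
\]

The figure of over $880 million additionally depends on the 2001 average oil and gas prices used in the underlying analysis, which are not restated here.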
Over the years, new environmental laws and improved industry practices and technology have reduced some of the most detrimental effects of oil and gas activities; however, some harm to refuges continues to occur and some effects from earlier events have not been reversed and continue to diminish refuge resources. In addition, oil and gas operators have taken steps, in some cases voluntarily, to reverse damages resulting from oil and gas activities, but they have not done so consistently, and the adequacy of their efforts is not known. FWS does not have an accurate record of the number of spills on refuges and has conducted few studies on the effects of refuge-based oil and gas activities and, therefore, does not know the full extent of these problems or the steps needed to reverse them. Available studies, anecdotal information, and our observations show that some refuge resources have been diminished to varying degrees by spills of oil, gas, and brine and through the construction, operation, and maintenance of the infrastructure necessary to extract oil and gas. The damage varies widely in severity, duration, and visibility, ranging from infrequent small oil spills and industrial debris with no known effect on wildlife, to large and chronic spills causing wildlife deaths and long-term soil and water contamination. Some damage, such as habitat loss because of infrastructure development and soil and water contamination, may last indefinitely while other damage, such as wildlife disturbance during seismic mapping, is of shorter duration. Also, while certain types of damage are readily visible, others, such as groundwater contamination and reduced habitat quality from infrastructure development, are difficult to observe, quantify, and associate directly with oil and gas activities. Finally, oil and gas activities may hinder FWS’s ability to manage or improve refuge habitat, such as seasonal flooding of wetlands or prescribed burns, or hinder public access to parts of the refuge. Spills of oil, gas, and brine have harmed refuge wildlife and habitat. Oil and gas can injure or kill wildlife by destroying the insulating capacity of feathers and fur, depleting oxygen available in water, or exposing wildlife to toxic substances. Long-term effects of oil and gas contamination are difficult to determine, but studies suggest that effects of exposure include reduced fertility, kidney and liver damage, immune suppression, and cancer. Even small spills may contaminate soil and sediments if they occur frequently. For instance, a study of Atchafalaya and Delta NWRs in Louisiana found that levels of oil contamination near oil and gas facilities are lethal to most species of wildlife, even though refuge staff were not aware of any large spills. Figure 5 shows an ongoing cleanup of a relatively small oil spill that occurred at Delta NWR in 2002. Brine spills can also be lethal to young waterfowl, damage birds’ feathers, kill vegetation, and decrease nutrients in water. Based on well data from Premier Data Services, over 19.8 million gallons of brine were produced from active wells on NWRs during the most recent 12-month reporting period as of January 2003. Much of this brine was reinjected back into the ground to prevent surface damage. The 16 refuges we visited reported oil, gas, or brine spills, although the frequency and effect of the spills varied widely.
For instance, Hopper Mountain NWR in California reported two oil spills in 1990, the only spills since 1974, and refuge records indicated that the operator cleaned up each spill quickly and that refuge staff detected no effect on wildlife. In contrast, Anahuac NWR in Texas reported at least 7 oil spills since 1991, including 1 pipeline spill that killed over 800 large fish such as mullet and redfish and over 180,000 menhaden, a small but ecologically important fish. FWS officials said that natural gas leaks generally pose a lower risk to habitat than oil spills, but a gas leak in 2000 at Sabine NWR in Louisiana killed fish, crabs, and amphibians. Brine spills have also damaged refuges. For example, Atchafalaya and D’Arbonne NWRs in Louisiana reported that brine spills had killed vegetation in the area of the spill. At these refuges, salt concentrations in the soil have remained high and continued to spread for decades after a spill, and some sites do not support vegetation years afterwards. The exact number and size of oil and gas spills on NWRs are not known. Nationally, FWS reported that 348 oil and gas spills were located on or near refuges during fiscal year 2002, although there are limitations to this figure. First, it includes spills resulting from activities not associated with oil and gas production or transit pipelines, such as shipping accidents. Second, FWS calculated the number of spills by reviewing spill reports from the National Response Center and other parties that did not always identify whether a refuge was affected. Third, not all spills are required to be reported. Clean Water Act regulations require operators to report spills of any quantity if they cause a sheen to form on waters subject to federal jurisdiction. Other spills are subject to state reporting requirements, which vary. For instance, Texas requires operators to report spills over 210 gallons, while Louisiana requires operators to report spills over 42 gallons. Finally, refuge staff told us that they knew of spills that operators never reported. Constructing, operating, and maintaining the infrastructure necessary to produce oil and gas can harm wildlife by reducing the quantity and quality of habitat. At Kenai NWR in Alaska, for instance, oil and gas wells and associated facilities have eliminated at least 524 acres of habitat, while other infrastructure, such as access roads and pipelines, has eliminated an additional 424 acres. While this loss of habitat represents a very small proportion of total refuge acreage, refuge staff determined that it eliminated food sources that would have supported between 41 and 136 cow moose and 411 snowshoe hares. In other instances, habitat lost to infrastructure development is negligible—for example, the presence of a wellhead or pipelines, such as the wellhead at Delta NWR shown in figure 6. Infrastructure development can reduce the quality of habitat by fragmenting it and, in some cases, by changing the hydrology of the refuge ecosystem or contaminating it with toxic substances. Habitat fragmentation occurs when a network of roads, canals, and other infrastructure is constructed in previously undeveloped areas of a refuge. Fragmentation increases disturbances from human activities, provides pathways for predators, and helps spread nonnative plant species. For example, the endangered California condor is particularly susceptible to disturbances from human activities.
Condors have been observed landing on oil pads on the refuge, which poses a safety risk to the birds and reduces their fear of humans. In addition, FWS estimated in 1980 that oil and gas activities at Hopper Mountain NWR eliminated about 63 percent of the potential feeding habitat for condors on the refuge. The current refuge manager said that the effect of this loss on the condor population may not be significant because the importance of the feeding habitat provided by the refuge may not be as great as previously thought. Corridors that oil and gas operators have developed assist predation (on songbirds, for example) and provide a pathway for invasive species, a significant management problem for FWS. Finally, officials at Anahuac and McFaddin NWRs in Texas said that disturbances from oil and gas activities are likely significant and expressed concern that bird nesting may be disrupted. However, no studies have been conducted at these refuges to determine the effect of these disturbances. Infrastructure networks can also damage refuge habitat by changing the hydrology of the refuge ecosystem, particularly in coastal areas. For instance, tens of thousands of acres of freshwater marsh at Sabine NWR, and elsewhere in Louisiana and Texas, have been lost due to saltwater intrusion. Saltwater intrusion may change the types of plants in the marsh and can cause erosion that creates an open water habitat that is less biologically productive than the marsh. While several factors contribute to the saltwater intrusion, construction of canals to access oil and gas facilities is considered by many scientists to be significant. Seismic studies for oil and gas exploration in coastal marshes can also contribute to saltwater intrusion. Seismic studies are typically conducted in a grid pattern and may cover large portions of a refuge. Preparing and conducting seismic studies may require heavy equipment that can compress the marsh, which changes the plant community and could allow saltwater to intrude into the marsh, particularly during droughts that decrease freshwater flows. At McFaddin NWR, the grid pattern from a 1995 seismic study was clearly visible from infrared aerial photographs taken after the seismic study was completed (see fig. 7). Moreover, industrial activities associated with extracting oil and gas have been found to contaminate wildlife refuges with toxic substances such as mercury and polychlorinated biphenyls (PCBs). D’Arbonne, Kenai, and Upper Ouachita (Louisiana) NWRs reported mercury contamination, and Kenai NWR reported PCB contamination from oil and gas activities that must still be cleaned up by FWS if the responsible parties cannot be found. Mercury and PCBs were used in equipment such as compressors, transformers, and well production meters, although generally they are no longer used. Mercury has been linked to brain, kidney, and reproductive system damage, and PCBs are known animal carcinogens. New laws prohibiting some of the most harmful industry practices have helped diminish the adverse effect of current and recent oil and gas activities on refuge resources. For example, Louisiana now generally prohibits using open pits to store production wastes and brine in coastal areas or discharging brine into drainages or state waters. Another example is Texas, which requires operators to install screens or nets over open tanks and pits to protect birds from contacting hazardous fluids.
Texas also now requires operators to remove oil and gas infrastructure, such as tanks, that will not be actively used in the continuing operation of a lease and to contour closed sites to reduce water contamination. Improvements in industry practice, including improved technology, have also reduced the damage caused by oil and gas activities. For example, where feasible, directional drilling allows (1) operators to avoid placing wells in sensitive areas such as wetlands and (2) several wells to be drilled from the same pad, thus reducing the amount of habitat damaged. Another example is improved geologic mapping through 3-D seismic technology. While 3-D seismic studies require more vehicle traffic and may damage more vegetation than 2-D studies, improved geologic mapping may reduce the number of wells drilled that do not produce oil or gas and ultimately reduce the amount of habitat damaged. Furthermore, the impact of 3-D seismic studies has been reduced through other improvements, including using vehicles less damaging to the surface, reducing the number of vehicle trips necessary, hand carrying seismic lines to avoid vehicle damage altogether, and scheduling seismic operations to avoid sensitive times. While the relative impacts of the activities have been reduced in recent years, the effects have not been eliminated. For instance, oil and gas infrastructure continues to diminish availability of refuge habitat for wildlife, and spills of oil, gas, and brine that damage fish and wildlife continue to occur. In addition, several refuge managers reported that operators do not always comply with legal requirements or follow best industry practices such as constructing berms (earthen barriers) around tanks to contain spills, covering tanks to protect wildlife, and removing pits that temporarily store fluids used during well maintenance. Environmental damage from oil and gas activities may be partially reversed by remediating contamination or by reclaiming a site to its prior condition after oil and gas activities cease. However, oil and gas operators have not consistently taken steps to reverse environmental damages that have occurred from oil and gas activities on NWRs. In some cases, officials do not know if remediation following spills is sufficient to protect refuge resources, particularly for smaller oil spills or spills into wetlands. In other cases, FWS has been satisfied with the response. According to refuge officials and industry representatives, when small oil spills occur, operators may contain the oil and then remove the oil and the contaminated soil, but in some cases operators leave the oil and cover it with dirt. In contrast, the effects of larger spills may be evaluated systematically and remediated by the operator. For example, in 2000, a ruptured pipeline spilled nearly 200,000 gallons of crude oil at John Heinz NWR in Pennsylvania, damaging several species of wildlife and covering a frozen pond. In response, the operator removed the oil and the contaminated soil, replanted damaged vegetation, funded scientific studies to determine the effect on refuge wildlife, and compensated the refuge for the value lost to visitors during the spill; the operator also is negotiating with FWS to identify an appropriate restoration project to compensate for the ecological value of refuge resources lost while the refuge recovers from the spill. Similar to spill remediation, reclamation of oil and gas facilities following their use is also inconsistent.
For instance, an operator at McFaddin NWR removed a road and a well pad that had been constructed to access a new well site and restored the marsh damaged by construction after the well was no longer needed. Figure 8 provides an aerial view of the road and the well pad shortly after they were constructed and a photo of the same site following reclamation. Other refuges, however, reported that storage tanks, debris, and access roads remained long after use (see fig. 9). Refuge staff cited several reasons for some sites not being reclaimed, including difficulty identifying the responsible parties, operator insolvency, potential future use because other locations in the same field remained in operation, and uncertainty of their authority to require operators to reclaim sites. Finally, several states do not require operators to reverse the effects of oil and gas activities. For instance, Texas law does not require operators to remove all buried flowlines or access roads. Several states, such as Oklahoma and Texas, have established programs to clean up abandoned oil and gas sites, but funds are limited. Because operators do not consistently or entirely reverse environmental damages resulting from oil and gas activities, FWS has had to clean up sites at its expense or leave sites unreclaimed. FWS spent $387,100 to clean up 14 oil- or gas-related sites between fiscal years 1991 and 2002 and is planning to spend an additional $108,000 at 3 sites in fiscal year 2003. These cleanup projects included removing oil- and gas-related debris, plugging unused gas wells, and addressing mercury contamination at 9 refuges in Arkansas and Louisiana. Other sites remain to be addressed. There are 2,600 inactive wells on refuges, including an unknown number that have been abandoned but not plugged, and some sites also have unused tanks, flowlines, and debris that should be removed. The estimated cost of cleanup at one site at Anahuac NWR is $1.1 million, and that work currently is deferred until fiscal year 2009. Refuge managers at some refuges we visited expressed concern that as oil and gas production declines, operators will abandon more infrastructure and FWS will have to reclaim these sites. FWS has conducted few studies to quantify the extent of the damage caused by oil and gas activities. FWS identifies and assesses contaminant threats to refuges by conducting Contaminant Assessment Process (CAP) studies and other studies of contamination. Although CAP studies are FWS’s primary formal mechanism for identifying potential sources of contaminants on refuges, the studies do not quantify the extent of any contamination or its biological effects. Moreover, CAP studies have not been conducted at all refuges with oil and gas activities, including many refuges that have significant activities. FWS established the CAP process in 1996, and to date studies have been completed at about 193 refuges (about 34 percent of all refuges), including 67 of the 155 refuges (43 percent) with oil and gas activities. The number of refuges with oil and gas activities that have completed CAP studies varies by region. For instance, in Region 2, which includes Texas, 20 of 28 refuges (71 percent) had completed CAP studies, while in Region 4, which includes Louisiana, 11 of 45 (24 percent) had completed CAP studies.
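These completion rates are consistent with the counts cited above; as a check of the arithmetic:

\[
\frac{193}{575} \approx 34\%, \qquad
\frac{67}{155} \approx 43\%, \qquad
\frac{20}{28} \approx 71\%, \qquad
\frac{11}{45} \approx 24\%
\]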
The national coordinator for CAP said that the studies are sequenced to coincide with each refuge’s comprehensive conservation planning process, which, in turn, is prioritized within each region based on factors including primary threats, staffing levels, and funding. Finally, the comprehensiveness of the studies varies widely. The CAP for Kenai NWR lists over 330 known spills and describes other potential contamination sources from oil and gas activities. In contrast, the CAP study for Deep Fork NWR did not list oil and gas activities as a potential source of contamination, even though there are over 360 wells on the refuge and the refuge’s comprehensive conservation plan previously identified concerns over oil and gas activities, including unplugged wells. The CAP program manager stated that, in this case, FWS staff did not follow the procedures established in the CAP manual, which requires that all potential sources of contamination be identified. If contaminants are identified at a refuge, FWS may conduct additional studies through its contaminants program. Since 1988, FWS has funded at least 33 studies at 47 national wildlife refuges nationwide that have examined the effects of oil and gas activities. The scope of the studies ranged from general investigations to document the presence and concentration of a variety of contaminants, including those associated with oil and gas activities, to specific studies to examine the impact of oil and gas activities on particular refuges. In some cases, contamination concerns identified in a general investigation may lead to a more detailed study. For instance, a contaminants survey at Hagerman NWR identified contaminants from oil and gas activities, but the survey was insufficient to determine the effects on fish and wildlife. A later study determined that brine and oil contaminant levels did not appear to be of concern. In addition to conducting its own studies, FWS uses studies conducted by other government agencies and universities, in some cases at its request. For instance, the U.S. Geological Survey is studying the effects of a 3-D seismic study at Sabine NWR to determine the long-term effects of seismic activities on refuge plant species, and Drexel University is studying the impact of an oil spill on wildlife at John Heinz NWR, including any effects on a rare turtle species. The lack of information on the effects of oil and gas activities on refuge wildlife hinders FWS’s ability to identify and obtain appropriate mitigation measures and to require responsible parties to address damages from past activities. For instance, the Chief, Division of Environmental Quality, stated that FWS does not always know the effects of oil and gas activities on wildlife or habitat and, therefore, does not know what actions should be required of operators to reduce those effects. Lack of sufficient information has also hindered FWS’s efforts to identify all locations with past oil and gas activities and to require responsible parties to address damages. FWS does not know the number or location of all abandoned wells and other oil and gas infrastructure or the threat of contamination they pose and, therefore, its ability to require responsible parties to address damages is limited. 
While recognizing the value of this type of information, the Chief, Division of Environmental Quality, said that in some cases FWS lacked the budget to fund environmental studies and that, in other cases, the cost of obtaining the information was disproportionate to its management value. In those cases where FWS has performed studies, the information has proved valuable. For example, FWS funded a study at some refuges in Oklahoma and Texas to inventory locations containing oil and gas infrastructure, to determine if they were closed legally, and to document their present condition. FWS intends to use this information to identify cleanup options with state and federal regulators. If this effort is successful, FWS may conduct similar studies on other refuges. In other cases, refuges have requested studies that have not been funded. For instance, proposals to examine the effects of oil and gas activities on a wetland management district in Montana and to identify unknown oil and gas locations at Kenai NWR have not been approved, in part, due to lack of funds. In the case of Kenai NWR, refuge staff said that current operators may be responsible for cleaning up historic sites but that FWS had to identify the sites before it could make this determination. FWS’s management and oversight of oil and gas activities varies widely from refuge to refuge. Effectively managing these activities across the refuge system would entail, at a minimum, identifying the risks posed by the activities, establishing operating conditions to minimize damages, and monitoring the activities with trained staff to ensure compliance. While some refuges have adopted comprehensive management and oversight practices, others have done little. Variation in refuges’ management and oversight of oil and gas activities stems from differences in FWS’s regulatory authority depending upon the nature of the mineral rights and from inadequate guidance, resources, and training for refuge staff. In addition, on a related management issue, FWS’s policy requiring a complete and thorough assessment of potentially contaminated property prior to acquisition is not always adhered to because of inconsistent interpretation of the requirements by FWS, placing the federal government at risk of assuming unknown cleanup costs in the future. FWS’s objective in managing oil and gas on refuge lands is to protect wildlife habitat and other resources while allowing oil and gas operators to exercise their mineral rights. Meeting this objective requires basic management controls. Under the Federal Managers’ Financial Integrity Act of 1982, we have issued management control standards that apply to all federal agencies. These standards require agencies to identify risks, develop procedures to protect against these risks, and monitor adherence to the procedures. For refuges, doing so would mean identifying the nature and extent of oil and gas activities on a refuge and the risks they pose to refuge resources, adopting risk-reduction procedures such as issuing access permits with conditions to protect refuge resources and securing financial assurance that reclamation will occur, and overseeing oil and gas operations with trained and dedicated staff to ensure compliance with laws and permits. The refuges we examined varied in the extent to which they identified risks, adopted procedures to minimize those risks, and monitored oil and gas activities.
First, some refuge staff did not have complete information on the extent of oil and gas activities occurring on their refuges. For example, at Deep Fork NWR refuge staff estimated that there were 600 or more abandoned wells but knew the location of very few of these wells. Further, as noted earlier, only 67 of the 155 refuges with oil and gas activities and 10 of the 16 refuges we visited (see table 6) had completed CAP studies identifying the possible sources and types of contamination on the refuges. In contrast, at Kenai NWR refuge staff had detailed information on oil and gas wells and activities on the refuge, had completed an exhaustive CAP study, and were completing an Environmental Impact Statement on the effects of oil and gas activities. Second, permits, which grant oil and gas operators access to specified areas of a refuge and contain conditions, such as seasonal or vehicle restrictions, to protect air quality, soil, water, and wildlife habitat, were applied to varying degrees at 11 of the 16 refuges we visited. FWS can require permits if the mineral rights are federally owned, if the property deed allows it, or if the operator voluntarily agrees to one. In the other five cases, refuge staff did not believe they had authority to require permits. In addition, five refuges have obtained financial assurance in the form of bonds for the future costs of reclamation or rely on bonds administered by another federal agency. The other 11 refuges rely instead on state bonds, which are allowed under FWS guidance but may provide different degrees of financial assurance than federal bonds. For example, the bonds in some states may or may not cover damages caused by oil and gas activities if the effects are considered to be reasonable impacts to the land. Reasonable impacts are not consistently defined among states because impacts to property are determined by what is usual and customary practice in the area. Finally, we found little correlation between the scale of oil and gas activities on refuges and the presence of dedicated staff to oversee them. Two of the refuges we visited have a fully dedicated staff person to oversee oil and gas operators—two of the only three in the entire refuge system. These two refuges in Louisiana collect fees from operators to help pay for these staff. In contrast, refuges with greater levels of activity do not have dedicated staff. FWS’s legal authority to require oil and gas operators to obtain permits varies considerably, depending upon the nature of the mineral rights. Permits granting access to specified areas of a refuge can be used to establish reasonable operating conditions for private mineral owners to exercise their rights while protecting refuge resources. Variation in authority to require such permits, and the uncertainty that this sometimes creates among refuge staff, partly accounts for differences in management and oversight we found at refuges. At one end of the spectrum, FWS has broad authority to deny or regulate access to oil and gas on wildlife refuges when the federal government owns the mineral rights. Under Department of the Interior regulations, access to federal mineral rights underlying refuges requires the approval of the Secretary of the Interior with the concurrence of FWS as to the time, place, and nature of the activities.
These regulations also prohibit leasing of federal minerals on refuges outside of Alaska, except in cases where federal minerals are being drained by operations on property adjacent to the refuges. In contrast, FWS’s authority is not nearly as broad or as clear with respect to private owners of mineral rights. FWS’s authority to require permits from private mineral owners depends on the nature of the private rights and, in some cases, whether the property deed contains specific language. Private mineral rights may be either “reserved” or “outstanding.” Reserved rights are created when the property owner retains the mineral rights at the time that the surface property is transferred to the federal government. Outstanding rights are created when the mineral rights are severed from the surface lands prior to the surface property’s transfer to the federal government and, thus, a third party owns the rights. FWS’s authority to regulate oil and gas activities of private owners of reserved mineral rights is limited under current law. The Department of the Interior takes the position, with which we agree, that FWS can require permits for reserved rights only if the deed transferring surface ownership to the federal government contains language that subjects these rights to permitting requirements. The department’s position was first expressed in a 1986 opinion by the Office of the Solicitor, which, that office recently advised us, continues to reflect the department’s position. The department’s position is largely based on a section of the Migratory Bird Conservation Act that makes reserved rights subject to government regulation if the deed includes specific requirements, such as permitting requirements, or states that the rights are subject to regulations prescribed by the Department “from time to time.” Any expansion of FWS’s authority over the owners of reserved mineral rights, to include cases in which deeds do not contain such provisions, would thus require a change in the law. By contrast, it does not appear that the Department of the Interior has taken a formal position, and the Solicitor’s Office recently declined to take a position, regarding FWS’s authority to require a permit for private owners of outstanding mineral rights. The Solicitor’s Office advised us that it would only provide an opinion on FWS’s authority over outstanding mineral rights if FWS requested one. Nonetheless, we believe that FWS has broad general authority, similar to that of the Forest Service and the National Park Service, to require owners exercising outstanding mineral rights to obtain permits that contain conditions to protect a refuge and its wildlife. Both amendments to the National Wildlife Refuge System Administration Act of 1966 (1966 Act) and court decisions since the department issued its 1986 opinion support this conclusion. The National Wildlife Refuge System Improvement Act of 1997 (1997 Act) amended the 1966 Act to provide for a more effective process for determining which secondary uses would be compatible with refuges and to allow refuges to be managed more like national forests and parks.
The 1997 Act established as a mission of the National Wildlife Refuge System “conservation, management, and where appropriate, restoration of [fish and wildlife] for the benefit of present and future generations of Americans.” In separate cases involving the Forest Service and the National Park Service, federal courts relied on language similar to that in the 1997 Act to find that these agencies had authority to require private owners of outstanding mineral rights to obtain permits before conducting oil and gas activities. We believe the same conclusion follows with respect to FWS’s authority. As a result of these differences in legal authority, there is a considerable gap in FWS’s management and oversight of oil and gas activities, but neither FWS nor we know precisely how many refuges are affected. Because some refuges may consist of hundreds of individual deeds, it is not possible without considerable investigation to determine the relative prevalence of reserved and outstanding mineral rights or the extent to which property deeds allow FWS to require owners of reserved mineral rights to obtain a permit, according to FWS officials. FWS officials also said that differences in FWS’s authority to require permits do not provide for a consistent way of managing and overseeing oil and gas activities. Beyond its inconsistent or undefined authority to require permits and oversee oil and gas activities, FWS cannot improve its management and oversight of those activities without better guidance, resources, and training. According to refuge managers and officials in the Department of the Interior’s Office of the Solicitor, national guidance is insufficient for refuge staff to know what authority they have to manage oil and gas activities, or how to carry out that authority. To supplement the national guidance, three of FWS’s seven regions have developed more detailed guidance to assist in managing and overseeing oil and gas activities. For instance, while the national guidance describes only FWS’s authority to require permits, guidance in Regions 2 and 6 provides specific examples of conditions the refuge manager should include in a permit to protect refuge resources. Staff at Sabine NWR have also drafted, in conjunction with headquarters staff, more detailed national guidance on managing and overseeing oil and gas activities, including a detailed description of FWS’s authority to require permits and many specific conditions to include in permits. However, FWS has not approved this draft guidance. Refuge staff we interviewed also cited a lack of staff resources as an obstacle to properly managing oil and gas activities because staff do not have time to become familiar with federal and state laws or manage and oversee oil and gas operations. For example, when FWS purchased property for Deep Fork NWR, the property deed contained assurances that FWS would be able to issue permits governing private mineral rights, yet that information was never conveyed to refuge staff. To determine FWS’s permitting authority, refuge staff would have to research each individual property deed. Refuge staff said that they do not have time to do this research because they must address other management concerns, such as law enforcement. In contrast, Sabine NWR has a staff person dedicated to managing oil and gas activities.
As a result, this person has sufficient time to become familiar with applicable laws and to work with operators and state regulators to manage and oversee oil and gas activities to reduce their effects on the refuge. This oversight has encouraged the operator to identify and restore sites damaged by past oil and gas activities. Refuges that have access to their own funding mechanisms to recover damages are better able to manage and oversee oil and gas activities. It is standard industry practice for operators conducting seismic activities to pay exploratory fees to surface landowners. However, only refuges in Louisiana and Texas have authority to assess and retain such fees to cover potential damages caused by seismic activity. Refuges in Louisiana routinely collect these fees to aid management and oversight and fund restoration efforts, but Region 2 has retained existing policy preventing refuges in Texas from assessing these fees. To address this lack of consistency, FWS headquarters officials told us they are drafting guidance to clarify how these regions should apply their authority to collect and retain fees. One of the refuges that collects these fees is Sabine NWR, which uses these fees to fund a staff person specifically dedicated to the management and oversight of oil and gas activities and to fund mitigation projects to reduce the effect of oil and gas operations. Figure 10 shows a recent mitigation project, funded by oil and gas operators at Sabine NWR, that is designed to restore a marsh damaged by saltwater intrusion due in part to earlier oil and gas activities. Officials in the Department of the Interior’s Office of the Solicitor support the use of fees as a more efficient mechanism than litigation to compensate for damages. Trained staff are integral to effective oversight, yet refuge staff we met with said their principal duties and training as wildlife managers do not prepare them for managing oil and gas activities. FWS has offered only one workshop in the last 10 years for refuge staff nationwide that is specific to managing oil and gas activities on refuges. This 3-day workshop in June 2001, attended by 36 FWS officials, provided information on possible sources of spills, effects of oil on wildlife, enforcement avenues, and damage recovery; however, there was limited discussion of FWS’s regulatory authority. Refuge staff lack training on standard industry practices, state and federal laws, and identification of oil- and gas-related problems. For example, at Atchafalaya NWR, the refuge manager has not been able to enforce special use permits, citing a lack of training about applicable state and federal laws. FWS has not always thoroughly assessed property for possible contamination from oil and gas activities prior to its acquisition. The FWS manual requires a thorough investigation of potential contamination prior to acquisition of any property so that the full present and future costs of cleanup can be determined. However, some FWS regions have interpreted the guidance more narrowly than others. As a result, FWS has not always conducted a thorough investigation of properties to be acquired, resulting in unexpected future cleanup costs.
FWS’s guidance requires a complete environmental site assessment to determine “the likelihood of the presence of hazardous substances or other environmental problems associated with the property and any remediation or other clean up costs.” According to FWS contaminant and realty officials, a thorough investigation as required by the FWS manual would include an assessment of both the surface and subsurface properties for contamination. Some regions consistently conduct adequate assessments, while other regions’ investigations are not as thorough. For example, Region 6 assesses both the subsurface and surface properties for contamination, even when acquiring only the surface portion. In two cases, Region 6 did not acquire property, even when offered as a donation, because of subsurface contamination from oil and gas activities. In contrast, FWS Regions 2, 3, and 4 do not always thoroughly investigate all properties for contamination prior to acquisition; for example, they may not examine the subsurface soils for contamination or investigate further when there is some indication that contaminants are present. FWS realty officials told us that the acquisition guidance needs to be clarified and that the oversight of regional implementation needs to be improved to ensure that all new property is thoroughly investigated for contamination. In one instance, FWS acquired property that is contaminated from oil and gas activities and is now paying unexpected cleanup costs because staff did not conduct an adequate assessment of the subsurface property prior to acquisition. At the Patoka River NWR in Indiana (Region 3), during an acquisition, FWS staff conducted an initial contamination investigation, relied on a state certification of well closure as assurance that the land had been cleaned and closed, and did not investigate further, even though they were aware that the land had contained oil wells and an oil storage facility. After acquiring the property, FWS found that large amounts of soil were contaminated with oil. FWS has thus far spent $15,000, and a local conservation group has spent another $43,000, cleaning up contaminated soil. The National Wildlife Refuge System is a national asset established principally for the conservation of wildlife and habitat. While federally owned mineral rights underlying refuge lands are generally not available for oil and gas exploration and production, that prohibition does not extend to the many private parties that own mineral rights underlying refuge lands. The scale of these activities on refuges is such that some refuge resources have been diminished, although the extent is unknown without additional study. Some refuges have adopted practices—for example, developing data on the nature and extent of activities and their effects on the refuge, overseeing oil and gas operators, and training refuge staff to better carry out their management and oversight responsibilities—that limit the impact of these activities on refuge resources. If these practices were implemented throughout the agency, they could provide better assurance that environmental effects from oil and gas activities are minimized. In particular, in some cases, refuges have issued permits that establish operating conditions for oil and gas activities, giving the refuges greater control over these activities and protecting refuge resources before damage occurs.
However, FWS does not have a policy requiring owners of outstanding mineral rights to obtain a permit, although we believe FWS has this authority, and FWS can require owners of reserved mineral rights to obtain a permit if the property deed subjects the rights to such requirements. Expanding or confirming FWS's authority to require reasonable permit conditions and oversee oil and gas activities, including cases where mineral rights have been reserved and the property deed does not already subject the rights to permit requirements, would strengthen FWS's management and oversight and make it more consistent. Such a step could be done without infringing on the rights of private mineral owners. Finally, FWS's land acquisition guidance is unclear and oversight is inadequate, thereby exposing the federal government to unexpected cleanup costs for properties acquired without adequately assessing contamination from oil and gas activities.
To improve the framework for managing and overseeing oil and gas activities on national wildlife refuges, the Secretary of the Interior should direct the Director of the Fish and Wildlife Service to take the following steps:
Collect and maintain better data on the nature and extent of oil and gas activities and the effects of these activities on refuge resources.
Determine what level of staffing is necessary to adequately oversee oil and gas operators and seek necessary funding to meet those needs, through appropriations, the authority to assess fees, or other means.
Ensure that staff are adequately trained to oversee oil and gas activities.
Clarify guidance and better oversee FWS's land acquisition process so that all hazardous substances, environmental problems, and future cleanup costs are fully identified prior to acquisition and unexpected costs are avoided.
As part of the process of improving the framework for managing and overseeing oil and gas activities on national wildlife refuges, we further recommend that the Secretary of the Interior and the Director of the Fish and Wildlife Service work with the Department of the Interior's Office of the Solicitor to (1) determine FWS's existing authority to issue permits and set reasonable conditions regarding outstanding mineral rights, reporting the results of its determination to Congress, and (2) seek from Congress, in coordination with appropriate Administration officials, including those within the Executive Office of the President, any necessary additional authority over such rights, and over reserved mineral rights, so that FWS can apply a consistent and reasonable set of regulatory and management controls over all oil and gas activities occurring on national wildlife refuges to protect the public's surface interests. In light of the Department of the Interior's perceived limitation on its ability to seek expanded legislative authority over private mineral rights, Congress may wish to consider providing that authority. Ensuring that FWS has legal authority to issue permits to holders of both outstanding and reserved mineral rights would improve FWS's ability to consistently regulate and oversee oil and gas operations on wildlife refuges. We provided an opportunity for the Department of the Interior and U.S. Fish and Wildlife Service officials to review a draft of this report. The comments of the department as expressed by the Acting Assistant Secretary for Fish and Wildlife and Parks were mixed. The department agreed that FWS's acquisition policy and guidance should be improved.
However, the department was silent on our recommendations that FWS should collect and maintain better data on oil and gas activities and their effects and that it should ensure that staff are adequately trained to oversee oil and gas activities. We continue to believe these recommendations are warranted. The department did raise concerns regarding two of our recommendations. First, the department questioned whether hiring additional dedicated staff would be the most cost-effective solution to improving oversight. However, the department apparently misinterpreted our recommendation for FWS to determine what level of staffing is necessary to oversee these activities as a call to hire additional dedicated staff. If the department determines that there are more cost-effective means to ensure adequate staffing, such as the use of contractors or temporary staff, it could pursue those actions and be responsive to this recommendation. Second, while the department was silent on whether it would review FWS's authority to regulate surface access to refuges for owners of outstanding mineral rights, the department did raise concerns about GAO's recommendation that it seek additional authority from Congress to regulate reserved mineral rights. According to the department, it would be unconstitutional for it (as an executive branch department) to make such a request to Congress, because doing so would infringe upon the President's authority to recommend legislation to Congress under the U.S. Constitution's Recommendations Clause. We fully anticipated in making this draft recommendation that the department would coordinate its legislative proposals with the President. In order to make this explicit, we clarified the recommendation to recognize that the department should coordinate its legislative request to Congress through appropriate Administration officials, including those within the Executive Office of the President. Further, as a legal matter, while the Recommendations Clause explicitly provides for the President to make recommendations to Congress, it does not deny that same freedom to others. The courts have ruled that ". . . anyone can propose legislation." The department also disagreed with our characterization of lost condor habitat at Hopper Mountain NWR in California. The department asked that we cite the source for this characterization and include additional clarification and explanation of the effect of oil and gas activities on the condor reintroduction program at this refuge. FWS itself, in 1980, made the determination that 70 percent of critical condor habitat was lost due to oil and gas development at Hopper Mountain NWR. However, this calculation included both refuge and off-refuge lands. Considering only refuge lands, lost habitat totaled 63 percent, and the report has been revised accordingly. In an attachment to the letter, the Department of the Interior raised three additional concerns with our report. These involve our characterizations of FWS's land acquisition practices, our inclusion of oil and gas pipelines in the scope of the report, and the significance of problems associated with oil and gas activities. First, FWS concurred that its acquisition policy and guidance could be improved and that regional implementation has at times been inadequate. Nevertheless, FWS took exception to our citing problems we found at Patoka River NWR and with that region's adherence to established policy in conducting its site assessment.
However, our review clearly indicated that FWS failed to conduct the additional contamination investigation required by its policy for lands that FWS officials knew had supported oil and gas extraction and storage. As a result, FWS acquired lands that are contaminated and has incurred expenses to remediate that contamination. Second, the department's Office of the Solicitor raised a concern that including oil and gas pipelines as an oil and gas activity overstates the prevalence of oil and gas activities. We disagree; pipeline leaks have contributed to refuge contamination and affected refuge operations in other ways. We believe that inclusion of oil and gas pipelines on refuges is an important factor in assessing the overall scale of oil and gas activities on refuges. Nevertheless, we have added information to the report that allows readers to differentiate among the types of activities on refuges, including pipelines. Third, the department's Office of Policy Analysis expressed the view that our reporting of refuge-based oil and gas activities not previously known to FWS overstated the problem because we did not link these activities to "significant detrimental" effects. The department also suggested that any problems associated with oil and gas activities on refuges should be considered relative to other problems faced by these refuges. However, our report already states that FWS has not conducted a cumulative assessment of the effects of oil and gas activities on individual refuges or the refuge system as a whole. Identifying the presence of these activities should be the first step toward any such assessment. Comparing these impacts with other threats to refuges is outside the scope of this report. Finally, the department included a number of technical comments from FWS and various department offices that have been incorporated within the report as appropriate. The Department of the Interior's letter and our comments on the letter appear in appendix V. We conducted our work from June 2002 through March 2003 in accordance with generally accepted government auditing standards. Appendix IV contains details of our scope and methodology. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of the Interior and the Director of the U.S. Fish and Wildlife Service. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please call me at (202) 512-3841 or William Swick at (206) 287-4851. Key contributors to this report are listed in appendix VI.
The following appendix table entries describe the nature and extent of oil and gas activities, their environmental effects, and management and oversight practices at individual refuges. Feeding habitat for endangered California condors on refuge reduced by 63 percent. Minor soil contamination from oil spills. County issues conditional use permits and works closely with the Fish and Wildlife Service (FWS). Old and unused infrastructure and numerous unplugged wells. Brine spills have killed vegetation. Although the property deed stipulates that a special use permit and bond are required, the refuge does not require permits or bonds. Old and unused infrastructure and numerous unplugged wells. All oil and gas activities are permitted through the Army Corps of Engineers with FWS input. Oil spills have killed wildlife and brine spills have killed vegetation. Abandoned infrastructure, including flow lines and storage tanks, remains at the site. The refuge sometimes issues voluntary permits. It does not require operators to post bonds but, in one case, has collected fees for damage that exceeded the conditions of the special use permit. Soil and groundwater contamination from oil spills. Abandoned infrastructure remains at the site. The refuge issues voluntary special use permits with conditions to protect refuge resources. The refuge does not require voluntary use permits or bonds. Soil and water contamination from oil spills. Abandoned infrastructure remains at the site. Sediment contaminated by oil spills. Saltwater intrusion due to subsidence. Abandoned infrastructure remains at the site. The refuge issues special use and right-of-way permits with conditions imposed by FWS and collects mitigation fees. One staff member is dedicated to oversight activities. Brine spills have killed vegetation. Old and unused infrastructure, including storage tanks, remains at the site. Although the property deed requires a special use permit and an approved plan of operations, the refuge has not requested a plan of operations. In the past, the refuge has issued special use permits, but the current operator refuses to agree to their conditions. 59 wells (8 active); 4 production pads with storage (100 miles); and 40 active flow lines (50 miles). Pipeline spill caused wildlife fatalities and contamination. Habitat loss from saltwater intrusion and construction of roads, canals, and other facilities. Habitat fragmentation has contributed to an increased number of predators. The refuge collects fees from operators to fund a full-time oversight position. Voluntary permits are issued to manage operator activities. 139 wells (51 active); 1 storage and injection facility; 5 transit pipelines (75 miles) and numerous flow lines (199 miles). Soil and vegetation damage from brine spills and old disposal pits. Mercury contamination. Numerous abandoned wells remain at the site. The refuge does not issue permits for any of the gas activities and relies on operator cooperation. 1,120 wells (908 active); no production pads; 13 transmission lines (31 miles) and numerous flow lines (313 miles). Soil and vegetation damage from brine spills and old disposal pits. Mercury contamination. Numerous abandoned wells remain at the site. The refuge does not issue permits for any of the gas activities and relies on operator cooperation. Large pipeline spill resulting in wildlife deaths and soil and sediment contamination. The refuge issues permits for maintenance activities. Minor soil contamination from oil spills. The refuge staff have developed regional management policy and attach conditions to federal permits.
The refuge assesses a fee for seismic activities. Unknown soil contamination from oil spills. The refuge staff have developed regional management policy and attach conditions to federal permits. The refuge assesses a fee for seismic activities. Minor soil contamination from oil spills. The refuge staff have developed regional management policy and attach conditions to federal permits. The refuge assesses a fee for seismic activities. Soil and water contamination from numerous oil spills. Mercury and polychlorinated biphenyl contamination. Lost habitat from infrastructure development. The refuge issues right-of-way and special use permits and requires bonds.
The Fish and Wildlife Service's current authority to regulate, prospectively, the oil and gas activities of private owners of "reserved" and "outstanding" mineral rights
FWS's authority over owners of reserved mineral rights is limited by statute to those instances in which the deed transferring the land from the mineral rights owner to the federal government includes language either requiring permits or requiring compliance with regulations the Department of the Interior may adopt in the future, including permitting regulations. FWS's authority over owners of outstanding mineral rights is limited in the sense that FWS's regulations do not currently require permits. Two of FWS's sister land management agencies—the National Park Service and the United States Forest Service—have regulations that require outstanding mineral rights owners to obtain permits before engaging in oil and gas activities on federal lands they manage. FWS, on the other hand, has no such regulations. As discussed below, while it appears that the Department of the Interior has not taken a formal position on whether FWS has legal authority to promulgate such regulations, we conclude it has such authority under its statutes and related case law. Privately owned mineral rights within wildlife refuges may be "reserved" or "outstanding." Reserved mineral rights are those that were reserved by the owner when ownership of the surface land was transferred to the federal government. Outstanding mineral rights are those that were reserved before the surface was transferred to the federal government, and thus are owned by someone other than the party making the transfer to the government. The Department of the Interior believes, and we agree, that FWS has legal authority to require private owners of reserved mineral rights located within "acquired federal refuges" to obtain "entry permits" only in limited circumstances, in order to obtain access to the refuge for minerals exploration and removal. The department's position was originally set out in a 1986 legal opinion issued by the department's Office of the Solicitor (1986 Opinion), and the office recently advised us that the 1986 Opinion continues to reflect the department's position. The 1986 Opinion concluded that FWS generally lacks statutory or other authority to require entry permits for reserved rights owners and can do so only when the deed transferring the surface property to the federal government has included either specific permitting requirements or language subjecting the exercise of the reserved mineral rights to regulations promulgated by the department, including permitting regulations.
The department's position is based on language in the Migratory Bird Conservation Act that was added by amendment in 1935, making reserved rights subject to requirements specifically set out in the deed or, if the deed so states, to regulations prescribed "from time to time" by the Secretary of the Interior. If the deed does not contain such provisions, the exercise of the reserved rights cannot be subjected to permitting requirements. As the 1986 Opinion explains, prior to the 1935 amendment, the Migratory Bird Conservation Act had made all reserved rights subject to regulations that were prescribed by the department "from time to time." The House Report associated with the 1935 amendment explains that "some owners of very desirable tracts are unwilling to convey on such indefinite and uncertain terms as regulations made 'from time to time.'" The purpose of the change was to provide those who reserved rights in lands they transferred to the United States with some contractual certainty, and to protect them from being required to abide by permitting regulations that were not in effect when the deed was issued. The foregoing limits in the Migratory Bird Conservation Act on how the department may regulate reserved mineral rights do not apply to the department's regulation of outstanding mineral rights. A number of other legal authorities in related areas indicate, in our view, that FWS has statutory authority to regulate the exercise of outstanding mineral rights on federal lands. In Dunn McCampbell Royalty Interest, Inc. v. National Park Service, 964 F. Supp. 1125 (S.D. Tex. 1995), aff'd on other grounds, 112 F.3d 1283 (5th Cir. 1997), the court ruled that the National Park Service has authority to reasonably regulate private owners' access to their oil and gas interests located beneath park system lands, by requiring approval of a plan of operations before commencement of exploration or production activities. The court relied on language in the National Park Service Organic Act directing the Park Service to "protect and regulate" national parks so as to "conserve the scenery and the natural and historic objects and the wildlife therein and to provide for the enjoyment of the same in such manner and by such means as will leave them unimpaired for the enjoyment of future generations," as well as language directing the Department of the Interior to issue regulations "as . . . deem necessary or proper for the use of the parks . . . under the jurisdiction of the National Park Service." Similarly, in Duncan Energy Co. v. United States Forest Service, 50 F.3d 584 (8th Cir. 1995), the Eighth Circuit ruled that although the Forest Service may not completely deny access to private owners of mineral interests located within National Forest System lands, the Forest Service may impose reasonable conditions on the use of the federally owned surface and thus may require mineral owners to obtain approval before exploring for or developing minerals. The court relied on language in the Bankhead-Jones Farm Tenant Act that directs the Department of Agriculture (the Forest Service's parent agency) "to develop a program of land conservation and land utilization" and to issue regulations necessary to "regulate the use and occupancy of property acquired [for the National Forest System] in order to conserve and utilize it." The court also relied on the Forest Service's "special use" regulations providing that "[a]ll uses of National Forest System lands . . .
are designated 'special uses' [and must be approved by an] authorized officer." The statutes addressed in Dunn McCampbell and Duncan bear a number of similarities to the National Wildlife Refuge System Administration Act (Refuge System Administration Act), which governs the National Wildlife Refuge System. Notably, language added to the Refuge System Administration Act by the National Wildlife Refuge System Improvement Act of 1997 is very similar to the language of the National Park Service Organic Act relied upon by the Dunn McCampbell court. As amended in 1997, the Refuge System Administration Act now provides that the mission of the NWRS is to administer lands for the "conservation, management, and where appropriate, restoration of [fish, wildlife, and plant resources and their habitats] for the benefit of present and future generations of Americans" and directs the Secretary of the Interior to "ensure that the biological integrity, diversity, and environmental health of the System are maintained for the benefit of present and future generations of Americans." The Refuge System Administration Act also explicitly authorizes the Secretary of the Interior to issue regulations to carry out the act. Similarly, as in the statute relied on by the Duncan court regarding the Forest Service's permitting authority, the 1997 amendments to the Refuge System Administration Act added language directing the Secretary of the Interior to "provide for the conservation of fish, wildlife, and plants, and their habitats within the System." Thus, as with the statutes at issue in Dunn McCampbell and Duncan, the 1997 amendments to the Refuge System Administration Act authorize the Department of the Interior to manage the National Wildlife Refuge System with the same type of policy direction and management standards with which the Park System and the Forest System are managed, including issuance of permitting regulations. The legislative history of the Refuge System Administration Act confirms Congress's concern for ecosystem and fish and wildlife conservation and for ensuring that uses of the refuges are compatible with their purposes. Although neither the Administration Act's 1997 amendments nor their legislative history specifically refers to regulation of the activities of private oil and gas operators, the overriding purpose of the amendments—providing better management to protect the refuges—together with the reasoning of the courts addressing similar statutes in Dunn McCampbell and Duncan indicates that FWS has current authority to require private owners of outstanding mineral rights to obtain permits before conducting oil and gas operations.
To identify the nature and extent of oil and gas activities within the National Wildlife Refuge System, we relied on several sources of information. We began with our 2001 report, which identified 77 units with oil and gas activities based on the Fish and Wildlife Service's reported activities in the year 2000. We used the same information source, FWS's Refuge Management Information System (RMIS), and reviewed exploration, production, and pipeline activities for the years 1994-2001. This information is self-reported by refuges and, by FWS officials' admission, incomplete. In addition, RMIS does not indicate the scale of activities present on a refuge—for example, whether there is one well or hundreds of wells. Therefore, we contracted Premier Data Services of Englewood, Colorado, to provide more accurate and comprehensive data on the extent and type of oil and gas activities occurring on refuges.
Premier maintains a national database of oil and gas wells collected from well permit data compiled by each state's oil and gas regulators. Premier recently contributed to a study for the Departments of Interior, Agriculture, and Energy under the Energy Policy and Conservation Act, providing a comprehensive review of oil and gas resources and constraints on their development in five basins in the interior West. To determine the number of wells located on FWS lands, Premier compared a county-by-county listing of wells against a list of counties with refuge system lands provided by FWS. For those refuges in counties with at least one well, Premier either obtained digital maps of the refuges' current land status from FWS or, in those cases where FWS had not digitized the refuge boundaries, converted paper maps into digital format. Premier then overlaid the geographic plots of wells nationwide with the digitized maps to identify wells within refuge boundaries and to identify wells within ½ mile outside the boundaries. (See fig. 11 for a sample plot of the Butte Sink Wildlife Management Area.) In addition to obtaining information on the location of oil and gas wells, we also obtained information on the status, type, and amount of production of oil, gas, and water (brine) from each well. We eliminated from the database permitted wells that were never drilled; we categorized wells with any production in the most recent reporting period as active and all other wells as inactive. To identify pipelines transiting refuge lands, we relied on the National Pipeline Mapping System (NPMS), which is maintained by the Office of Pipeline Safety in the Department of Transportation, and on FWS's RMIS. We overlaid the NPMS data on the 138 refuges for which we had digital refuge boundary data because they also had wells inside or just outside their boundaries. FWS had not finished digitizing refuge maps for the other refuges in the system. NPMS is based on data reported to the Office of Pipeline Safety by pipeline owners. NPMS includes 99 percent of the nation's hazardous liquids (including oil and other petroleum products) pipelines and 61 percent of natural gas pipelines in the United States. NPMS does not include local gathering lines or pumping and storage facilities that supplement these lines. To supplement this information, we included refuges identified in RMIS as having transit pipelines. However, there may be other refuges with pipelines that are not recorded in NPMS or RMIS or for which we did not have digital maps. As part of its review of this report, FWS identified additional refuges that may have oil and gas activities or updated the status of activities at the refuges listed, but did not offer corroborating documentation. While this information may have been more current than the Premier or the Department of Transportation databases, we chose to keep the Premier and Department of Transportation data intact and did not make additional adjustments. We attempted to identify information regarding the overall environmental effects of oil and gas activities on national wildlife refuges. However, because FWS had conducted few studies and did not have information on the overall environmental effects of oil and gas activities on refuges or on how those effects have changed over time, we selected at least one refuge in each of FWS's seven regions for physical inspection.
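The county screening, boundary overlay, and well-classification steps described above amount to a standard geospatial workflow. The following is a minimal illustrative sketch of that kind of analysis, not the method or code actually used by Premier Data Services or GAO; the file names and column names (refuge_boundary.shp, state_wells.csv, was_drilled, barrels_last_period) are hypothetical placeholders.

# Minimal illustrative sketch of the type of overlay analysis described above.
# Not GAO's or Premier Data Services' actual method; input files and columns
# are hypothetical placeholders.
import geopandas as gpd
import pandas as pd
from shapely.geometry import Point

HALF_MILE_METERS = 804.67  # 1/2 mile expressed in meters

# Load the digitized refuge boundary and project it to a meter-based
# coordinate system (CONUS Albers) so that the 1/2-mile buffer is meaningful.
refuge = gpd.read_file("refuge_boundary.shp").to_crs(epsg=5070)

# Load state well-permit records and build point geometries from coordinates.
wells = pd.read_csv("state_wells.csv")
wells = gpd.GeoDataFrame(
    wells,
    geometry=[Point(xy) for xy in zip(wells.longitude, wells.latitude)],
    crs="EPSG:4326",
).to_crs(epsg=5070)

# Drop permitted wells that were never drilled; classify the rest as active
# if they show any production in the most recent reporting period.
wells = wells[wells["was_drilled"]].copy()
wells["status"] = wells["barrels_last_period"].gt(0).map(
    {True: "active", False: "inactive"}
)

boundary = refuge.geometry.unary_union        # refuge boundary polygon(s)
fringe = boundary.buffer(HALF_MILE_METERS)    # boundary plus a 1/2-mile fringe

wells["inside_refuge"] = wells.geometry.within(boundary)
wells["within_half_mile"] = wells.geometry.within(fringe) & ~wells["inside_refuge"]

# Summarize well counts by location and status.
print(wells.groupby(["inside_refuge", "status"]).size())

In practice, the same overlay would be repeated for each digitized refuge boundary, and the resulting counts compared against the refuge-reported RMIS figures.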
In making these selections, we attempted to choose a cross section of refuges considering the type and scale of oil and gas activities, range of environmental effects, and extent and type of management and oversight. In total, we visited 16 refuges containing 1,510 active and 2,695 total oil and gas wells, about 84 percent and 61 percent, respectively, of all oil and gas wells we identified on refuges. For a complete list of refuges we visited, see appendix II. At each refuge visited, we asked the refuge manager to describe the effects of oil and gas activities on the refuge, obtained any available studies of these effects, and visited locations of oil and gas activity selected by the refuge manager to represent a range of effects. In addition, we contacted state regulators and industry and environmental representatives and reviewed state laws, FWS contaminant reports, and scientific and industry and environmental group reports. To identify reclamation and remediation performed at the refuges visited, we reviewed files at each refuge, discussed actions taken with refuge officials, and reviewed information FWS provided from its cleanup and maintenance databases. To identify steps FWS has taken to document the environmental effects on refuge resources, we reviewed Contaminant Assessment Program studies and additional information FWS provided from its contaminants database. We also discussed these efforts with FWS officials. To assess FWS's management and oversight of oil and gas activities in the National Wildlife Refuge System, we obtained information on policy, guidance, and practices from headquarters and the seven regional offices and documented the actual practices in use at the 16 refuges we visited. To determine the authority of FWS to require private mineral owners to obtain permits containing conditions to protect refuge resources from damage and to oversee oil and gas activities, we obtained information from the Department of the Interior's Office of the Solicitor and reviewed the laws and regulations pertaining to FWS and other federal land management agencies and recent court cases concerning private mineral rights on federal lands. We also identified the type and amount of training FWS staff had received and reviewed mechanisms for funding positions to manage and oversee oil and gas activities. In addition, we interviewed officials and obtained documentation on FWS's coordination with, and the involvement of, other federal and state agencies in the oversight of oil and gas activities on refuges. Finally, we reviewed the acquisition policies and practices used by FWS for adding lands to the refuge system, especially those that contain current or historical oil and gas activities.
1. We provided Department of the Interior and U.S. Fish and Wildlife Service officials an opportunity to review a draft of this report. To protect against the possibility of early disclosure of the report, we did not provide the department copies of the draft report to retain, but did give agency officials ample opportunity to review and take notes on the draft. We allowed department and FWS officials to review a draft of the report in Washington, D.C.; Denver; Atlanta; and Portland without restriction as to the time, number of personnel, or note-taking.
2. See our response in the Agency Comments and Our Evaluation section on page 44.
3. See our response in the Agency Comments and Our Evaluation section on page 45.
In addition to the names above, Mary Acosta, Paul Aussendorf, Robert Crystal, Sandra Davis, Jonathan Dent, Doreen Feldman, Chalane Lechuga, John Mingus, Mehrzad Nadji, and Cynthia Norris made key contributions to this report. | The 95-million acre National Wildlife Refuge System contains federal lands devoted to the conservation and management of fish, wildlife, and plant resources. While the federal government owns the surface lands in the system, in many cases private parties own the subsurface mineral rights and have the legal authority to explore for and extract oil and gas. GAO was asked to determine the extent of oil and gas activity on refuges, identify the environmental effects, and assess the Fish and Wildlife Service's management and oversight of oil and gas activities. About one-quarter (155 of 575) of all refuges have past or present oil and gas activity, some dating to at least the 1920s. Activities range from exploration to drilling and production to pipelines transiting refuge lands. One hundred five refuges contain a total of 4,406 oil and gas wells--2,600 inactive wells and 1,806 active wells. The 1,806 wells, located at 36 refuges, many of them around the Gulf Coast, produced oil and gas valued at $880 million during the last 12-month reporting period, roughly 1 percent of domestic production. Thirty-five refuges contain only pipelines. The Fish and Wildlife Service has not assessed the cumulative environmental effects of oil and gas activities on refuges. Available studies, anecdotal information, and GAO's observations show that the environmental effects of oil and gas activities vary from negligible, such as from buried pipelines, to substantial, such as from large oil spills or from large-scale infrastructure. These effects also vary from the temporary to the longer term. Some of the most detrimental effects of oil and gas activities have been reduced through environmental laws and improved practices and technology. Moreover, oil and gas operators have taken steps, in some cases voluntarily, to reverse damages resulting from oil and gas activities.
Federal management and oversight of oil and gas activities varies widely among refuges--some refuges take extensive measures, while others exercise little control or enforcement. GAO found that this variation occurs because of differences in authority to oversee private mineral rights and because refuge managers lack enough guidance, resources, and training to properly manage and oversee oil and gas activities. Greater attention to oil and gas activities by the Fish and Wildlife Service would increase its understanding of associated environmental effects and contribute to more consistent use of practices and technologies that protect refuge resources. |
The mission of USCIS is to adjudicate applications and petitions for immigration benefits, and requests for other action by individuals seeking to become either a citizen of the United States or a lawful permanent resident or to temporarily study, live, or work in this country. To accomplish its mission, each year USCIS processes millions of applications, petitions, and requests for more than 50 types of immigrant and nonimmigrant-related benefits. As we reported in 2007, the processing of millions of applications, petitions, and requests has been hindered by inefficient, paper-based processes, which have resulted in a backlog that peaked in 2004 at more than 3.8 million cases; tens of thousands of missing or misplaced files; difficulties in verifying the identity of applicants; problems in providing other government agencies with the information necessary to identify criminals and potential terrorists; and benefits being issued to unverified applicants. Begun in 2005, the Transformation Program has the goals of modernizing the paper-based immigration benefits process to enhance national security and system integrity and of improving customer service and operational efficiency. The program comprises many systems, each of which provides a service to facilitate operations, such as identity management and risk and fraud analytics. The main component of the program is the USCIS Electronic Immigration System (ELIS), which is to provide case management for the adjudication of immigration benefits and interface with other systems to achieve end-to-end processing of immigration benefits. Once the system has been implemented, USCIS expects that:
Applicants will be able to establish an account with USCIS to file and track the status of the application, petition, or request online.
The system will apply risk-based rules automatically to incoming applications, petitions, and requests to identify potentially fraudulent applications and national security risks.
Adjudicators will have electronic access to applications, petitions, and requests, relevant policies and procedures, and external databases to aid in decision making.
USCIS will have management information to track and allocate workload.
The system will allow electronic linkages to other agencies, such as the Departments of Justice and State, for data sharing and security purposes.
Appendix II lists systems internal and external to USCIS with which USCIS ELIS will interface. Figure 1 provides an overview of how applicants are to file for immigration benefits and apply for citizenship using the new system. Five core operational requirements are expected to form the foundation of USCIS ELIS, which is to process and manage all applications. Table 1 describes the five core operational requirements. Each core operational requirement is broken down into a series of capabilities, features, and sub-features. For example, the Intake and Account Management operational requirement has five capabilities, such as the Establish/Authenticate portal account capability, which is broken down into four features, including Manage Account Access. This feature is further decomposed into five sub-features, including Provide Account Management for Customers and Representatives. Appendix III provides additional information on the planned capabilities for the Transformation Program. USCIS began implementing the Transformation Program by awarding contracts for various phases, including for pilot projects and system architecture.
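The requirement-capability-feature-sub-feature decomposition described above can be pictured as a simple nested structure. The sketch below is illustrative only and is not USCIS's actual data model; it uses the one example named in this report and marks the remaining, unnamed elements with placeholder comments.

# Illustrative sketch only; not USCIS's actual data model. It shows how one
# core operational requirement decomposes into capabilities, features, and
# sub-features, using the single example named in this report.
uscis_elis_requirements = {
    "Intake and Account Management": {                # 1 of 5 core operational requirements
        "Establish/Authenticate portal account": {    # 1 of 5 capabilities for this requirement
            "Manage Account Access": [                # 1 of 4 features for this capability
                "Provide Account Management for Customers and Representatives",
                # ...the other 4 sub-features are not named in this report
            ],
            # ...the other 3 features are not named in this report
        },
        # ...the other 4 capabilities are not named in this report
    },
    # ...the other 4 core operational requirements
}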
In fiscal year 2006, USCIS initiated three pilot projects and a proof-of-concept effort.
Enumeration pilot: The pilot was a joint effort by USCIS and the then-United States Visitor and Immigrant Status Indicator Technology (US-VISIT) program to positively identify individuals through the generation of a unique identifier permanently associated with a person. The enumerator is created by the submission of fingerprint and biographic data. Each time the person returns to USCIS for a subsequent benefit application, biometric information is to be matched to determine if the individual is the same person. This process is intended to limit the number of times fingerprints are submitted for background checks.
Biometric Storage System pilot: The Biometric Storage System was to help streamline the established USCIS biometric and card production processes and become the centralized repository for all USCIS customer biometrics. The system receives the 10-print fingerprints taken at application support centers along with related biographic data, and submits them to the Federal Bureau of Investigation for a fingerprint check and then to the US-VISIT Automated Biometric Identification System for creation of the enumerator and for permanent storage. The 10-print fingerprints and photos are then associated with benefit cards that serve as travel documents. The system is to store the images only as long as needed to facilitate adjudication and production of benefits cards.
Digitization pilot: The Digitization pilot comprised the Enterprise Document Management System, which stores electronic documents along with metadata describing the documents, and the Records Digitization Facility, which scans documents into electronic format, creates the metadata, and transfers the document to the document management system. The pilot tested the use of images by adjudicators in their day-to-day work environment.
The proof-of-concept was intended to demonstrate the case processing capability of the case management system. Specifically, it was to demonstrate that the enumerator (unique identifier based on biometrics) could link an applicant to related filings for that individual and assess whether the case management system could be used to view digitized files. According to the Transformation Program Management Plan, a post-implementation review completed in October 2007 determined that the proof-of-concept met all of its primary objectives and that the Transformation Program should continue with this approach. In November 2008, USCIS awarded a solutions architect contract for approximately $500 million to be allocated over a 5-year period to design, develop, test, deploy, and sustain the Transformation Program by November 2013. The overall strategy was to deliver the solution in two increments: Increment 1, to include releases A and B, and Increment 2, to include releases C, D, and E. Releases A and B included making USCIS ELIS available to applicants applying for nonimmigrant benefits and much of the functionality needed to operate the five core operational requirements. Table 2 shows the order in which USCIS ELIS's five releases were to be deployed and the types of benefits to be made available to applicants in each release. In July 2011, DHS officially approved the Transformation Program's acquisition program baseline and supporting operational requirements.
The baseline included approved objectives (targets that reflect the most likely cost and schedule) and approved thresholds (ceilings which, if exceeded, initiate official re-planning actions). Table 3 details the cost and schedule parameters of the program’s approved July 2011 baseline. On May 22, 2012, USCIS launched the first release of USCIS ELIS (release A1). This release included capabilities associated with all of the core operational requirements, such as online account setup, case management, case acceptance, applicant evidence intake, and notice generation. Since May 2012, five primary releases, along with a series of maintenance releases, have been deployed to add functionality to USCIS ELIS, with new applicant processing added in two of the five releases. Table 4 shows the date of each functional release and a description of the capabilities in the release. USCIS ELIS is to be managed consistent with DHS’s acquisition management process. DHS’s Acquisition Management Directive 102-01 and its Instruction Manual 102-01-001 (the guidebook) establish the department’s policies and processes for managing major acquisition programs. Acquisition Management Directive 102-01 provides high-level direction to program managers to help determine funding needs, capability requirements, and schedule, and the guidebook provides specific direction to program managers on how to implement the directive, such as how to address a program breach. DHS’s Deputy Secretary and Under Secretary for Management serve as the acquisition decision authorities for the department’s largest acquisition programs, those with life-cycle cost estimates of $1 billion or more. The Under Secretary for Management also serves as DHS’s Chief Acquisition Officer, and in this role is responsible for the management and oversight of the department’s acquisition policies and procedures. To assist in the acquisition oversight process, the Under Secretary for Management is supported by two offices within the department. The Office of Program Accountability and Risk Management (PARM) is responsible for DHS’s overall acquisition governance process. The Office of the Chief Information Officer (OCIO) is responsible for, among other things, setting departmental information technology (IT) policies, processes, and standards. It is also responsible for ensuring that IT acquisitions comply with DHS IT management processes, technical requirements, and the approved enterprise architecture. Within the OCIO, the Enterprise Business Management Office is to ensure that the department’s IT investments align with its missions and objectives. As part of its responsibilities, this office periodically assesses IT investments to gauge how well they are performing through a review of program risk, human capital, cost and schedule, and requirements. These assessments serve as the criteria for reporting to the IT Dashboard for oversight by the Office of Management and Budget (OMB). In March 2015, we reported that DHS acquisition policy does not define the differences in the role of PARM and the OCIO’s Enterprise Business Management Office in oversight of major IT acquisitions. In particular, the functions of PARM and the Enterprise Business Management Office may overlap. Further, we reported that programs report to PARM and the Enterprise Business Management Office through two separate information systems, which further complicate the distinction. 
In order to ensure consistent, effective oversight of DHS's acquisition programs, we recommended that the Secretary of DHS direct the Under Secretary for Management to develop written guidance to clarify the roles and responsibilities of PARM and the OCIO Enterprise Business Management Office for conducting oversight of major acquisition programs. DHS concurred with our recommendations. DHS Acquisition Management Directive 102-01 establishes that a major acquisition program's decision authority is responsible for reviewing and approving the movement of the program through four phases of the acquisition life cycle at a series of five acquisition decision events. These acquisition decision events, which can be more than 1 year apart, provide the acquisition decision authority an opportunity to assess whether a major program is ready to proceed through the life cycle phases. Following are the four phases of the acquisition life cycle, as established in DHS acquisition policy:
1. Need: Department officials identify that there is a need, consistent with DHS's strategic plan, justifying an investment in a new capability and the establishment of an acquisition program to produce that capability. This phase concludes with the acquisition decision authority granting the acquisition program approval to proceed at acquisition decision event 1.
2. Analyze/Select: A designated program manager reviews alternative approaches to meeting the need and recommends a best option to the acquisition decision authority. This phase concludes with the acquisition decision authority granting the acquisition program approval to proceed at acquisition decision event 2A.
3. Obtain: The program manager develops, tests, and evaluates the selected option. The acquisition decision authority may review the acquisition program multiple times before granting the acquisition program approval to proceed with particular acquisition activities. Acquisition decision event 2B focuses on the cost, schedule, and performance parameters for each of the program's projects. The program may also proceed through acquisition decision event 2C, which focuses on low-rate initial production. The phase concludes with the acquisition decision authority granting the program approval to proceed at acquisition decision event 3.
4. Produce/Deploy/Support: DHS delivers the new capability to its operators and maintains the capability until it is retired. This phase includes sustainment, which begins when a capability has been fielded for operational use; sustainment involves the supportability of fielded systems through disposal, including maintenance.
Figure 2 depicts the acquisition life cycle established in the directive. Two important aspects of the acquisition decision events are the review and approval of key acquisition documents critical to establishing the need for a major program, its operational requirements, an acquisition baseline, and testing and support plans. Table 5 describes the key acquisition documents requiring department-level approval before a program can move to the next acquisition phase. Figure 3 depicts the acquisition life cycle and associated program documentation requirements. We have previously reported on the management and development of the USCIS Transformation Program. In July 2007, we evaluated the Transformation Program strategic and expenditure plans to determine the extent to which these plans had prepared USCIS to carry out its program.
We reported that the agency’s plans partially or fully addressed most key practices but more attention was needed in certain areas such as performance measurement and IT management. We also reported that the plans provided some information on costs and revenues, but that USCIS had not finalized its acquisition strategy and, therefore, cost estimates were uncertain. To improve its transformation strategy and fully address congressionally requested information, we recommended that the Director of USCIS address gaps in plans in the areas of performance measurement, strategic human capital management, communications, and IT management practices. DHS concurred with our recommendations. Between September 2009 and September 2011, USCIS took steps to address the gaps identified in our report, such as finalizing a balanced set of four performance measures and establishing fiscal year 2012 targets that aligned with transformation goals for customer satisfaction, decisional accuracy, timeliness, and efficiency. In November 2011, we assessed the extent to which USCIS had followed DHS acquisition policy in developing and managing the Transformation Program. We reported that the agency had not consistently followed the acquisition management approach that DHS outlined in management directives in developing and managing the program. For example, USCIS did not complete several acquisition planning documents required by DHS policy prior to moving forward with an acquisition approach, which contributed to schedule delays and increased program costs. To help ensure that USCIS used a comprehensive and cost-effective approach to the development and deployment of transformation efforts to meet the agency’s goals of improved adjudications and customer services processes, we recommended that the Director of USCIS develop and maintain an integrated master schedule consistent with best practices for the Transformation Program and ensure that the life-cycle cost estimate be informed by milestones and associated tasks from reliable schedules. The agency concurred with our recommendations and has begun work to address them. The Transformation Program has changed its system acquisition strategy, which has contributed to significant delays in the program’s planned schedule. As of March 2015, USCIS ELIS functionality deployed in the program’s initial releases is still in operation. Moving forward, USCIS estimates that its Transformation Program will now cost up to $3.1 billion and be fully deployed no later than March 2019. This is an increase of approximately $1 billion and a delay of over 4 years from its initial approved baseline. In addition, several major changes were made to the Transformation Program acquisition strategy to help address concerns about delays and cost overruns, including changes to the software development methodology, contracting approach, and program architecture. However, the plans for this new approach have yet to be formally approved. Nevertheless, despite the lack of an updated and approved program baseline, USCIS has begun system acquisition by awarding contracts for planning and development. Moreover, the delay in the program’s planned schedule has in turn impacted USCIS’s ability to achieve cost savings, operational efficiencies, and other benefits. As of March 2015, the program had work underway to maintain existing operations as well as to transition to a new system architecture. 
More specifically, USCIS ELIS capabilities developed and implemented in releases A1 through A2.5 were still in operation. To maintain this functionality, USCIS extended its initial Solution Architect contract, originally scheduled to end in 2014, through May 2015. This functionality is expected to be replaced under a new architecture (as discussed later in this report). The program also had transition activities underway to test this new architectural environment. In particular, USCIS conducted a limited deployment for replacing and renewing permanent resident cards in November 2014 and fully deployed an initial set of capabilities in March 2015. Following this deployment, USCIS expects to begin decommissioning the existing capabilities. With regard to program baseline estimates, the Transformation Program is expected to cost no more than $3.1 billion and to be deployed no later than March 2019. This is an increase of approximately $1 billion and a delay of over 4 years from the initial baseline approved in July 2011. According to program officials, the Transformation Program is expected to deliver the same system capabilities as initially intended; however, the plan for deploying such capabilities has been revised. Table 6 compares the Transformation Program's current draft baseline to its initial baseline plans. In addition to the changes made to the program baseline estimates, several major changes were made to the Transformation Program acquisition strategy. These changes are summarized in table 7 and discussed in more detail below.
Software development methodology: The Transformation Program transitioned from a traditional waterfall approach to an Agile one. Agile software development is consistent with industry best practices and existing incremental development requirements. Further, our work has shown that, if performed effectively, this approach can provide more flexibility to respond to changing agency priorities, and allow for easier incorporation of emerging technologies and termination of poorly performing investments with fewer sunk costs. Under the previous approach, USCIS was to deploy USCIS ELIS in five major releases. The non-immigrant line of business was to be delivered across two releases, A and B. The three remaining lines of business—immigrant, humanitarian, and citizenship—were to be delivered through releases C, D, and E, respectively. Under the Agile approach, the program plans to deliver smaller sets of functionality every 4 weeks. These shorter-term releases are intended to culminate in a more complete release of a USCIS business line, generally within no more than 6 months. The program estimates it will move through 16 releases prior to achieving full operating capability.
Contracting approach: In October 2013, the DHS Deputy Chief Procurement Officer approved a revision to the program's acquisition plan. This updated acquisition plan reflected a transition from a primary contract for a single solution architect to a series of contracts for tasks that had previously been provided by the solution architect (e.g., requirements development and testing). The Office of Transformation Coordination, with support from the Office of Information Technology, has assumed responsibility for integrating and overseeing the work performed under at least nine different contracts.
For example, training of USCIS personnel who will interact with USCIS ELIS is provided through one contract, while integration of software code from multiple development teams will be handled through a different contract. In addition, software development will be handled by multiple contractors, each operating on 6-month contracts that can be renewed depending on the performance of the contractor. According to USCIS officials and briefings to senior leadership, this new acquisition plan is to allow the Transformation Program to transition away from an approach that had delivered deficient software code and an overly complex solution. However, the new acquisition approach also introduces risk to the program. According to documentation supporting the Transformation Program acquisition plan, the primary risk of this approach is that the contract incentive structure may not lead to cooperative behavior on the part of the various contractors. Our prior work on approaches of this nature has also shown that as the number of contracts and contractual relationships increases, so does program complexity. Further, the schedule and performance risks that arise from higher program complexity typically result in greater costs.
Program architecture: In March 2013, the Transformation Program was granted approval by its executive steering committee to move to a new, more modern solution architecture. Design and development of this new architecture began in September 2012 and was intended to simplify the existing USCIS ELIS architecture by using fewer commercial products and including open source software; support enterprise solutions that could be reused across USCIS rather than limiting their use to USCIS ELIS; and allow for easier scalability to accommodate surges in benefits processing by using cloud services. According to the USCIS Chief Information Officer, none of the hardware or software from the existing architecture will be used as part of the new architecture, with the exception of some licenses associated with risk and fraud services. Accordingly, the program will rebuild functionality previously developed in releases A1 through A2.5, such as support for the I-526 form. The program plans to build this functionality in releases 5.1, 5.2, and 5.3. Appendix III provides additional information on the capabilities (releases) of USCIS ELIS as well as when they are to be made available. To transition to and test the efficacy of the new architecture, USCIS planned a limited and full deployment for the replacement and renewal of permanent resident cards (form I-90) for the first release of system functionality. The Transformation Program conducted the limited deployment in November 2014. In May 2015, the Director of Operational Test and Evaluation signed off on an operational assessment of the limited deployment. According to the USCIS Chief Information Officer, the three key changes made to the acquisition strategy were made in conjunction with a March 2012 TechStat review of this program that OMB facilitated with DHS. As part of the root-cause analysis to inform the TechStat review, the USCIS Chief Information Officer noted that delays and cost overruns were partly the result of the solution architect delivering deficient software code and performing at an unacceptably low rate of productivity.
The Transformation Program’s current baseline estimates and plans have yet to be formally approved by the DHS Under Secretary for Management (the acquisition decision authority for this program); as a result, the program has not been in compliance with DHS acquisition policies and procedures since 2013. According to these policies and procedures, any breach of the approved acquisition program baseline (cost, schedule, and performance parameters) requires that, within 90 days, a new baseline be approved or a program review be conducted to review the proposed baseline revisions and recommendations to the acquisition decision authority. For the Transformation Program, neither of these actions has taken place since 2013—when the program breached its approved acquisition program baseline (from July 2011). According to the component lead in PARM, a new baseline was not approved because supporting acquisition planning documentation was not yet mature.

Despite the lack of an updated and approved program baseline, USCIS has begun to implement its current system acquisition plans. For example, in October 2013 and February 2014, the Acting Under Secretary for Management allowed USCIS to award a series of contracts to develop the pilot for the new architecture and re-build existing functionality, as well as to proceed with planning and development of future releases. According to the PARM component lead for USCIS, development of the USCIS ELIS system is limited under the approved contracts. However, according to USCIS planning documents, the releases currently being developed make up at least 66 percent of the total capabilities of USCIS ELIS.

According to program officials, an acquisition decision event is expected to occur at the beginning of April 2015, which should bring the Transformation Program back into compliance with DHS policy. Specifically, the acquisition decision (referred to as 2B in the DHS acquisition life cycle) is intended to obtain the DHS Under Secretary for Management’s approval of a new acquisition program baseline. However, as discussed later in this report, this acquisition decision event has already been rescheduled twice by the department—initially set to occur in September 2013 and then in September 2014. Figure 4 provides a timeline of past and current program events. Until the Transformation Program has a revised program baseline (reflecting the significant changes in strategy) that has been approved at the departmental level, program accountability for cost, schedule, and operational commitments will continue to be limited.

The changes in the Transformation Program’s acquisition strategy have significantly impacted the program’s planned schedule targets, which in turn have had negative effects on USCIS’s ability to achieve cost savings, operational efficiencies, and other benefits. In justifying a transition to the new architecture, the program projected an accelerated completion date of August 2016. Since then, draft acquisition documents have pushed the completion date out to no later than March 2019. According to program officials, the delay in getting to full operational capability is due in part to unexpected or greater-than-expected challenges in implementing the software methodology, contracting, and architecture changes. For example:

The development and test environments took longer than expected due to the complexity of standing up the environments for use.
A bid protest of the flexible Agile development services contract required the program to adjust its schedule, extend the solution architect contract, and contract with another team to continue development work until the bid protest was resolved and development work could be initiated under that contract. According to the Chief of the Office of Transformation Coordination, the temporary development teams that were used while the bid protest was ongoing performed more slowly than projected for permanent development teams.

The program determined that it would be unable to re-use existing software and hardware purchased by the solution architect and in use by USCIS. As a result, work performed under the solution architect contract would need to be redeveloped as part of future releases. This additional work was not initially planned and was therefore expected to require additional effort.

The schedule delays in system delivery have in turn hampered USCIS’s ability to realize cost savings associated with the Transformation Program. For example, the business case for the program highlighted cost savings that would be realized from decommissioning of legacy systems upon full deployment of USCIS ELIS. Each of these legacy systems must remain operational to allow USCIS to perform its mission until an alternative option is available. In fiscal year 2014, the total cost of maintaining systems that could have been decommissioned if USCIS ELIS had been fully operational was approximately $71 million. In addition, the business case for the Transformation Program also identified cost savings from reducing data entry and mail handling costs. USCIS will continue to incur such costs while the program awaits full implementation.

In addition, the schedule delays have deferred USCIS’s ability to realize operational improvements tied to the program. For example, the Transformation Program is expected to implement organizational and business process changes to better use IT. According to USCIS, this increased use of IT should help achieve goals such as reducing the immigration benefit backlog through business process change; improving customer service through expanded electronic filing; and enhancing national security by authenticating users and integrating with external agency databases. Due to delays in the program, these improvements have yet to be achieved. Also, we have previously found weaknesses with existing processes and systems by which USCIS processes benefits. The agency has cited the Transformation Program as one effort that will help to address these concerns. For example, in our December 2014 report, we found that data for the K nonimmigrant visa process (form I-129F) either were not reliable or were not collected or maintained in a reportable format. However, according to USCIS Service Center Operations officials, the agency will be able to collect and maintain more complete data through the deployment of the electronic I-129F petition in USCIS ELIS. Thus, until the new system is deployed, weaknesses in these existing processes may continue.

DHS acquisition policy identifies a process by which governance bodies should evaluate cost, schedule, and performance to make decisions about individual programs. Based on this process, multiple bodies govern the program, requiring corrective actions when needed. In addition, the department’s OCIO and PARM offices are tasked with conducting evaluations of individual programs, which inform congressional and OMB oversight.
However, a lack of reliable information against which to monitor the Transformation Program has inhibited the ability of these governing and oversight bodies to make informed decisions.

The department’s acquisition policy requires that the DHS Acquisition Review Board support the Under Secretary for Management by reviewing major acquisition programs for proper management, oversight, accountability, and alignment to the department’s strategic functions at key decision points and other meetings, as needed. DHS acquisition policy establishes that the review board be chaired by the acquisition decision authority and consist of individuals who manage DHS’s mission objectives, resources, and contracts. Table 8 identifies the members of the Transformation Program’s Acquisition Review Board.

In May 2012, the Under Secretary for Management chartered the Executive Steering Committee for the Transformation Program to help improve program governance. In contrast to the Acquisition Review Board, this committee assumed authority to oversee all aspects of the execution of the Transformation Program between key decision points. The Executive Steering Committee includes voting and non-voting members from DHS and USCIS. Table 9 provides a list of the committee’s voting and non-voting members.

Leading practices that we and others have identified note that oversight is a critical element of an investment’s life cycle. To be effective, governance bodies should, among other things:

Monitor a project’s performance and progress toward predefined cost and schedule expectations: Governance bodies should measure the actual value of project planning parameters, compare actual values to estimates in the plan, and identify significant deviations.

Ensure that corrective actions are identified and assigned to the appropriate parties at the first sign of cost, schedule, and/or performance problems: Governance bodies should collect and analyze issues based on predefined expectations and determine corrective actions to address them.

Ensure that these corrective actions are tracked until the desired outcomes are achieved: Governance bodies should track the implementation of all corrective actions until the desired outcomes occur.

Rely on complete and accurate data to review the performance of IT projects and systems against stated expectations: Governance bodies should integrate measurement and analysis activities into the processes of the project, including tracking actual progress and performance against established plans and objectives. Analysis should account for the quality (e.g., age, reliability) of all data that are used for the analysis, regardless of the source of the data. The quality of data should be considered to help select the appropriate analysis procedure and evaluate the results of the analysis.

Table 10 shows the extent to which each of the governance bodies met the leading practices for performing oversight. As shown, the two governance bodies have implemented three of the leading practices:

Acquisition Review Board: The board implemented the leading practice associated with ensuring corrective actions are identified. For example, during a program review on February 16, 2012, the board discussed a request to provide the program with an additional $21.5 million to cover unplanned costs resulting from delays in the program schedule.
Based on this discussion, the review board identified a series of action items, including a request for USCIS to return for an additional review after delivery of the first release to request approval to commit additional funding for development and delivery of the next release. The Acquisition Decision Authority committed additional funding in May 2012.

Executive Steering Committee: This body implemented two of the leading practices. In particular, the committee fully ensured that corrective actions were identified, and that corrective actions were tracked until the desired outcomes were achieved. As part of its reviews, the committee assigned corrective actions to specific individuals and entities and tracked them until they had been addressed. For example, in July 2013, the Executive Steering Committee tasked the Office of Transformation Coordination with forming a working group to propose potential options for sequencing of forms and associated business processes to be included in releases 10.0 through 16.0. This corrective action was tracked and subsequently completed in September 2013.

The governance bodies partially implemented three leading practices:

Acquisition Review Board: The board partially addressed two practices associated with monitoring cost and schedule against predefined parameters and following corrective actions through to completion. For example, with respect to monitoring against predefined parameters, in February 2012, the review board evaluated the planned costs of release A against the approved baseline amount to determine if a request for additional funding was justified. However, in February 2014, the program was approved to award a series of contracts without an approved acquisition program baseline in place. With respect to corrective actions, in a July 2013 acquisition decision memorandum, the Transformation Program was directed to take the corrective action of returning to the Acquisition Review Board for a re-baseline decision by September 2013 in order to establish new cost and schedule parameters against which to evaluate the program. A February 2014 acquisition decision memorandum again directed the program to return for a re-baseline decision by September 2014. However, as of February 2015, a re-baseline decision had not occurred. As another example, in a May 2012 acquisition decision memorandum, the Transformation Program was directed to submit a Systems Engineering Life Cycle Tailoring Plan by June 2012 that was to document the Agile approach. According to the component lead in PARM, this document was not submitted for approval.

Executive Steering Committee: The committee partially addressed one practice associated with monitoring cost and schedule against predefined parameters. For example, in April 2013, the committee reviewed and approved the funding of approximately $31 million for IT to support the Transformation Program. However, this evaluation did not include an assessment of these costs against an approved baseline for IT funding. Officials from the Office of Transformation Coordination explained that the approval of funds was based on a DHS-approved life-cycle cost estimate from April 2013. However, this cost estimate was not supported by an approved schedule, current requirements, and the other documentation required for a complete understanding of the program, which would be provided by an approved baseline.
Finally, the program’s two governance bodies did not implement two leading practices:

Acquisition Review Board: The board did not make decisions based on reliable cost, schedule, and performance information. For example, in October 2013, the Acquisition Decision Authority approved a new contract for continued development of the new architecture. In addition, in February 2014, the authority approved a bridge contract for the solution architect to, among other things, assist in developing the new architecture. However, operational requirements along with an integrated master schedule and cost estimates had not been approved to support these decisions. This is consistent with our previous findings. For example, in November 2011, we found that USCIS managed the program without specific acquisition management controls, such as reliable schedules. As a result, the review board did not have reasonable assurance that the program could meet its future milestones or that it was developing a system that would achieve its performance goals. A summary of the corresponding recommendations we made and their status is discussed earlier in this report.

Executive Steering Committee: The committee did not make decisions based on reliable cost, schedule, and performance information. For example, in March 2013, the committee voted unanimously to migrate to a new architecture for the Transformation Program. This approval was based, in part, on the cost analysis reported by USCIS. However, this analysis included cost savings that did not fully account for the added costs for merging and migrating data from the old architecture. In addition to excluding costs for data migration, the cost analysis also projected significant cost savings to occur based on an accelerated date for full operating capability. Specifically, that analysis was based on an August 2016 date, but the full operating capability date in the draft baseline is no later than March 2019. Officials from the Office of Transformation Coordination added that the Executive Steering Committee decision was also based on how the simplification of the architecture would reduce risks to performance of the system. Nevertheless, the decision was not supported by reliable cost and schedule information. Further, the committee approved the capabilities to be developed and delivered in each release. However, the program did not have approved documentation tying approved operational requirements to the capabilities and features on which such decisions could be made. Additionally, as with the Acquisition Review Board, the committee relied on cost and schedule information that had not been approved.

Until these governance bodies can base their reviews of performance on timely, complete, and accurate data, they will be limited in their ability to make timely decisions and to provide effective oversight.

In addition to governance from the decision-making bodies previously discussed, two DHS offices—PARM and OCIO—assist in the oversight of the Transformation Program. In particular, these offices perform periodic reviews of the Transformation Program to assess general program health and risk. These assessments are shared with external stakeholders that have oversight responsibilities, including OMB and Congress, and, in the case of the OCIO, are reported publicly on the Federal IT Dashboard. However, these assessments of the Transformation Program reflected unreliable and, in some cases, inaccurate information.
PARM develops periodic reports to ensure that DHS programs and components satisfy compliance-related mandates and to improve investment management. To support these efforts, PARM primarily draws on information reported to DHS’s official system of record for acquisition program reporting. PARM’s assessments are also informed by its participation in oversight meetings. For the Transformation Program, the PARM component lead for USCIS attended a majority of the Executive Steering Committee meetings from 2012 through 2014, among other things.

PARM reported assessments of the Transformation Program in its annual Comprehensive Acquisition Status Reports for calendar year 2011 and fiscal years 2013 and 2014, but these assessments were not always based on reliable information. In its most recent status report, dated June 2014, PARM reported that the program was on target to meet its cost and schedule performance goals. The key events for the next 12 months included software development for releases 7 and 8, with completion dates planned for December 17, 2014, and March 31, 2015, respectively. However, as of June 2014, completion dates had been revised, with only release 5.0 scheduled for deployment in early 2015. Release 7 was re-scheduled for December 2015, and release 8 for April 2016. According to the USCIS PARM component lead, the report pulled data from March 2014 and the program had not yet updated its acquisition program reporting system with revised deployment dates. Further, as previously discussed, revised cost and schedule baselines were not established for the program after 2011, so any targets reported in the status report reflected unapproved parameters reported by the program.

We reported in March 2015 on data issues and reporting limitations affecting the accuracy of information contained in DHS’s acquisition program reporting system and the reliability of acquisition status reports departmentwide. To help address these concerns, we recommended that DHS determine a mechanism to hold programs accountable for entering data in its reporting system consistently and accurately, and to adjust reporting requirements to include, among other things, reasons for any significant changes in a program’s acquisition cost, quantity, or schedule from the previous report. DHS concurred with the recommendations; if they are appropriately implemented, the reliability and utility of these reports for decision-making purposes should improve.

OCIO’s Enterprise Business Management Office also assists in overseeing the Transformation Program. In particular, it performs periodic reviews that serve as the basis for program ratings that are publicly reported on the Federal IT Dashboard. To support its assessments, OCIO primarily draws on information reported to the Investment Management System—DHS’s official system of record for reporting business cases to the Office of Management and Budget. For the Transformation Program, Enterprise Business Management Office representatives also participate in the Executive Steering Committee and Program Management Review meetings.

The Enterprise Business Management Office performed four assessments of the Transformation Program from June 2013 through June 2014, but these assessments were based on unreliable or conflicting data. During that period, the program was evaluated as either a moderately high or medium risk IT program.
The most recent assessment, from June 2014, rated the program as a category 3, medium risk investment, which is an improvement from the February 2014 rating of category 2, moderately high risk. The assessment states that the program underwent a re-baseline for release 5.0 and, as a result, reported an acceptable schedule variance and positive cost performance. Further, it stated that an operational analysis of USCIS ELIS was completed in August 2013. However, as discussed previously in this report, the program has experienced a delay of over 4 years in its schedule and has not performed a re-baseline to bring it back within cost and schedule thresholds. In addition, the operational analysis performed in August 2013 pertained to the old USCIS ELIS architecture, which is scheduled for decommissioning, and did not reflect the new USCIS ELIS architecture. Moreover, the Transformation Program has not yet been able to successfully test against any of its key performance parameters. Until the OCIO bases its review of performance on timely, complete, and accurate data, its ability to effectively provide oversight and to make timely decisions may be limited. In addition, it risks reporting information on the Federal IT Dashboard that is inaccurate or otherwise misleading, thus limiting OMB and congressional oversight.

The Transformation Program presents a significant opportunity for USCIS to address long-standing challenges by reengineering the manner in which it conducts business. However, the program has experienced delays of over 4 years and a projected cost overrun of approximately $1 billion since it first established its initial baseline. Each delay means that more time will pass until USCIS is fully prepared to provide better service to those applying for immigration and citizenship benefits and until USCIS realizes the many benefits that the program is intended to provide, such as improved customer service through expanded electronic filing and enhanced national security by authenticating users and integrating with external agency databases.

Changes made to the program’s acquisition strategy were intended to help mitigate past technical and programmatic challenges; however, the current plans have yet to be approved in accordance with departmental policy. Among other things, the Transformation Program is proceeding with a substantial amount of system development work without a current and approved acquisition program baseline. As we have previously concluded, developing a system without first establishing parameters around which the system will be developed is risky and can result in additional cost overruns and schedule delays. Moreover, program accountability will continue to be limited in the absence of such parameters.

While Transformation’s various oversight bodies are active in their respective roles, decisions about the program have been made with incomplete and inaccurate data. In addition, key program health assessment reports shared with OMB and Congress are unreliable. The ability of USCIS, DHS, Congress, and OMB to effectively monitor program performance and make informed program decisions will continue to be limited until department-level governance and oversight bodies more effectively use reliable program information to inform their program evaluations.
To more fully understand the impact of recent changes to the Transformation Program and help ensure improved Transformation Program governance and oversight, we are making four recommendations to the Secretary of DHS, to be carried out by components within DHS.

To help ensure that progress made by the Transformation Program can be monitored against established and approved parameters, we recommend that the Secretary of DHS direct the department’s Under Secretary for Management to re-baseline cost, schedule, and performance expectations for the remainder of the Transformation Program.

To improve Transformation Program governance, we recommend that the Secretary of DHS direct the Under Secretary for Management to ensure that the Acquisition Review Board is effectively monitoring the Transformation Program’s performance and progress toward a predefined cost and schedule; ensuring that corrective actions are tracked until the desired outcomes are achieved; and relying on complete and accurate program data to review the performance of the Transformation Program against stated expectations.

To improve Transformation Program governance, we further recommend that the Secretary of DHS direct the DHS Under Secretary for Management, in coordination with the Director of U.S. Citizenship and Immigration Services, to ensure that the Executive Steering Committee is effectively monitoring the Transformation Program’s performance and progress toward a predefined cost and schedule and relying on complete and accurate program data to review the performance of the Transformation Program against stated expectations.

To help ensure that assessments prepared by OCIO in support of the department’s updates to the Federal IT Dashboard more fully reflect the current status of the Transformation Program, we recommend that the Secretary of DHS direct the department’s Chief Information Officer to use accurate and reliable information, such as operational assessments of the new architecture and cost and schedule parameters approved by the Under Secretary for Management.

We provided a draft of this product to DHS for comment. In its written comments, reproduced in appendix IV, DHS disagreed with our conclusion about the impact of the changes made to the Transformation acquisition strategy on the program’s cost and schedule, but concurred with all four of our recommendations. DHS fully addressed one of our recommendations and provided plans of action for the remaining three. DHS also provided technical comments that we have incorporated into the report as appropriate.

Regarding our conclusion, the department disagreed that changes to the acquisition strategy delayed the program and added $1 billion to the overall cost, citing changes in the time period covered by each program cost estimate. Further, DHS did not believe we adequately considered the potential benefits and lower risks that resulted from revising the acquisition strategy for the program, stating that the original approach would have far exceeded initial cost estimates and schedule. Our report did not assess how much the program would have cost or when it would have delivered a complete solution had USCIS continued to pursue its original approach. Nevertheless, we maintain, as stated earlier in this report, that the acquisition program baseline approved in May 2015 reflects delays of nearly 4 years and approximately $1 billion in additional cost when compared to the program’s July 2011 program baseline.
Our report also documents that cost increases and delays in achieving full operational capability were due, in part, to unexpected or greater-than-expected challenges in implementing the program’s new approach. In response to the department’s comment about the time period covered by the new cost estimate, we added detail to the report to reflect that the newly approved program baseline covers an additional 11 years beyond what was addressed in the initial program baseline. Even taking this into account, these additional years account for only about $536 million of the increased costs.

Concerning the statement that we did not adequately consider the potential benefits and lower risks of revising the acquisition strategy, we disagree and stand by our analysis and findings. With respect to potential benefits, we reported that some of the changes have the potential to result in improvements. For example, our work has shown that, if performed effectively, an Agile approach to software development can provide more flexibility to respond to changing agency priorities and allow for easier incorporation of emerging technologies. However, other changes may introduce additional program risk. For example, our prior work has shown that program complexity increases as the number of contracts and contractual relationships increases. Further, the schedule and performance risks that arise from higher program complexity can result in greater program costs.

Regarding our recommendation to re-baseline cost, schedule, and performance expectations for the remainder of the Transformation Program, DHS provided evidence that it has been fully implemented. Specifically, DHS provided an approved acquisition program baseline and supporting documents. These documents demonstrate that the department has approved a re-baseline of cost, schedule, and performance expectations for the remainder of the Transformation Program.

DHS concurred with our recommendation that the Acquisition Review Board improve its governance of the program and described specific actions taken that it believes will fully address it. For example, DHS cited a signed memorandum of agreement outlining specific metrics related to cost, schedule, and technical performance. It also cited procedures to ensure that acquisition decision memorandum actions are tracked until desired outcomes are achieved, as well as the approval of all required acquisition documentation prior to a re-baseline. If implemented effectively, these actions have the potential to fully address our recommendation.

DHS concurred with our recommendation on improving the governance of the Transformation Program’s Executive Steering Committee. The department also described planned actions to address the recommendation, including ensuring that cost and schedule data are presented and evaluated against the acquisition program baseline. The department estimated that it will be able to demonstrate successful resolution of this recommendation by December 31, 2015. If implemented effectively, these actions have the potential to fully address our recommendation.

DHS also concurred with our recommendation to ensure that the department’s Chief Information Officer updates the Federal IT Dashboard using accurate and reliable information about the program. The department described actions it has taken and plans to take to address this recommendation.
For example, the DHS Chief Information Officer has established an Integrated Product Team with PARM to identify gaps in its assessment processes and establish better coordination to ensure the timely availability of updated Acquisition Program Board information. In addition, the department stated it would take steps to improve oversight and data quality by consolidating two automated tools into a single enterprise information management and repository system. The department estimated that it will be able to demonstrate successful resolution of this recommendation by December 31, 2015. If implemented effectively, these actions have the potential to fully address our recommendation.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to interested congressional committees and the Secretary of Homeland Security. This report will also be available at no charge on our website at http://www.gao.gov. If you or your staffs have any questions on matters discussed in this report, please contact me at (202) 512-4456 or chac@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V.

Our objectives were to (1) determine the status of the Transformation Program, including the impact of changes made to the acquisition strategy, and (2) assess the extent to which the Department of Homeland Security (DHS) and the U.S. Citizenship and Immigration Services (USCIS) are executing effective program oversight and governance.

To determine the status of the Transformation Program and the impact of changes made to the acquisition strategy, we reviewed and analyzed recent program planning documentation and compared it to past documentation and previously reported issues. Specifically, we reviewed recent documentation such as a draft acquisition program baseline, test and evaluation master plan and operational requirements, the acquisition plan, annual budget figures, budget justification reviews, the annual operating plan, capabilities and constraints documents, contracts and associated statements of work, life-cycle cost estimate, privacy impact assessments, system design documents, team process agreement, and the Transformation road map. To identify differences in past and current program expectations, we reviewed past program documentation, such as the acquisition program baseline, acquisition plan, business case analysis for process alternatives, concept of operations, cost estimating baseline document, exhibit 300 submissions to the Office of Management and Budget, life-cycle cost estimate, mission needs statement, operational assessments and the associated operational requirements document, program management plan, and solution architect contract and statement of work. For example, we compared the operational requirements document approved in 2011 to a draft operational requirements document from 2014 to determine changes in operating requirements and measures used in operational testing. We also compared the cost savings and benefits in the business case analysis for process alternatives to release-specific capabilities and constraints documents along with other planning documentation to determine if recent changes to the program would impact cost savings and benefits intended for the program.
In addition, we reviewed information from DHS and USCIS oversight entities, such as meeting minutes and slide decks from the Executive Steering Committee, to further understand program changes. We also reviewed related GAO and DHS Inspector General reports to capture previously identified issues encountered by the Transformation Program.

To determine the extent to which DHS and USCIS are executing effective program oversight and governance, we reviewed DHS acquisition management policy, analyzed roles and responsibilities, and reviewed the execution of these roles and responsibilities against relevant policy, guidance, and leading practices. Specifically, we identified DHS and USCIS policy for acquisition management, such as Acquisition Management Directive 102-01, to understand the DHS and USCIS program oversight entities and expectations. We also identified criteria for effective governance and oversight based on practices captured in the IT Investment Management Maturity Framework and the Capability Maturity Model Integration for Development. We reviewed and analyzed charters of oversight bodies, meeting minutes, presentation slides and supporting materials, and after action reports of various oversight entities, including the Acquisition Review Board, Executive Steering Committee, the Office of the Chief Information Officer (OCIO), the Office of Program Accountability and Risk Management (PARM), the Component Acquisition Review Board, and the Product Management Team. Based on this information, we assessed the extent to which the Acquisition Review Board and Executive Steering Committee had executed key governance and oversight practices. We assessed a governance body as having implemented a practice if the practice was shown to have been consistently applied (at least 80 percent of the time), partially implemented if the practice was applied but on an inconsistent basis (at least a quarter of the time), and not implemented if the practice was not applied.

To determine the reliability of assessments produced by PARM, we reviewed policy for the comprehensive acquisition status reports to understand the criteria for assessing major acquisition programs. We reviewed the master acquisition oversight list from 2010 through 2014 to confirm that the Transformation Program was included in that list and thus properly subject to such reviews. We analyzed comprehensive acquisition status reports in which the Transformation Program was identified. These reports covered calendar year 2011 and fiscal years 2013 and 2014. We compared these reports against current acquisition planning documentation to gauge the accuracy of the information. For example, we compared the tasks reported on the comprehensive acquisition status reports that were planned for the next 12 months to the program schedule presented to the Executive Steering Committee for that same time period. We interviewed PARM officials to discuss any gaps. We also reviewed a prior GAO report covering PARM oversight and reporting for the comprehensive acquisition status report and the reliability of information in the system supporting the process. In that report, we determined that the data in the system were not sufficiently reliable for our purposes, as also discussed in this report. Moreover, PARM officials acknowledged that there were data accuracy issues with the system.
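The assessment scale described above can be read as a simple threshold rule. The following sketch is illustrative only and is not a tool used in this review; the function name, its inputs, and the treatment of practices applied less than a quarter of the time are assumptions made solely for the example.

# Illustrative sketch only (not part of the methodology or tooling used in this review):
# one way to express the assessment scale described above as a threshold rule.
# The function name, inputs, and the handling of rates below 25 percent are assumptions.
def rate_practice(times_applied: int, opportunities: int) -> str:
    """Classify how consistently a governance body applied a leading practice.

    times_applied -- reviewed instances in which the practice was applied
    opportunities -- reviewed instances in which the practice could have been applied
    """
    if opportunities <= 0:
        raise ValueError("opportunities must be a positive number")
    rate = times_applied / opportunities
    if rate >= 0.80:
        return "implemented"            # consistently applied (at least 80 percent of the time)
    if rate >= 0.25:
        return "partially implemented"  # applied, but on an inconsistent basis
    return "not implemented"            # not applied (treated here as including very rare application)

# Example: a practice applied in 5 of 12 reviewed instances (about 42 percent)
# falls in the "partially implemented" band under these thresholds.
print(rate_practice(5, 12))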
To determine the reliability of assessments produced by OCIO, we reviewed the office’s program assessment scoring and Federal IT Dashboard reporting guides to understand the criteria for assessing major IT programs. We analyzed all reports and underlying scorecards for the Transformation Program. These reports covered June 2013 through June 2014. We compared these reports against current acquisition planning documentation to gauge the accuracy of the information. For example, we compared information contained in the assessment narrative to presentations made before oversight bodies for that same time period. We interviewed OCIO officials to discuss any gaps. We determined that these assessments were unreliable, as discussed in this report.

We conducted this performance audit from August 2014 to May 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

The Transformation Program is to link developed systems, enterprise services, and existing systems and capabilities to enable end-to-end processing. The central project in this portfolio is the USCIS Electronic Immigration System (USCIS ELIS), which is to enable electronic filing and adjudicative case management. USCIS ELIS is to function as a component of a larger architecture consisting of multiple systems, services, and interfaces. In particular, according to USCIS, USCIS ELIS is to interface with existing systems, some of which are to be decommissioned as the program is fully deployed. These components must all function together as a whole to enable USCIS to operate in a person-centric, paperless environment. Table 11 describes systems and services with which USCIS ELIS is planned to interface.

The Transformation Program plans to be fully operational no later than March 2019. In order to be fully operational, the program intends to develop and deploy capabilities incrementally across various releases, beginning with release 5.0. Table 12 describes the planned capabilities to be delivered by release according to the Transformation Program road map, as of December 2014.

In addition to the contact named above, individuals making contributions to this report included Michael Holland (assistant director), Mathew Bader, Kathryn Bernet, Nancy Glover, Martin Skorczynski, Nathan Tranquilli, and Johnathan Wall.

Each year, the Department of Homeland Security's (DHS) USCIS processes millions of applications for persons seeking to study, work, visit, or live in the United States. USCIS has been working since 2005 to transform its outdated systems into an account-based system with electronic adjudication and case management tools that will allow applicants to apply and track the progress of their application online. In 2011, USCIS reported that this effort, called the Transformation Program, was to be completed no later than June 2014 at a cost of up to $2.1 billion. Given the critical importance of the Transformation Program, GAO was asked to review it. This report (1) discusses the program's current status, including the impact of changes made, and (2) assesses the extent to which DHS and USCIS are executing effective program oversight and governance.
To do so, GAO reviewed DHS and USCIS documents, interviewed relevant officials, and compared program documentation and actions to DHS and USCIS policy and guidance, as well as to GAO and industry leading information technology practices.

The U.S. Citizenship and Immigration Services (USCIS) currently expects that its Transformation Program will cost up to $3.1 billion and be fully deployed no later than March 2019, which is an increase of approximately $1 billion and a delay of over 4 years from its initial July 2011 baseline. In March 2012, the program began to significantly change its acquisition strategy to address various technical challenges (see table).

Key Changes to the Transformation Program's Acquisition Strategy
Source: GAO analysis of USCIS documentation. | GAO-15-415

These changes have significantly delayed the program's planned schedule, which in turn has had adverse effects on when USCIS expects to achieve cost savings, operational efficiencies, and other benefits. Among other things, USCIS has yet to achieve the goal of enhancing national security by authenticating users and integrating with external agency databases. While the program's two key governance bodies have taken actions aligned with leading IT management practices, neither has used reliable information to make decisions and inform external reporting (see table). For example, one governing body's vote in March 2013 to migrate to a new architecture was based in part on savings that did not account for the added costs of merging data from the old architecture. The ability of USCIS, DHS, and Congress to effectively monitor program performance may be limited until these bodies more effectively use reliable information to inform their program evaluations.

Extent to Which Program Governance Bodies Met Leading Practices for Oversight (● Implemented, ◐ Partially implemented, ○ Not implemented)
Source: GAO analysis of USCIS documentation. | GAO-15-415

GAO is making recommendations to DHS components and offices to improve governance and oversight of the Transformation Program. DHS agreed with the recommendations, but did not agree with GAO's evaluation of the impact of changes made to the acquisition strategy. GAO maintains its position on the impact of changes, as discussed in the report.
Medicaid programs generally represent an open-ended entitlement under which the federal government is obligated to pay its share of expenditures for covered services provided to eligible individuals under each state’s federally approved Medicaid plan. Under federal Medicaid law, to qualify for Medicaid coverage, individuals generally must fall within certain eligibility categories—such as children, pregnant women, adults in families with dependent children, and those who are aged or disabled—and meet financial eligibility criteria. In addition, since 1986, federal law has required that, as a condition of Medicaid eligibility, individuals declare under penalty of perjury that they are citizens or nationals of the United States or in satisfactory immigration status. Eligibility is determined at the time of application and, for individuals enrolled in the program, on a regular basis through a process referred to as redetermination.

States differ in how they determine eligibility for Medicaid, and many took steps before 2006 to streamline their enrollment processes. While some states conduct all eligibility screening and determinations within the state’s Medicaid agency, other states contract with different state agencies, counties, or other local governmental entities to conduct or assist with eligibility determinations. In some cases, states also utilize community-based organizations to assist with outreach and education in their Medicaid programs. Over the past decade, states have also made efforts to simplify the application process to make Medicaid programs more accessible to eligible families. As part of these efforts, many states implemented mail-in applications and ended requirements for face-to-face interviews. States also began coordinating Medicaid eligibility determinations with other public programs, such as school lunch programs and Temporary Assistance for Needy Families.

Enacted in February 2006, the DRA includes a number of new requirements for state Medicaid programs. Most relevant to this report, as of July 1, 2006, the DRA required states to document citizenship of applicants and beneficiaries as a condition of receiving federal matching funds for their Medicaid expenditures. Under this provision, Medicaid applicants and beneficiaries who are undergoing redeterminations of eligibility must provide “satisfactory documentary evidence” of citizenship. Documenting citizenship is a one-time event completed by individuals either at application or, for those already enrolled, at their first redetermination of eligibility. (Fig. 1 illustrates the sequence of key events regarding the requirement from enactment of the DRA through February 2007.)

The DRA explicitly exempts certain individuals from having to document citizenship, specifically those entitled to or enrolled in Medicare, certain individuals receiving Supplemental Security Income, and any additional populations as designated by the Secretary of HHS. In December 2006, Congress expanded the list of populations that are exempt, adding individuals receiving Social Security disability insurance benefits and children in foster care or children who are receiving adoption or foster care assistance.

In implementing the DRA provision, CMS first provided guidance to states in a June 2006 letter to state Medicaid directors and subsequently published an interim final rule on July 12, 2006, almost 2 weeks after the DRA provision went into effect.
In the interim final rule, CMS expanded upon the list of acceptable documents identified in the DRA and published regulations that grouped the documents by level of reliability, creating a hierarchy in the list and restricting the use of less reliable documents. As required under the DRA, certain documents, such as U.S. passports, are considered sufficient evidence of citizenship. The DRA requires that if an individual does not have one of these primary documents, the individual must produce specific types of documentation establishing citizenship, such as a U.S. birth certificate, as well as documentation establishing personal identity. The regulations published by CMS similarly identify primary, or tier 1, documents to establish citizenship. Under the regulations, if individuals do not have primary evidence, they are expected to produce secondary, or tier 2, evidence of citizenship, such as a military record showing a U.S. place of birth, as well as evidence of identity, such as a state-issued driver’s license. If neither primary nor secondary evidence of citizenship is available, individuals may provide third tier evidence of citizenship to accompany evidence of identity. If primary evidence of citizenship is unavailable, secondary and third tier evidence do not exist or cannot be obtained in a reasonable time period, and the individual was born in the United States, then the individual may provide fourth tier evidence of citizenship, along with evidence of identity. (See table 1 and app. I.)

In addition to prescribing a list of acceptable documents for verifying citizenship, the regulations issued by CMS specify, with one exception, that documents must be originals or copies certified by the issuing agency. The exception is that for a U.S. birth certificate, which is a tier 2 document, states may use a cross match with a state vital statistics agency to document a birth record. The regulations allow states to accept original documentation from individuals in person or through the mail.

Under the regulations issued by CMS, states must provide applicants and Medicaid beneficiaries a “reasonable opportunity” to document their citizenship before denying or terminating Medicaid eligibility. States have flexibility in defining the length of the reasonable opportunity period. The regulations further explain that current Medicaid beneficiaries must remain eligible for benefits during this period but that states may terminate eligibility afterward if they determine that the beneficiary has not made a good faith effort to present documentation. In contrast, applicants are not eligible for Medicaid coverage until they submit the required documentation. The regulations also require states to assist individuals who are physically or mentally incapable of obtaining documentation and do not have a representative to assist them. However, the regulations do not specify criteria for determining who is capable or the level of assistance states should provide.

CMS intends to monitor state implementation of the requirement, including the extent to which states use the most reliable evidence available to establish citizenship based on its hierarchy. States that do not comply with the regulations may face either denied or deferred payment of federal matching funds. In the interim final rule, CMS assessed potential administrative and fiscal effects of the requirement.
For example, CMS estimated that individuals would need, on average, 10 minutes to acquire and provide the state with acceptable documentary evidence and that states would need 5 minutes per individual to verify citizenship and maintain current records. In addition, CMS determined that implementing the rule would have no consequential effect on costs for state, local, or tribal governments or the private sector. Under the rule, states may seek federal Medicaid matching funds for administrative expenditures associated with implementing the requirement at a 50 percent federal matching rate.

States reported that the requirement resulted in barriers to access, such as delayed or lost Medicaid coverage for some eligible individuals. Of the 44 states, 22 reported a decline in Medicaid enrollment due to the requirement. Most that reported a decline in enrollment attributed it to delays in or losses of coverage for individuals who appeared to be eligible citizens, and all states reporting a decline reported that children were affected by the requirement. States that reported a decline in enrollment varied in their views of the effects on access to Medicaid coverage after the first year of implementation. State enrollment policies and whether an individual is an applicant or a beneficiary at redetermination are two factors that may have influenced the effect of the requirement on individuals’ access.

Half the states that reported implementing the requirement noted that the requirement resulted in declines in Medicaid enrollment. Of the 44 states, 22 states reported a decline in enrollment due to implementing the requirement, 12 reported no change in enrollment as a result of the requirement, and 10 reported that they did not know the effect of the requirement on enrollment (see fig. 2). Of the 22 states that reported a decline in enrollment due to the requirement, all responded that children were affected by the requirement, and 21 reported that adults were affected, with 2 specifying pregnant women. A few also responded that the aged and the blind and disabled were affected.

Though states often cited a combination of reasons for the decline in Medicaid enrollment, when asked the primary reason, the majority of states (12 of 22) reported that enrollment declined because applicants who appeared to be eligible citizens experienced delays in receiving coverage. In addition, 5 of the 22 states identified the primary reason for the enrollment decline as current beneficiaries losing coverage, with 4 of the 5 states reporting that those individuals appeared eligible. Two states reported that declines were largely driven by denials of coverage for individuals who did not prove their citizenship. It was unclear from survey results, however, whether these individuals were determined ineligible because they were not citizens or simply because they did not provide the required documents within the time frames allowed by the state. (See fig. 3.) Two of the remaining 3 states reported that the primary reason for the decline was that individuals were discouraged from applying because of the requirement or were not responding to states’ requests for documentation of citizenship.

The extent of the decline in Medicaid enrollment due to the requirement in some individual states or nationally was unknown because not all states track the effect of the requirement on enrollment.
However, 1 state that had begun tracking the effect reported (1) denying an average of 15.6 percent of its monthly applications because of insufficient citizenship documentation in the first 7 months following implementation and (2) terminating eligibility for an average of 3.2 percent of beneficiaries at redetermination per month over the same period and for the same reason. Overall, these denials and terminations represented over 18,000 individuals, who the state generally believed were eligible citizens. While not tracking the effect of the requirement on enrollment explicitly, 10 other states that attributed enrollment declines at least in part to applicants who were delayed or denied coverage also reported increases in monthly denials ranging from 1 to 14 percent after implementing the requirement.

States reporting a decline in Medicaid enrollment differed in their views of the effects of the requirement on enrollment after the first year of implementation. Of the 22 states that reported a decline in enrollment, 17 states responded that they expected the downward enrollment trend to continue. Five of these states indicated that the declines would level off within approximately 1 year of implementation, citing, for example, a drop-off in terminations once their current beneficiaries have successfully documented their citizenship. Ten of the 17 states reported that they were unsure how long enrollment declines would continue or generally expected the trend to continue indefinitely. A few of these states noted concern about the ongoing effect on new applicants who will be unfamiliar with the requirement and may be denied enrollment or discouraged from applying. The remaining 5 of 22 states reported that they did not expect the decline to continue.

Variation in the effects of the requirement on individuals’ access may have resulted from different state enrollment policies. For example, states that reported a previous reliance on mail-in applications and redeterminations were more likely to report a decline in Medicaid enrollment. About two-thirds of the 22 states that reported a decline in enrollment indicated that individuals most commonly applied by mail before the requirement was implemented. In contrast, the majority of the 12 states that reported no change in enrollment reported that individuals most frequently applied in person before the requirement was implemented. In addition, prior to implementation, 6 states had documentation policies in place that were similar to the requirement. Three of these 6 states reported no change in enrollment, with 1 explaining that it was because the state already required (1) proof of birth to verify age and family relationship and (2) proof of identity for adults. Two of the 6 states reported a decline in enrollment caused by the requirement.

Another enrollment policy that may have influenced the requirement’s effect on access to Medicaid coverage was the amount of time states allowed individuals to comply with the requirement—otherwise known as reasonable opportunity periods. In total, 33 states reported the number of days they allowed applicants and beneficiaries to meet the requirement before denying applications or terminating eligibility, with limits generally ranging from 10 days to 1 year. Nine of the 33 states reported allowing applicants 30 days or less, and 4 of these states also reported a decline in enrollment due to the requirement.
A few states reported allowing applicants and beneficiaries an indefinite amount of time to obtain and submit the necessary documentation, provided they were deemed to be making a good faith effort. Some states’ written policies indicated that the reasonable opportunity period could be extended, provided the individual notified the state that he or she was making a good faith effort to obtain the documentation but needed more time. The effect of the requirement on access may have also depended on whether the individual was a new applicant or a beneficiary at redetermination. Applicants who declare themselves citizens are not eligible for Medicaid coverage until they submit the required documentation, while beneficiaries at redetermination maintain their eligibility while collecting documents as long as they are within the reasonable opportunity period allowed by the state or are deemed to be making a good faith effort to comply with the requirement. For example, a pregnant woman at redetermination is eligible to have her 20-week ultrasound covered by Medicaid, even though she has not yet submitted her documentation to the state. In contrast, a pregnant woman who is a new Medicaid applicant would not be determined eligible for coverage until she submits her documentation (see fig. 4). In addition, applicants who were born out of state may have faced additional delays while attempting to obtain documentation from their birth state. For example, one state noted that it could take 6 months or more to obtain a birth certificate from another state. In addition, applicants in some states were given less time than beneficiaries to meet the requirement. Of the 33 states that provided information on their reasonable opportunity periods, 13 states reported that the time allowed for providing documentation was longer for beneficiaries at redetermination than for applicants, with this difference ranging from 24 to 320 days. Five of the 13 states reported allowing 45 days for applicants and 300 days or more for beneficiaries. States may offer more flexibility to Medicaid beneficiaries because, as CMS officials told us, the state cannot terminate benefits for these individuals without documenting that the beneficiary has not made a good faith effort to provide the necessary documentation. Although states reported investing resources to implement the requirement, potential fiscal benefits for the federal government and states are uncertain. To implement the requirement and assist individuals with compliance, all of the 44 states took a number of administrative measures, such as providing additional training for eligibility workers and hiring additional staff, and some also reported committing financial resources. Despite these measures, however, states reported that as a result of the requirement, individuals needed more assistance in person and it was taking the state longer on average to complete applications and redeterminations. According to states, two particular aspects of the requirement increased the burden of implementing it: (1) that documents must be originals and (2) that the list of acceptable documents was complex and did not allow for exceptions. While CMS estimated federal and state savings from the requirement, the estimates may be overstated. All 44 states reported taking a number of administrative measures to implement the requirement and assist individuals with compliance.
Measures most frequently taken by states included training eligibility workers, revising application and redetermination forms, conducting vital statistics data matches, and modifying information technology systems. For example, 1 state reported that in addition to training 18,000 staff on the requirement, it also provided training and information to community agencies, consumer advocates, and providers on how to assist individuals with compliance. Another state established data matches with Indian Health Services to obtain hospital records that met the requirement and built a Web site on which eligibility workers could search the state’s vital records to document citizenship. To supplement the efforts of eligibility workers, 3 states reported having formed special units of staff focused entirely on assisting individuals to meet the requirement, particularly in difficult cases where eligibility workers had been unsuccessful in their attempts to help individuals comply. One of those states reported that it was in the process of expanding the size of its team from 22 workers to 40 workers. Table 2 lists the administrative measures frequently reported by states. Beyond these administrative measures, 40 percent of the 44 states reported having appropriated funds for implementation or planned to do so in future years. Specifically, 12 states reported that funds were appropriated in their state fiscal year 2007 to implement the requirement, which for the 10 states that specified the amount totaled over $28 million, with appropriations ranging from $350,000 to $10 million in individual states. Further, 15 states budgeted funds for implementation costs in state fiscal year 2008. While many states did not specifically appropriate funds toward implementing the requirement in state fiscal year 2007, this may have been due, in part, to the timing of the requirement within the budget year. States may not be budgeting funds for future years for various reasons, including that the burden of the requirement may decrease after the first year of implementation or that the state may face other budget constraints. For example, one state Medicaid office that reported a significant backlog in applications and redeterminations as a result of the requirement requested funds for implementation in state fiscal year 2008 and planned to renew those requests in state fiscal years 2009 and 2010, but was not sure whether the state legislature would appropriate the funds. Despite investments of resources, most states reported that the requirement resulted in the state spending more time completing applications and redeterminations and individuals needing more assistance in person during the process. Of the 44 states, 28 states reported increases in the level of assistance provided to clients in person, and 35 states reported an increase in the amount of time it took the state to complete applications and redeterminations. (See fig. 5.) States reporting no change in the level of in-person assistance or time spent completing applications and redeterminations since implementation were frequently states where individuals primarily applied for and renewed Medicaid enrollment in person prior to the requirement. Of the 35 states that reported increases in enrollment processing time, most reported that the requirement added 5 or more minutes per case to the processing time for applications and redeterminations. 
While only 1 of the 35 states expected an increase of less than 5 minutes per case, 9 states estimated an additional 5 to 15 minutes per case, and 16 states expected the requirement to add over 15 minutes of processing time per application or redetermination, well above the 5 minutes estimated by CMS in the interim final rule. One of these 16 states reported processing an average of over 150,000 applications per month in the 8 months following implementation. Assuming a minimum increase in processing time of 16 minutes per application, the requirement would have added at least 40,000 hours of staff time per month in that state. Other states emphasized that the effect of the requirement on workload goes beyond the amount of time necessary to complete applications and redeterminations. For example, one state reported a 60 percent increase in phone calls (from 24,000 to 39,000 per month), a nearly tenfold increase in voice messages (from 1,200 to 11,000 per month), and an 11 percent increase in the amount of time spent on each call. Though the requirement represented a change in enrollment procedures for most states, states reported that certain aspects of the requirement specified under federal regulations by CMS increased their implementation burden. More than 80 percent of states (36 of 44) reported facing administrative challenges in implementing the requirement, and many attributed the challenges to two specific aspects of the requirement outlined in the regulations, namely (1) that documents must be originals and (2) that the list of acceptable documentation was complex and did not allow for exceptions. In fact, nearly all states (42 of the 44) reported that having to provide original documentation posed a barrier to eligible citizens’ meeting the requirement. Further, many states reported that mandating originals affected state workload primarily because individuals did not feel comfortable mailing the documents to the state and instead began presenting them in person. With regard to the list of acceptable documents, states reported that the list was complex, often confusing both individuals and eligibility workers, and left states with no discretion to allow exceptions. For example, 1 state that documented citizenship for Medicaid prior to enactment of the DRA noted that when acceptable documentation was not available, the state made an assessment based on a preponderance of evidence, which included certain tribal documents excluded from CMS’s list. Thirty-four states reported that an individual’s inability to provide documents other than those defined under federal regulations by CMS created a barrier to individuals’ compliance with the requirement. Table 3 presents some of the challenges states reported in implementing the requirement. CMS officials said that, when developing its interim final rule, CMS considered the specifications of the DRA and other existing federal policies on documenting citizenship, including policies of SSA. CMS officials told us that after meeting the specifications of the DRA, the agency modeled its regulations after the policy established by SSA for documenting citizenship when individuals apply for a Social Security number. Specifically, SSA’s policy mandates that documents be originals and includes a hierarchy of documents with restrictions on the use of less reliable documents. Also, the list of acceptable documents identified by CMS mirrors SSA’s list with only a few exceptions.
In contrast, however, SSA’s policy allows more flexibility in special cases. For example, when a U.S.-born applicant for a Social Security number does not have any of the documents from the list, SSA’s policy allows staff to work with their supervisors to determine what would be acceptable in those cases. CMS officials told us that CMS’s list of acceptable documents represents a significant expansion of what was included in the DRA provision and is exhaustive, and that they were not aware of any case in which an individual was unable to provide any document from the list. To assist states and individuals in complying with the documentation requirement, CMS included some important tools in the regulations. For example, the regulations allow states to use data matches with state vital statistics agencies to verify citizenship and with other government agencies to verify identity, which could alleviate the need for individuals to submit original documents. While many states reported conducting data matches on behalf of individuals, several also expressed concerns that such matches required additional resources and could not be done for individuals born out of state. One state reported conducting 60,000 on-line inquiries per month into the state’s vital records system after implementing the requirement. In one area of the state, however, nearly all children were born across state lines and therefore the state could not electronically verify their citizenship. The state reported that verifying citizenship for children in that portion of the state was especially difficult. While CMS officials confirmed that there is no nationwide database for verifying citizenship, they also told us that there are currently initiatives under way in more than one state to share vital statistics with other states through data matches. Though CMS expected some savings to result from the requirement in fiscal year 2008, the estimate did not account for the cost to states and the federal government to implement the requirement. CMS’s Office of the Actuary estimated that the requirement would result in $50 million in savings for the federal government and $40 million in savings for states in fiscal year 2008, with all savings resulting from terminations of eligibility for individuals who were not citizens. Specifically, CMS assumed that 50,000 noncitizen beneficiaries (which represent less than 1 percent of Medicaid enrollment nationwide) would prove ineligible for Medicaid benefits and be terminated from the program. Though CMS authorized states to claim federal Medicaid matching funds for administrative expenditures related to implementing the requirement, and 15 states reported budgeting funds for 2008 in addition to the numerous other measures being taken by states, CMS’s estimate of savings did not account for any increase in administrative expenditures by states or the federal government. CMS expected, however, that states would experience higher administrative costs during the first year of implementation, with these costs decreasing in later years. In addition to not accounting for the cost of the requirement, survey results indicated that CMS may have overestimated the potential savings from the requirement because the problem the requirement was intended to address, that is, ineligible noncitizens receiving Medicaid benefits, may be less prevalent than expected.
When asked about potential savings from the requirement, only 5 of the 44 states reported expecting the requirement to result in a decrease in their expenditures for Medicaid benefits in state fiscal year 2008, due in large part to delays in or losses of coverage for individuals who appeared to be eligible citizens. Only 1 of the 5 states expecting savings reported that enrollment declines resulted in part from denials or terminations of Medicaid coverage for individuals who were determined ineligible because of their citizenship status. The remaining 39 states expected no savings (20 states) or reported that it was too early to know (19 states). Several of the 20 states that expected no savings in 2008 reported that though some individuals had experienced delays in coverage, those individuals were eligible citizens and would eventually provide the required documentation and receive coverage. In addition, 2 of these 20 states noted that they were not inappropriately financing Medicaid benefits for noncitizens in the past and so expected no savings. Of the 19 states that were unsure how the requirement would affect expenditures, 2 were still tracking the effects of the requirement. Another of these 19 states—a state that reported a decline in enrollment as a result of implementing the requirement—noted that it was difficult to determine whether the requirement would result in lower costs or whether costs would increase, as the state expected individuals would wait to enroll until they were ill or injured, rather than receive preventive care that is less costly to provide. We provided a draft of this report to CMS for comment and received a written response, which is included in this report as appendix II. CMS also provided technical corrections, which we incorporated as appropriate. CMS commented that it generally did not disagree with the approach of our study, but raised several concerns regarding the sufficiency of the underlying data for, and certain aspects of, our findings. In particular, CMS characterized the report’s conclusions as overstating the effect of the requirement on enrollment, and stated it had concerns about the fact that the states did not submit data to substantiate their responses to the survey questions on which we based our findings. CMS also commented on our findings related to the challenges posed by the requirement for states and individuals and the cost to states of implementing the requirement. Specific concerns raised and comments made by CMS, and our evaluation, follow. Regarding the sufficiency of underlying data for certain findings, CMS commented that our survey asked states about the effects of the requirement on enrollment, although states did not provide data to validate their responses. In addition, CMS expressed concerns that the draft report appeared to draw broad conclusions about the effect of the requirement from data provided by one state. The purpose of our work was to report on the initial effects of the requirement. Absent national CMS data on the effects and because state Medicaid offices were largely responsible for implementing the requirement, we determined they were the best source for this information. Though not all states could quantify the effect of the requirement on enrollment, 22 states reported that the requirement resulted in decreases in enrollment, 12 reported that the requirement had no effect on enrollment, and 10 reported not knowing the effect of the requirement on enrollment.
We disagree with CMS’s assertion that the draft report drew broad conclusions about the effect of the requirement on enrollment from one state’s data. The report clearly indicates that these data are from a single state and further notes that the extent of the decline in Medicaid enrollment due to the requirement in some individual states and nationally is unknown. CMS raised concerns about one survey question, which asked states that reported enrollment declines due to the requirement to identify the reasons for those declines, and about the level of information provided regarding the degree to which the requirement deterred nonqualified aliens from applying for Medicaid. With regard to the first concern, in responding to our survey, states could check an option that said enrollment declines were caused by the delays in or losses of coverage for individuals who appeared eligible. CMS objected to the use of “appeared eligible,” noting that the term is vague and subjective and that it tends to lead the respondent to certain conclusions. However, as we explain in the report, asking states to assess the citizenship status of individuals is consistent with most states’ experience in making such determinations under the self-attestation policies that were in effect prior to the DRA provision. With regard to the second concern, we agree with CMS that our report provides limited information about the extent to which the requirement is deterring nonqualified aliens from applying for Medicaid. However, the report does discuss whether CMS had evidence that such individuals were falsely declaring citizenship when applying for Medicaid. Specifically, our report notes that CMS, in its comments on the 2005 OIG report on state self-attestation policies, acknowledged that the OIG did not find problems regarding false allegations of citizenship and that CMS was not aware of any such problems. CMS commented that the draft report overstated the effect of the requirement on enrollment because the majority of states reporting enrollment declines attributed the declines primarily to delays in receiving coverage rather than denials of coverage. Our report notes the implications for individuals of such delays in coverage. The report points out, for example, that a pregnant woman who is a citizen may be forced to forgo needed prenatal care while her coverage is delayed by efforts to meet the requirement. CMS also noted that its goal in implementing the requirement was to minimize the incidence of delays in or denials of eligibility due to the requirement. In response to our findings that two aspects of the requirement specified under regulations issued by CMS—namely that documents be originals and that the list of acceptable documents is complex and does not allow for exceptions—presented challenges to states and individuals, CMS commented that the agency has attempted to provide as much flexibility as possible and that other federal agencies require original documentation. Nonetheless, our survey results clearly indicated that these two aspects of the requirement are viewed by most states as posing barriers to access. In particular, 42 of 44 states reported that having to provide original documentation posed a barrier to eligible citizens’ meeting the requirement, and 34 states reported that an individual’s inability to provide documents other than those defined under federal regulations by CMS created a barrier to compliance.
Further, while the report explains that CMS modeled its regulations after SSA’s policy for documenting citizenship when individuals apply for a Social Security number, the report also notes that, unlike CMS, SSA provides for flexibility in special cases. CMS also commented on our finding that CMS’s estimates of potential savings from the requirement in fiscal year 2008 did not account for administrative costs. Specifically, CMS agreed that its estimate did not account for administrative costs incurred by states to implement the requirement, but stated that any such costs would decrease after the first year of implementation. Our report describes that some states reported not having budgeted funds for the requirement in future years and explains that one reason for this may be that the burden of the requirement may decrease after the first year of implementation. However, the ongoing costs of assisting applicants in complying with the requirement may continue to be significant for some states, especially those states that had to substantially modify their enrollment procedures. For example, as noted in the report, due to the requirement, one state faced an additional 40,000 hours of staff time needed per month to process applications. CMS commented that it was not surprised that states reported facing challenges, given that the report’s findings were based on states’ experiences after less than 1 year of implementing the requirement. While agreeing that the requirement posed challenges for individuals and states, CMS asserted that these initial challenges have diminished and will continue to do so. Based on our survey responses, states largely do not share CMS’s optimism in this regard. In addition to describing the initial effects of the requirement, which in states’ perspectives have included enrollment declines and increased administrative burdens, our report includes additional indicators that the effects states experienced in the first year will continue at least to some extent in the future. For example, 17 of the 22 states that reported a decline in enrollment due to the requirement reported that they expected the downward trend in enrollment to continue, with some expecting the decline to continue indefinitely. In addition, 15 states reported already having budgeted funds for the requirement in state fiscal year 2008. CMS also emphasized actions it has taken to implement the requirement, such as issuing a letter to state Medicaid directors, publishing an interim final rule, and working on a final rule to be issued shortly. Our report describes the steps taken by CMS to implement the requirement. With regard to CMS’s work on a final rule, we modified our report to indicate CMS’s plans to issue such a rule shortly. As arranged with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the Secretary of HHS, the Administrator of the Centers for Medicare & Medicaid Services, and other interested parties. We will also make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or cosgrovej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
GAO staff who made major contributions to this report are listed in appendix III. Federal regulations published by the Centers for Medicare & Medicaid Services (CMS) identify primary, or tier 1, documents that are considered sufficient to establish citizenship. Under the regulations, if individuals do not have primary evidence, they are expected to produce secondary, or tier 2, evidence of citizenship as well as evidence of identity. If neither primary nor secondary evidence of citizenship is available, individuals may provide third tier evidence of citizenship with accompanying evidence of identity. If primary evidence of citizenship is unavailable, secondary and third tier evidence do not exist or cannot be obtained in a reasonable time period, and the individual was born in the United States, then the individual may provide fourth tier evidence of citizenship, along with evidence of identity. See table 4 for a list of acceptable documents to prove citizenship and table 5 for acceptable identity documents. Kathryn Allen, Director, led the engagement through its initial phases. In addition, Susan Anthony, Assistant Director; Susan Barnidge; Laura Brogan; Elizabeth T. Morrison; and Hemi Tewarson made key contributions to this report. | The Deficit Reduction Act of 2005 (DRA) included a provision that requires states to obtain documentary evidence of U.S. citizenship or nationality when determining eligibility of Medicaid applicants and current beneficiaries; self-attestation of citizenship and nationality is no longer acceptable. The Centers for Medicare & Medicaid Services (CMS) issued regulations states must follow in obtaining this documentation. Interested parties have raised concerns that efforts to comply with the requirement will cause eligible citizens to lose access to Medicaid coverage and will be costly for states to implement. GAO was asked to examine how the requirement has affected individuals' access to Medicaid benefits and assess the administrative and fiscal effects of implementing the requirement. To do this work, GAO surveyed state Medicaid offices in the 50 states and the District of Columbia about their perspectives on access issues and the administrative and fiscal effects of the requirement. GAO obtained complete responses from 44 states representing 71 percent of national Medicaid enrollment in fiscal year 2004. GAO also reviewed federal laws, regulations, and CMS guidance. States reported that the citizenship documentation requirement resulted in barriers to access to Medicaid for some eligible citizens. Twenty-two of the 44 states reported declines in Medicaid enrollment due to the requirement, and a majority of these states attributed the declines to delays in or losses of Medicaid coverage for individuals who appeared to be eligible citizens. Of the remaining states, 12 reported that the requirement had no effect and 10 reported they did not know the requirement's effect on enrollment. Not all of the 22 states reporting declines could quantify enrollment declines due specifically to the requirement, but a state that had begun tracking the effect identified 18,000 individuals in the 7 months after implementation whose applications were denied or coverage was terminated for inability to provide the necessary documentation, though the state believed most of them to be eligible citizens. Further, states reporting a decline in enrollment varied in their impressions about the requirement's effect on enrollment after the first year of implementation. 
States’ enrollment policies and whether an individual was an applicant or a beneficiary may have influenced the requirement’s effect on access to Medicaid. For example, states that relied primarily on mail-in applications before the requirement were more likely to report declines in enrollment than states where individuals usually applied in person. In addition, the requirement may have more adversely affected applicants than beneficiaries because applicants were given less time to comply in some states and were not eligible for Medicaid benefits until they documented their citizenship. Although states reported investing resources to implement the requirement, potential fiscal benefits for the federal government and states are uncertain. All 44 states reported taking administrative measures to implement the requirement and assist individuals with compliance. In addition, 10 states reported that a total of $28 million was appropriated in state fiscal year 2007, and 15 states budgeted funds for implementation costs in state fiscal year 2008. Despite these measures, states reported that the requirement has increased the level of assistance needed by individuals and the amount of time spent by states during the enrollment process. States specified two aspects of the requirement as increasing the burden for them and for individuals: that documents had to be originals and that the list of acceptable documents was complex and did not allow for exceptions. Further, although CMS estimated the requirement would result in savings for the federal government and states of $90 million for fiscal year 2008, states’ responses indicated that this estimate may be overstated for two reasons. Specifically, CMS did not account for the increased administrative expenditures reported by states, and the agency’s estimated savings from ineligible noncitizens no longer receiving benefits may be less than anticipated. In commenting on a draft of the report, CMS raised concerns about the conclusions drawn from the survey responses as to the requirement’s effect on access, mainly that states did not submit data to support their responses.
As you are aware, technology plays an important role in helping the federal government ensure the security of its many physical and information assets. Today, federal employees are issued a wide variety of identification (ID) cards that are used to access federal buildings and facilities, sometimes solely on the basis of visual inspection by security personnel. These cards often cannot be used for other important identification purposes—such as gaining access to an agency’s computer systems—and many can be easily forged or stolen and altered to permit access by unauthorized individuals. In general, the ease with which traditional ID cards—including credit cards—can be forged has contributed to an increase in identity theft and related security and financial problems for both individuals and organizations. The unique advantage of smart cards—as opposed to cards with simpler technology, such as magnetic stripes or bar codes—is that smart cards can exchange data with other systems and process information rather than simply serving as static data repositories. Smart cards can readily be tailored to meet the varying needs of federal agencies or to accommodate previously installed systems. For example, other media, such as magnetic stripes, bar codes, and optical memory (laser-readable) stripes can be added to smart cards to support interactions with existing systems and services or to provide additional storage capacity. An agency that has been using magnetic stripe cards for access to certain facilities could migrate to smart cards that would work with both its existing magnetic stripe readers as well as new smart card readers. Of course, the functions provided by the card’s magnetic stripe, which cannot process transactions, would be much more limited than those supported by the card’s integrated circuit chip. Optical memory stripes (which are similar to the technology used in commercial compact discs) can be used to equip a card with a large memory capacity for storing more extensive data—such as color photos, multiple fingerprint images, or other digitized images—and for making that card and its stored data very difficult to counterfeit. A typical example of a smart card is shown in figure 1. Smart cards can be used to significantly enhance the security of an organization’s computer systems by tightening controls over user access. A user wishing to log on to a computer system or network with controlled access must “prove” his or her identity to the system—a process called authentication. Many systems authenticate users by requiring them to enter secret passwords, which provide only modest security because the passwords can be easily compromised. Substantially better user authentication can be achieved by supplementing passwords with smart cards. Even stronger authentication can be achieved when smart cards are used in conjunction with biometrics. Smart cards are one type of media that can be configured to store biometric information—such as fingerprints or iris scans—in electronic records that can be retrieved and compared with an individual’s live biometric scan to verify that person’s identity in a way that is difficult to circumvent. A system requiring users to present a smart card, enter a password, and verify a biometric scan provides what security experts call “three-factor” authentication, with the three factors being (1) something you possess (the smart card), (2) something you know (the password), and (3) something you are (the biometric). 
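To make the three-factor model described above more concrete, the sketch below shows the kind of check an access-control system might perform when a user presents a card, a password, and a biometric scan. This is only an illustrative sketch, not a description of any agency's actual system; the function and parameter names are hypothetical, and the matching logic is deliberately simplified (real deployments, for example, prove card possession cryptographically and match biometrics with specialized algorithms rather than exact comparisons).

```python
import hashlib
import hmac

def verify_three_factors(card_credential: bytes, expected_credential: bytes,
                         password: str, stored_password_hash: bytes, salt: bytes,
                         live_biometric: bytes, enrolled_biometric: bytes) -> bool:
    """Illustrative three-factor check: something you have, know, and are."""
    # Factor 1: something you possess -- the credential read from the smart card
    # must match the credential on file for this user.
    has_card = hmac.compare_digest(card_credential, expected_credential)

    # Factor 2: something you know -- the password, compared as a salted hash
    # rather than in plain text.
    password_hash = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    knows_password = hmac.compare_digest(password_hash, stored_password_hash)

    # Factor 3: something you are -- the live biometric scan compared with the
    # template enrolled for this user (simplified to an exact comparison here).
    is_enrolled_person = hmac.compare_digest(live_biometric, enrolled_biometric)

    # Access is granted only when all three factors check out.
    return has_card and knows_password and is_enrolled_person
```

The point of the sketch is that failing any single check denies access; an attacker would have to defeat the card, the password, and the biometric at the same time, which is what gives three-factor authentication its relative strength.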
Systems with three-factor authentication are considered to provide a relatively high level of security. Additionally, smart cards can be used in conjunction with public key infrastructure (PKI) technology to better secure electronic messages and transactions. A PKI is a system of hardware, software, policies, and people that, when fully and properly implemented, can provide a suite of information security assurances that are important in protecting sensitive communications and transactions. A properly implemented and maintained PKI can offer several important security services, including assurance that (1) the parties to an electronic transaction are really who they claim to be, (2) the information has not been altered or shared with any unauthorized entity, and (3) the parties will not be able to deny taking part in the transaction. Security experts generally agree that PKI technology is most effective when deployed in conjunction with smart cards. Smart cards are grouped into two major classes: contact cards and “contactless” cards. Contact cards have gold-plated contacts that connect directly with the read/write heads of a smart card reader when the card is inserted into the device. Contactless cards contain an embedded antenna and work when the card is waved within the magnetic field of a card reader or terminal. Contactless cards are better suited for environments where quick interaction between the card and reader is required, such as high-volume physical access. For example, the Washington Metropolitan Area Transit Authority has deployed an automated fare collection system using contactless smart cards as a way of speeding patrons’ access to the Washington, D.C. subway system. Smart cards can be configured to include both contact and contactless capabilities; however, two separate interfaces are needed because standards for the technologies are very different. Since the 1990s, the federal government has considered the use of smart card technology as one option for electronically improving security over buildings and computer systems. In 1996, OMB tasked GSA with taking the lead in facilitating a coordinated interagency management approach for the adoption of multi-application smart cards across government. In this regard, GSA has taken important steps to promote federal smart card use. For example, since 1998, it has worked with several other federal agencies to promote broad adoption of smart cards for authentication throughout the federal government. Specifically, GSA worked with the Department of the Navy to establish a technology demonstration center to showcase smart card technology and applications and it established a smart card project managers’ group and Government Smart Card Interagency Advisory Board. For many federal agencies, GSA’s chief contribution toward promoting smart card adoption was its effort in 2000 to develop a standard contracting vehicle for use by federal agencies in procuring commercial smart card products from vendors. Under the terms of the Smart Access Common ID Card contract, GSA, NIST, and the contract’s awardees worked together to develop smart card interoperability guidelines—including an architectural model, interface definitions, and standard data elements—that were intended to guarantee that all the products made available through the contract would be capable of working together. Further, OMB has begun taking action to develop a framework of policy guidance for governmentwide smart card adoption. 
Specifically, on July 3, 2003, OMB’s Administrator for E-Government and Information Technology issued a memorandum detailing specific actions the administration was taking to streamline authentication and identity management in the federal government. This included establishing the Federal Identity and Credentialing Committee to collect agency input on policy and requirements and coordinate the development of a comprehensive policy for credentialing federal employees. Since 1998, multiple smart card projects have been launched in the federal government addressing an array of capabilities and providing many tangible and intangible benefits, including enhancing security over buildings and other facilities, safeguarding computer systems and data, and conducting financial and nonfinancial transactions more accurately and efficiently. As of June 2004, 15 federal agencies reported 34 ongoing smart card projects. Initially, many of the smart card initiatives that were undertaken were small-scale demonstration projects that involved as few as 100 cardholders and were intended to show the value of using smart cards for identification or to store cash value or other personal information. However, federal efforts toward the adoption of smart cards have continued to evolve as agencies have gained an increased understanding of the technology and its potential uses and benefits. Our most recent study of federal agencies’ investments in smart card technology, which we reported on last month, noted that agencies are increasingly moving away from many of their earlier efforts—which frequently involved small-scale, limited-duration pilot projects—toward much larger, integrated, agencywide initiatives aimed at providing smart cards as identity credentials that agency employees can use to gain both physical access to facilities, such as buildings, and logical access to computer systems and networks. In some cases, additional functions, such as asset management and stored value, are also being included. To date, the largest smart card program to be implemented in the federal government is the Common Access Card program of the Department of Defense (DOD), which is intended to be used for identification by about 3.5 million military and civilian personnel. Results from this project have indicated that smart cards can offer many useful benefits, such as significantly reducing the processing time required for deploying military personnel, tracking immunization records of dependent children, and verifying the identity of individuals accessing buildings and computer systems. Another large agencywide initiative is the Department of Homeland Security’s (DHS) Identification and Credentialing project, an effort in which the agency plans to issue 250,000 cards to employees and contractors using PKI technology for logical access and proximity chips for physical access. Authentication is to rely on biometrics with a personal identification number as a backup. Further, GSA’s Nationwide Identification is a recently initiated agencywide smart card project in which the agency plans to issue a single standard credential card for identification, building access, property management, and other applications to 61,000 federal employees, contractors, and tenant agencies. While smart card technology offers benefits, launching smart card projects—whether large or small—has proved challenging to federal agencies, as have efforts to sustain successful adoption of the technology across government.
Our prior work noted a number of management and technical challenges that agency managers have faced. These challenges include: ● Sustaining executive-level commitment. Maintaining executive-level commitment is essential to implementing smart card technology effectively. Without this support and clear direction, large-scale smart card initiatives may encounter organizational resistance and cost concerns that lead to delays and cancellations. DOD officials stated that having a formal mandate from the Deputy Secretary of Defense to implement a uniform, common access identification card across the department was essential to getting a project as large as the Common Access Card initiative launched and funded. ● Recognizing resource requirements. Smart card implementation costs can be high, particularly if significant infrastructure modifications are required, or other technologies, such as biometrics and PKI, are being implemented in tandem with the cards. Key implementation activities that can be costly include managing contractors and card suppliers, developing systems and interfaces with existing personnel or credentialing systems, installing equipment and systems to distribute the cards, and training personnel to issue and use smart cards. As a result, agency officials have found that obtaining adequate resources is critical to implementing a major government smart card system. ● Integrating physical and logical security practices across organizations. The ability of smart card systems to address both physical and logical (information systems) security means that unprecedented levels of cooperation may be required among internal organizations that often had not previously collaborated, particularly physical security organizations and information technology organizations. In addition to the gap between physical and logical security organizations, the sheer number of separate and incompatible existing systems also adds to the challenge of establishing an integrated agencywide smart card system. ● Achieving interoperability among smart card systems. Interoperability is a key consideration in smart card deployment. The value of a smart card is greatly enhanced if it can be used with multiple systems at different agencies, and GSA has reported that virtually all agencies agree that interoperability at some level is critical to widespread adoption of smart cards across the government. However, achieving interoperability has been difficult because smart card products and systems developed in the past have generally been incompatible in all but very rudimentary ways. With varying products available from many vendors, there has been no obvious choice for an interoperability standard. GSA considered the achievement of interoperability across card systems to be one of its main priorities in developing its Smart Access Common ID Card contract that I discussed earlier. ● Maintaining security of smart card systems and privacy of personal information. Although concerns about security are a key driver for the adoption of smart card technology in the federal government, the security of smart card systems themselves is not foolproof and must be addressed when agencies plan the implementation of a smart card system. Although smart card systems are generally much more difficult to attack than traditional ID cards and password-protected systems, they are not invulnerable.
In order to obtain the improved security services that smart cards offer, care must be taken to ensure that the cards and their supporting systems do not pose unacceptable security risks. In addition, protecting the privacy of personal information is a growing concern and must be addressed with regard to the personal information contained on the smart cards. Once in place, smart card-based systems designed simply to control access to facilities and systems could also be used to track the day-to-day activities of individuals, thus potentially compromising the individual’s privacy. Further, smart card-based systems could be used to aggregate sensitive information about individuals for purposes other than those prompting the initial collection of the information, which could compromise privacy. The Privacy Act of 1974 requires the federal government to restrict the disclosure of personally identifiable records maintained by federal agencies while permitting individuals access to their own records and the right to seek amendment of agency records that are inaccurate, irrelevant, untimely, or incomplete. Further, the E-Government Act of 2002 requires agencies to conduct privacy impact assessments before developing or procuring information technology that collects, maintains, or disseminates personally identifiable information. Accordingly, agency officials need to assess and plan for appropriate privacy measures when implementing smart card-based systems and ensure that privacy impact assessments are conducted when required. In considering these challenges, it is important to note that, while they served to slow the adoption of smart card technology in past years, they may be less difficult in the future because of increased management concerns about securing federal facilities and information systems and because technical advances have improved the capabilities and reduced the cost of smart card systems. Nonetheless, sustained diligence in responding to such challenges is essential in light of the growing emphasis on the use of smart card technology. Recognizing the critical role that GSA, OMB, and NIST play in furthering the successful adoption of smart card technology, we made recommendations in January 2003 to these agencies that were aimed at advancing the adoption of smart card technology governmentwide. 
Specifically, we recommended that ● the Director, OMB, issue governmentwide policy guidance regarding adoption of smart cards for secure access to physical and logical assets; ● the Director, NIST, continue to improve and update the government smart card interoperability specification by addressing governmentwide standards for additional technologies—such as contactless cards, biometrics, and optical stripe media—as well as integration with PKI; and ● the Administrator, GSA, improve the effectiveness of GSA’s promotion of smart card technologies within the federal government by (1) developing an internal implementation strategy with specific goals and milestones to ensure that GSA’s internal organizations support and implement smart card systems consistently; (2) updating its governmentwide implementation strategy and administrative guidance on implementing smart card systems to address current security priorities; (3) establishing guidelines for federal building security that address the role of smart card technology; and (4) developing a process for conducting ongoing evaluations of the implementation of smart card-based systems by federal agencies to ensure that lessons learned and best practices are shared across government. As of last month, all three agencies had taken actions to address the recommendations made to them. Specifically, in response to our recommendations, OMB issued its July 3, 2003, memorandum to major departments and agencies directing them to coordinate and consolidate investments related to authentication and identity management, including the implementation of smart card technology. NIST responded by improving and updating the government smart card interoperability specification to address additional technologies, including contactless cards and biometrics. GSA responded to our recommendations by updating its “Smart Card Policy and Administrative Guidance” to better address security priorities, including minimum-security standards for federal facilities, computer systems, and data across the government. However, three of our four recommendations to GSA remained outstanding. GSA officials stated that they were working to address the recommendations to develop an internal GSA smart card implementation strategy, develop a process for conducting evaluations of smart card implementations, and share lessons learned and best practices across government. The responsibility for one recommendation—establishing guidelines for federal building security that address the role of smart card technology—was transferred to DHS. Recent federal direction contained in Homeland Security Presidential Directive 12 could further facilitate smart card adoption across the federal government. This directive, signed in late August, seeks to establish a common identification standard for federal employees and contractors to protect against a litany of threats, including terrorism and identity theft. The directive instructs the Departments of Commerce, State, Defense, Justice, and Homeland Security to work with OMB and the Office of Science and Technology Policy to institute the new standards and policies. With federal agencies’ increasing pursuit of smart cards, directives from central management such as this one could be an important vehicle for ensuring that more comprehensive guidance is available to support and sustain the broader implementation of agencywide smart card initiatives. Mr. 
Chairman, beyond the governmentwide assessment presented, you requested that we specifically address actions of the Department of Veterans Affairs in adopting smart card technology. Our report last month discussing agencies’ investments in smart card technology identified VA as being among 9 federal agencies that currently have large-scale, agencywide smart card projects underway. VA’s effort—the Authentication and Authorization Infrastructure Project (AAIP)—was begun in December 2002 as an attempt to provide agencywide capability to authenticate users with certainty and grant them access to information systems necessary to perform business functions. The initiative, currently in a limited deployment phase, involves three core components: (1) a One-VA ID smart card; (2) an enterprise PKI solution; and (3) an identity and access management infrastructure that addresses internal and external access requirements for VA users. VA currently estimates that, between fiscal years 2004 and 2009, this initiative will cost about $162 million. The project is currently focusing on development of the One-VA ID card, which is to employ a combination of smart card and PKI technologies to store a user’s credentials digitally. According to project documentation, the One-VA ID card is intended to replace the several hundred methods for issuing identification cards that are currently in place across the department, and improve physical and information security by strengthening the ability to authenticate users and grant access to information systems that employees and contractors rely on to perform VA’s business functions. As an official source of government identification credentialing, the card is expected to be compliant with Homeland Security Presidential Directive 12. VA is using a phased approach to develop and implement the One-VA ID card. This approach involves prototype testing followed by limited production testing at the department’s facilities in the United States, and by 2006, the issuance of 500,000 cards with PKI credentials to its personnel. VA reported that it has already begun an initial limited deployment of the cards to about 15,000 to 25,000 users. The AAIP project manager anticipated that the results from this limited deployment would provide lessons learned for ensuring successful implementation, support, and training once full deployment of the One-VA ID card begins in early 2005. Further, the department has indicated that it plans to use information gathered from the limited deployment to create agencywide policies and procedures for the full deployment of smart cards across all VA business units. As of late September, VA reported that fiscal year 2004 spending on the One-VA ID card totaled approximately $27 million for activities such as the acquisition of smart cards, card readers, and hardware support. We have not yet had an opportunity to fully assess the outcomes of the department’s One-VA ID card initiative or its actions to develop the enterprise PKI solution and identity and access management infrastructure that are also key components of this initiative. However, VA officials believe that the department is sufficiently positioned to successfully implement the smart card technology on an agencywide level. The AAIP project manager noted the chief information officer’s involvement, as chair of the department’s Enterprise Information Board, in monitoring progress of the project.
Further, as a participant in a number of governmentwide initiatives supporting the adoption of smart card technology, VA should be effectively positioned to carry out such an undertaking. Among its collaborations, VA is one of five agencies using GSA’s Smart Access Common ID Card contracting vehicle and plans to purchase smart cards for AAIP through the GSA contract. It is also a member of the Federal Identity Credentialing Committee, which provides guidance to federal agencies on the use of smart card technology that supports interoperable identity and authentication to enable an individual’s identity to be verified within an agency and across the federal enterprise for both physical and logical networks. Collectively, the department’s experiences and collaborations should lend strength to its own and overall federal efforts toward making smart cards a key means of securing critical information and assets. In summary, the federal government is continuing to make progress in promoting and implementing smart card technology, which offers clear benefits for enhancing security over access to buildings and other facilities, as well as computer systems and networks. The adoption of such technology is continuing to evolve, with a number of large-scale, agencywide projects having been undertaken by federal agencies over the past several years. As agencies have sought greater use of smart cards, they have had to contend with a number of significant management and technical challenges, including sustaining executive-level commitment, recognizing resource requirements, integrating physical and logical security practices, achieving interoperability, and maintaining system security and privacy of personal information. These challenges become less difficult to address, however, as managers place greater emphasis on enhancing the security of federal facilities and information systems and as technical advances improve the capabilities and reduce the costs of smart card systems. The challenges are also tempered as increased federal guidance brings direction to agencies’ handling of their smart card initiatives. VA is among a number of agencies currently undertaking large-scale, agencywide projects to implement smart cards. While its project is still under development, VA has gained experience as a participant in governmentwide initiatives to further smart card adoption, and that experience should facilitate the increasing movement toward the use of smart cards as an essential means of securing critical information and assets. Mr. Chairman, this concludes my statement. I would be pleased to respond to any questions that you or other members of the subcommittee may have. If you should have any questions about this testimony, please contact me at (202) 512-6240 or via e-mail at koontzl@gao.gov. Other major contributors to this testimony included Michael A. Alexander, John de Ferrari, Nancy Glover, Steven Law, Valerie C. Melvin, J. Michael Resser, and Eric L. Trout. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
| The federal government is interested in the use of smart cards--credit card-like devices that use integrated circuit chips to store and process data--for improving the security of its many physical and information assets. Besides providing better authentication of the identities of people accessing buildings and computer systems, smart cards offer a number of other potential benefits and uses, such as creating electronic passenger lists for deploying military personnel and tracking immunization and other medical records. Over the past 2 years, GAO has studied and reported on the uses of smart cards across the federal government. Congress requested that GAO testify on federal agencies' efforts in adopting smart card technology--based on the results of this prior work--and on the specific actions that the Department of Veterans Affairs is taking to implement smart card technology. As the unique properties and capabilities of smart cards have become more apparent, federal agencies, including the Office of Management and Budget, the National Institute of Standards and Technology, and the General Services Administration, have acted to advance the governmentwide adoption of smart card technology. In turn, numerous smart card projects that offer a variety of uses and benefits have been launched. As of June 2004, 15 federal agencies reported 34 ongoing smart card projects. Further, agencies' actions toward the adoption of smart cards continue to evolve as understanding of the technology grows. Agencies are moving away from the small-scale, limited-duration demonstration projects of past years (involving as few as 100 cardholders and aiming mostly to show the value of using smart cards for identification) to larger, more integrated, agencywide initiatives involving many thousands (or even millions) of users and that are focused on physical access to facilities and logical (information systems) access to computer systems and networks. In pursuing smart card projects, federal agencies have had to contend with numerous management and technical challenges. However, these challenges may be less imposing in the future because of increased management concerns about securing federal facilities and because technical advances have improved the capabilities and cost effectiveness of smart card systems. The Department of Veterans Affairs (VA) is one of 9 federal agencies currently pursuing large-scale, agencywide smart card initiatives. VA's project, currently in limited deployment, involves using, among other technologies, the One-VA Identification smart card to provide an agencywide capability to authenticate users with certainty and grant them access to information systems essential to accomplishing the agency's business functions. VA estimates that this project will cost about $162 million between 2004 and 2009, and enable it to issue 500,000 smart cards to its employees and contractors. |
General hospitals face competition from a variety of sources, including the approximately 100 specialty hospitals in operation or under development in some markets in 2005. Despite the relatively small number of specialty hospitals, the issue of how general hospitals have responded to the competition from specialty hospitals has been a subject of debate. Federal agencies have broadly addressed how general hospitals’ competitive actions have been influenced by the presence of specialty hospitals; however, to date, the evidence has been largely anecdotal. Specialty hospitals represent a small share of the national health care market and of the competition that general hospitals face from other general hospitals, ASCs, imaging centers, and other types of facilities. In 2005, we identified 66 existing specialty hospitals and an additional 46 that were under development. In contrast, there were an estimated 4,800 general hospitals, 4,100 Medicare certified ASCs, and 2,400 imaging centers. (See fig. 1.) Another way to assess the relative magnitude of specialty hospitals is through their share of Medicare inpatient spending. In prior work pertaining to specialty hospitals of various types and ownership structures, we found that specialty hospitals accounted for a low share of Medicare spending for inpatient services relative even to their low share of the hospital market. Specifically, in April 2003 we reported that specialty hospitals accounted for about 2 percent of existing hospitals but only about 1 percent of total Medicare inpatient spending. The overall competitive effect of specialty hospitals on general hospitals continues to be the subject of debate. Advocates of specialty hospitals contend that the focused mission and dedicated resources of specialty hospitals enable them to offer reduced treatment costs, improved care quality, and enhanced amenities for patients compared with what general hospitals are able to provide. Moreover, some advocates maintain that competition from specialty hospitals can prompt general hospitals to implement efficiency, quality, and amenity improvements, thus favorably affecting the overall health care delivery system. However, critics are concerned that general hospitals may be adversely affected by specialty hospitals. In 2003, using a broader definition of specialty hospitals that included facilities with and without physician owners or investors, we reported that specialty hospitals tended to treat less-severely-ill patients, served proportionately fewer Medicaid patients, and were less likely to have emergency rooms. We also reported that physicians were owners or investors in the majority of specialty hospitals we identified. These findings were consistent with critics’ concerns that specialty hospitals tend to concentrate on the most profitable procedures and serve patients with the fewest complications. According to such critics, specialty hospitals draw financial resources away from general hospitals and leave those hospitals with the responsibility of caring for the sickest patients and fulfilling their broad missions to provide charity care, emergency services, and standby capacity to respond to communitywide disasters. Critics are also concerned that physician ownership of specialty hospitals creates financial incentives that could inappropriately affect physicians’ clinical behavior and their decisions to refer patients to specific facilities. 
To date, there have been only anecdotal reports of how general hospitals have competitively responded to specialty hospitals. Two reports—one jointly issued by the Federal Trade Commission (FTC) and the Department of Justice (DOJ), and another issued by MedPAC—discussed general hospitals’ responses to specialty hospitals. The FTC/DOJ report was based primarily on written submissions and testimony provided by health care experts at the agencies’ 2002 workshops and 2003 hearings. The information contained in MedPAC’s report was gathered through site visits and interviews with representatives of specialty and general hospitals in selected markets where specialty hospitals existed and interviews with others in the health care community. Collectively, the reports identified several actions general hospitals took in response to the entry, or the anticipation of entry, of specialty hospitals into the marketplace, including: improving operating room scheduling, extending service hours, building a single-specialty wing to discourage the establishment of competing facilities, partnering with physicians on their medical staff to open a specialty hospital, signing exclusive contracts with private payers to preclude specialty hospitals or the physicians who invest in them from contracting with those payers, and revoking the admitting privileges of physicians involved with a competing specialty hospital. Nearly all general hospitals responding to our survey reported making operational and clinical service changes to remain competitive in markets they viewed as increasingly competitive; however, there was little evidence to suggest that the absence or presence of specialty hospitals had much of an effect on the number or types of changes general hospitals reported implementing between 2000 and 2005. General hospitals responding to our survey reported facing increasing competition both from other general hospitals and from limited-service facilities—a category that includes specialty hospitals, ambulatory surgical centers, and imaging centers. The general hospitals that responded to our survey reported implementing a variety of operational and clinical service changes. However, we found little evidence associating specific changes made by general hospitals with the presence or absence of a nearby specialty hospital. That is, with few exceptions, general hospitals did not report implementing a substantially different number of changes or different types of changes just because there was a specialty hospital in their market. Nearly all general hospitals that responded to our survey described their market environments as ranging from somewhat competitive to extremely competitive. Only one hospital described its market as not competitive. Urban general hospitals were much more likely than rural general hospitals to describe their market as either very or extremely competitive. (See table 1.) A larger percentage of general hospitals that responded to our survey— both urban and rural—reported increased competition from limited- service facilities relative to those that reported increased competition from other general hospitals. More than 90 percent of urban general hospitals indicated that competition from limited-service facilities had either increased or greatly increased in their markets, while 75 percent of urban general hospitals indicated that competition from other general hospitals had either increased or greatly increased. (See table 2.) 
Similarly, 74 percent of rural general hospitals indicated that competition from limited-service facilities had either increased or greatly increased, while 53 percent of rural general hospitals indicated that competition from other general hospitals had either increased or greatly increased. (See table 3.) Our survey listed 72 potential operational changes that respondents could indicate they had made and 34 potential clinical services that respondents could indicate they had added, expanded, reduced, or eliminated. Across these items, general hospitals reported implementing an average of 30 changes (22 operational changes and 8 clinical service changes) from 2000 through 2005. Overall, general hospitals that responded to our survey reported implementing between 3 and 66 separate changes. Overall, 100 percent of general hospitals we surveyed reported implementing at least 1 operational change. There were 18 specific operational changes that at least half of the general hospitals that responded to our survey reported implementing. (See table 4.) Four of the 6 most commonly reported operational changes involved increasing wages and benefits for nurses and offering more flexible working schedules in an effort to improve nursing staff retention or recruitment. In addition, 4 of the 18 most commonly reported operational changes related to physicians. These changes involved increasing the physicians’ role in hospital governance, increasing physician income guarantees, hiring new physicians, and beginning a hospitalist program. Nearly all general hospitals that responded to our survey reported implementing clinical service changes. Overall, 97 percent of the hospitals added or expanded at least one type of clinical service. The majority of hospitals added or expanded imaging/radiology services (73 percent) and cardiology services (57 percent). Other types of clinical services were added or expanded by a minority of hospitals, such as outpatient surgical services (37 percent) and orthopedic services (31 percent). Nearly one-third of hospitals (33 percent) reduced or eliminated at least one type of clinical service. The most commonly reported clinical services to be reduced or eliminated were inpatient/outpatient psychiatric services (7 percent). Overall, the operational and clinical service changes reported by general hospitals that responded to our survey appeared largely unaffected by the presence or absence of specialty hospitals in their markets. On average, rural general hospitals with a specialty hospital in their regional market made a few more operational changes than rural general hospitals in markets without specialty hospitals, but made a similar number of clinical service changes. More specifically, rural general hospitals in markets with specialty hospitals made an average of 21 operational changes, 7 clinical service additions or expansions, and 1 clinical service reduction or elimination. Rural general hospitals in markets without specialty hospitals made an average of 18 operational changes, 6 clinical service additions or expansions, and no clinical service reductions or eliminations. (See table 5.) Urban general hospitals in regional and local markets with specialty hospitals made similar numbers of operational and clinical service changes as general hospitals in markets without specialty hospitals. 
For most of the 72 potential operational changes and 34 potential clinical service changes listed on our survey, the percentage of general hospitals that reported implementing each change did not systematically vary with the presence or absence of a specialty hospital in the market. For example, 12 percent of urban general hospitals in regional markets with specialty hospitals and 13 percent of urban general hospitals in regional markets without specialty hospitals opened a new hospital wing specializing in one type of medicine between 2000 and 2005. However, for a few of the potential changes listed on our survey, there was a relationship between the percentage of general hospitals that reported implementing the change and the presence of a specialty hospital in the market. For example, there were 6 operational changes and 3 clinical service changes (including clinical services that were added, expanded, reduced, or eliminated) for which the percentage of rural general hospitals implementing the change significantly differed depending on whether or not a specialty hospital existed in the regional market. (See table 6.) The greatest number of differences (11 operational change differences and 5 clinical service change differences) was observed between the group of urban general hospitals in local markets with specialty hospitals and the group of urban general hospitals where there were no specialty hospitals in either the local or regional markets. Rural general hospitals in markets with specialty hospitals were more likely to have reported implementing six operational changes and two clinical service changes relative to rural general hospitals in markets without specialty hospitals. (See table 7.) For only one clinical service—adding or expanding sleep laboratory services—rural general hospitals in markets with specialty hospitals were less likely to have reported implementing a clinical service change. If there was a specialty hospital in its regional market, an urban general hospital was more likely to have reported making three of the seven operational changes that significantly differed between general hospitals in markets with and without specialty hospitals. Urban hospitals in regional markets with specialty hospitals were less likely to have made four operational changes and one clinical service change. (See table 8.) Urban hospitals in local markets with specialty hospitals were more likely to have made six operational changes and three clinical service changes and less likely to have made five operational changes and two clinical service changes relative to urban general hospitals with no specialty hospital in either their local or regional markets. (See table 9.) Overall, the general hospitals that responded to our survey reported making a variety of operational and clinical service changes to better compete in their markets. Some advocates of specialty hospitals have stated that the presence of one or more of these facilities in a market may prompt general hospitals to improve the quality of the care they deliver or increase the efficiency with which they deliver their services. However, our survey results found relatively few differences, in terms of operational and clinical service changes reported, between general hospitals in markets with and without specialty hospitals. 
That is, on average, general hospitals in markets with specialty hospitals did not make a substantially different number of changes or different types of changes relative to general hospitals in markets without specialty hospitals. These results held for both rural and urban general hospitals. Our survey results did show that general hospitals reported facing a competitive market for their services. However, general hospitals face competition from many types of facilities, not just specialty hospitals. Competing facilities, including other general hospitals in the market, ASCs, and imaging centers, far outnumber the relatively few specialty hospitals in existence or under development. The predominance of other types of competitors may help explain the lack of a uniquely competitive response of the general hospitals in our study to the existence of specialty hospitals. We obtained comments from CMS and representatives of AHA—a group representing hospitals, health care systems, networks, and other providers of care—and FAH—a group representing investor-owned and investor-managed hospitals and health systems. Their comments are summarized below. In written comments on a draft of this report, CMS stated that our study, by providing quantitative data on the market effect of specialty hospitals, was extremely helpful and that CMS would use the information as the agency developed its DRA-mandated report on physician investment in specialty hospitals. (CMS’s comments are reprinted in app. IV.) CMS also provided technical comments, which we incorporated where appropriate. AHA and FAH stated that their concerns regarding specialty hospitals were specific to those facilities that have physician owners or investors. Both organizations suggested text changes to emphasize that our report is focused on the effect of these types of specialty hospitals on general hospitals, which we incorporated where appropriate. In addition, representatives of AHA stated that general hospitals may make operational and clinical service changes for a variety of reasons, regardless of the degree of competition in their market. While we recognize that general hospitals may make changes for a variety of reasons, that fact does not detract from our finding that general hospitals largely did not make a different number of changes, or different types of changes, in response to competition from specialty hospitals. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of this report until 30 days after its date. At that time, we will send copies of this report to appropriate congressional committees and other interested parties. We will also make copies available to others upon request. This report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions, please contact me at (202) 512-7101 or steinwalda@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in app. V. This appendix provides information on the key aspects of our analysis of the competitive response of general hospitals to specialty hospitals. First, it describes the sample selection process. Second, it discusses the survey used to collect data from a sample of general hospitals and the process of fielding the survey. Third, it explains the differences between local and regional markets. 
Fourth, it describes the methodology used to analyze survey data. Finally, it addresses issues related to data reliability and limitations. We selected two groups of general hospitals for this analysis—the sample and a comparison sample. The sample consisted of general hospitals in hospital referral regions (HRR)—which we refer to in this report as regional health care markets—with a specialty hospital that opened since the start of 1998. The comparison sample consisted of general hospitals in regional health care markets without any specialty hospitals. In constructing the comparison sample, we also excluded regional health care markets with specialty hospitals that did not have physician owners or investors. Regional markets capable of meeting the criteria for the sample were identified by compiling a current list of specialty hospitals that opened from 1998 through 2005. We excluded markets in states where certificate of need (CON) laws existed, because specialty hospitals are located primarily in non-CON states. We identified 32 unique regional markets containing 53 specialty hospitals that met these criteria. (See table 10.) We selected markets for the comparison sample on the basis of their similarity to the markets used for the sample, except for the presence of a specialty hospital. We excluded markets from the comparison sample if they contained a specialty hospital, regardless of ownership or date of opening. We used data from DAP pertaining to market characteristics to ensure that markets included in the comparison sample were similar to markets in the sample. We excluded markets from the comparison sample if any one of their values for seven market characteristics—overall population, Medicare population, average number of inpatient beds, population to beds ratio, physician specialists to total physicians ratio, average number of surgical discharges, and the Herfindahl-Hirschman Index—fell outside the range of values for markets in the sample. The application of these criteria resulted in a sample that consisted of 78 unique regional markets. The Centers for Medicare & Medicaid Services’ (CMS) 2005 Provider of Services (POS) file was used to identify general hospitals located in the markets selected for the sample and the comparison sample, and these hospitals were subject to several exclusions. General hospitals that were major teaching hospitals or had fewer than five cardiac, orthopedic, or surgical discharges in 2004 were excluded from both samples because the presence of a specialty hospital may not affect these hospitals in the same manner it would affect other types of general hospitals. In addition, we considered an urban general hospital to be in a regional market with a specialty hospital only if it was also less than 90 miles away from a specialty hospital. We considered a rural general hospital to be in a regional market with a specialty hospital only if it was also less than 120 miles away from a specialty hospital. Information on these hospital characteristics was obtained from CMS’s 2005 POS file, 2002/2003 Cost Report file, and 2004 Health Care Information System (HCIS) file, and Census 2000 US Gazetteer files. The sample included 326 general hospitals and the comparison sample included 294 general hospitals. (See table 11.) The survey questionnaire had two sections. (See app. II.) First, it obtained respondents’ perceptions of competition in their health care markets. 
Second, it asked respondents to provide information on the operational and clinical service changes that the respondents’ hospitals had made from 2000 through 2005 to remain competitive in their markets. The questionnaire included 72 potential operational changes and 34 potential clinical service changes. The specific operational and clinical service change questions included in the survey were identified through a review of articles in academic journals, industry reports, periodicals, a joint study by the Federal Trade Commission and the Department of Justice, and studies by CMS and the Medicare Payment Advisory Commission (MedPAC). We tested our survey questionnaire with external experts, including one MedPAC analyst and seven hospital administrators from four general hospitals and one hospital system. In August and September of 2005, survey questionnaires were distributed to 603 of the 620 hospitals we selected—315 general hospitals in the sample and 288 general hospitals in the comparison sample. Sixty-seven percent of general hospitals that received our survey questionnaire responded—401 general hospitals. Seventy percent of the sample and 63 percent of the comparison sample responded to our survey questionnaire. We created a subsample to analyze the competitive response of general hospitals to specialty hospitals that were in close proximity. The subsample consisted of general hospitals in hospital service areas (HSA)—which we refer to in this report as local health care markets—with a specialty hospital that opened from 1998 through 2005. Groups of local health care markets form a regional health care market. (See fig. 2.) On average, general hospitals in local health care markets with a specialty hospital were in closer proximity to a specialty hospital than were general hospitals in regional health care markets with a specialty hospital. Among the 315 general hospitals in the sample, 152 resided in the same local health care market as a specialty hospital. Sixty-four percent of general hospitals in the local health care market subsample responded to our survey. From the survey responses, we determined the percentage of general hospitals that reported making each of the potential operational and clinical changes and then compared those percentages for three paired sets of general hospitals. First, we compared rural general hospitals in regional markets with specialty hospitals to rural general hospitals in regional markets without specialty hospitals. (See fig. 3.) Second, we compared urban general hospitals in regional markets with specialty hospitals to urban general hospitals in regional markets without specialty hospitals. Third, we compared urban general hospitals that had a specialty hospital in their local markets to urban general hospitals that did not have a specialty hospital in either their local or regional markets. The third comparison was conducted to explore the possibility that specialty hospitals are more likely to elicit a competitive response from general hospitals that are closest to them. As part of each comparison, we conducted a statistical test, the Pearson chi-square, to determine whether differences in these percentages between the paired sets of general hospitals were statistically significant. 
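The report does not present the underlying computations, but the type of comparison just described can be illustrated with a minimal sketch. The group sizes below (71 and 79 responding hospitals) match the rural comparison reported later in this appendix; the numbers of hospitals reporting a given change are hypothetical and are used only to show how a Pearson chi-square test of this kind is typically run, here in Python with scipy.

# Minimal illustration (not GAO's analysis code) of a Pearson chi-square test
# comparing the share of hospitals reporting a given change in markets with and
# without a specialty hospital. The 40 and 30 counts are hypothetical.
from scipy.stats import chi2_contingency

reported_with, n_with = 40, 71        # hospitals in markets with a specialty hospital
reported_without, n_without = 30, 79  # hospitals in markets without one

# 2x2 contingency table: rows = market type, columns = reported change / did not
table = [
    [reported_with, n_with - reported_with],
    [reported_without, n_without - reported_without],
]

# correction=False yields the uncorrected Pearson chi-square statistic
chi2, p_value, dof, expected = chi2_contingency(table, correction=False)
print(f"chi-square = {chi2:.2f}, degrees of freedom = {dof}, p = {p_value:.3f}")
# A p-value below the chosen significance level (e.g., 0.05) would indicate a
# statistically significant difference between the two groups of hospitals.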
Among the general hospitals that responded to our survey, the comparison of rural general hospitals in regional health care markets included 71 rural general hospitals in regional markets with specialty hospitals and 79 rural general hospitals in regional markets without specialty hospitals. The comparison of urban general hospitals in regional health care markets included 148 urban general hospitals in regional markets with specialty hospitals and 103 urban general hospitals in regional markets without specialty hospitals. The comparison of urban general hospitals in local health care markets with urban general hospitals in regional markets included 90 urban general hospitals in markets with specialty hospitals and 103 urban general hospitals in regional markets without specialty hospitals. Because only 8 rural general hospitals in local markets responded to the survey, we did not conduct a comparison of rural general hospitals in local markets to rural general hospitals in regional markets. We used the survey data we collected for this work, three CMS datasets, and four datasets from DAP to produce the results of this report. In each case, we determined that the data were sufficiently reliable to address the reporting objective. Overall, 67 percent of general hospitals we contacted responded to our 2005 survey, and few respondents failed to complete the questionnaire in full. We identified incomplete and inconsistent survey responses within individual surveys and placed follow-up calls to respondents to complete or verify their responses. We conducted an analysis to identify outliers—hospitals that reported extremely high numbers of service changes. We manually verified 10 percent of all survey responses contained in our aggregated electronic data files, in order to ensure that survey response data were accurately transferred to electronic files for analytical purposes. We determined the three CMS datasets—2002/2003 Cost Report File, first quarter 2005 POS file, and the 2004 HCIS File—and four DAP datasets—2003 Zip Code Crosswalk File, 1999 Chapter 2 Table File, 2001 selected surgical discharge rates by HRR, and 1999 physician workforce data—were sufficiently reliable for our purposes. The CMS datasets were used to gather descriptive information for hospitals in our sample, to determine general hospital teaching status, and to tie discharge data to individual hospitals. The DAP datasets were used to link the general hospitals in our sample to their corresponding market characteristics. These CMS and DAP files are widely used for similar research purposes. We identified two potential limitations of our analysis. First, because independent information to verify survey responses was not available, all analyses in this report are based on data that are self-reported and potentially limited by the respondent’s ability to report the operational or clinical service changes implemented from 2000 through 2005 for competitive reasons. Second, it is possible that, in response to the threat of future competition, general hospitals made changes to their facilities prior to 2000, or that changes made by some general hospitals in anticipation of a new specialty hospital successfully deterred that hospital’s entry; our survey would not have captured either situation. Our survey listed 72 potential operational changes and 34 potential clinical service changes that a respondent hospital could have indicated it had implemented from 2000 through 2005. 
Within the survey, the potential operational changes were organized into nine separate subject-oriented categories. For each of the clinical service changes, respondents were asked to indicate whether they had added, expanded, eliminated, or decreased the service. For analytical purposes, we grouped together “added” and “expanded” clinical service change responses. Also, we grouped together “eliminated” and “decreased” clinical service change responses. When stratified by urban and rural location, there were few differences between general hospitals in markets with and without specialty hospitals, in terms of the average number of changes they reported implementing in each category of operational and clinical service change from 2000 through 2005. (See table 12.) Other contributors to this report include James Cosgrove, Assistant Director; Jennie Apter; Zachary Gaumer; Gregory Giusto; Kevin Milne; and Dae Park. Specialty Hospitals: Information on Potential New Facilities. GAO-05-647R. Washington, D.C.: May 19, 2005. Specialty Hospitals: Geographic Location, Services Provided, and Financial Performance. GAO-04-167. Washington, D.C.: October 22, 2003. Specialty Hospitals: Information on National Market Share, Physician Ownership, and Patients Served. GAO-03-683R. Washington, D.C.: April 18, 2003. | There has been much debate about specialty hospitals--short-term acute care hospitals with physician owners or investors that primarily treat patients who have specific medical conditions or need surgical procedures--and the competitive effects they may have on general hospitals. Advocates of specialty hospitals contend that competition from these physician-owned facilities can prompt general hospitals to implement efficiency, quality, and amenity improvements, thus favorably affecting the overall health care delivery system. Critics of specialty hospitals are concerned that general hospitals may respond to such competition by making changes that do not necessarily increase efficiency or benefit patients or communities, for example, by adding services already available in the community. The appropriateness of physicians' financial interests in specialty hospitals has also been questioned. GAO was asked to provide information on the competitive response of general hospitals to specialty hospitals. GAO surveyed approximately 600 general hospitals in markets with and without specialty hospitals to provide information on the extent to which these two groups of general hospitals reported implementing operational and clinical service changes to remain competitive. GAO received responses from 401 general hospitals. Nearly all general hospitals responding to GAO's survey reported making operational and clinical service changes to remain competitive in what they viewed as increasingly competitive health care markets; however, there was little evidence to suggest that general hospitals made substantially more or fewer changes or different types of changes if some of their competition came from a specialty hospital. While the majority of survey respondents indicated that competition from other general hospitals had increased, a larger proportion of respondents--91 percent of urban general hospitals and 74 percent of rural general hospitals--reported increases in competition from limited-service facilities, a category that includes approximately 100 specialty hospitals across the nation and thousands of ambulatory surgical centers and imaging centers. 
To enhance their ability to compete, general hospitals reported making an average of 22 operational changes, such as introducing a formal process for evaluating efforts to improve quality and reduce costs, and 8 clinical service changes, such as adding or expanding cardiology services, from 2000 through 2005. Although specialty hospital advocates have hypothesized that the entrance of a specialty hospital into a market encourages the area's existing general hospitals to adopt changes that make them more efficient and better able to compete, the survey responses largely did not support this view. There were no substantial differences in the average number of operational and clinical service changes made by general hospitals in markets with and without specialty hospitals and, for the vast majority of the potential changes included on GAO's survey, there was no statistical difference between the two groups of hospitals in terms of the specific changes they reported implementing. GAO received comments on a draft of this report from the Centers for Medicare & Medicaid Services (CMS). In its comments, CMS stated that GAO's study, by providing quantitative data on the market effect of specialty hospitals, was extremely helpful. |
VHA provides a range of treatments and services to improve the mental health of veterans, including teaching coping skills and offering tailored programs to treat specific problems, such as depression, PTSD, and substance abuse disorders, and to promote recovery. When needed mental health care is not available through a local VHA provider, VHA has avenues by which veterans may obtain care from non-VA providers in the community. VHA policy states that veterans are entitled to timely access to mental health care. There are a number of ways a veteran may seek access to mental health care. (See fig. 1.) Upon initial referral or request for mental health care, veterans new to mental health care (those not seen by a mental health provider within the past 24 months through VHA) are to receive initial assessments from either referring providers (such as primary care physicians) or mental health providers; these assessments identify those who need urgent or immediate access to mental health care. Following the initial assessment, a veteran is to receive timely access to a comprehensive mental health evaluation that includes a diagnosis and a plan for treatment. The comprehensive evaluation—referred to as a full mental health evaluation—serves as a veteran’s main entry point to mental health care. Additionally, established veterans (those who already have received mental health care within the past 24 months through VHA) are to receive timely access to follow-up care. Follow-up care may be provided by a single provider or, for those veterans who need a range of services, by multiple providers. For example, a veteran may receive ongoing care from both a psychiatrist and a psychologist to manage their symptoms. Veterans with specific diagnoses, such as PTSD, are to be considered for evidence-based therapies, such as cognitive processing therapy or prolonged exposure therapy, as clinically appropriate. Furthermore, veterans discharged from inpatient mental health stays are to receive timely access to follow-up outpatient mental health care. VHA’s scheduling policy establishes processes and procedures for scheduling medical appointments, including mental health appointments. This policy requires schedulers to obtain and correctly record the preferred date—the date on which the veteran wants to be seen—in VHA’s Veterans Health Information Systems and Technology Architecture (VistA). VistA’s scheduling component was implemented in 1985, and VA is considering several options for updating or replacing it. In 1995, VHA established a policy of scheduling specialty care appointments, including mental health appointments, within 30 days of the date the veteran would like to be seen. In fiscal year 2011, based on reported improved performance, VHA shortened its wait-time policy to 14 days for new veterans. In fiscal year 2015, in response to the Choice Act, VHA set a new policy that new veterans should be seen within 30 days of the date they want to be seen. Currently, VHA also has a policy of providing outpatient mental health care to a veteran discharged from an inpatient mental health stay within 7 days of discharge and of providing follow-up care to an established veteran within 30 days of the clinically indicated date, commonly referred to as the return-to-clinic date. 
To facilitate accountability for achieving its wait-time policies, VHA includes wait-time and other performance data, such as the “missed opportunity” rate—the percentage of scheduled appointments that were not used because veterans did not show up for their appointments—in several internal and external reports. VHA also makes publicly available on its website its patient access reports (including monthly average wait times for completed and pending mental health appointments for each VAMC), and its Strategic Analytics for Improvement and Learning (SAIL) reports, which assess VAMC performance across 25 quality measures, including death and medical complication rates, customer satisfaction, and access (based on wait-time data). To meet the needs of veterans seeking mental health care, VHA has sought to increase its mental health staff, including the number of psychiatrists, psychologists, social workers, peer specialists, and other mental health professionals. In 2012, VHA began a two-part hiring initiative: (1) VHA’s recruitment effort focused on hiring 1,600 new mental health professionals, 300 new non-clinical support staff, and filling existing vacancies starting in June 2012; and (2) Executive Order 13625, issued in August 2012, authorized the hiring of 800 peer specialist positions by December 31, 2013, along with reiterating VHA’s goal of hiring 1,600 new mental health professionals by June 30, 2013. Generally, eligible veterans may utilize the Non-VA Medical Care Program when a VAMC is unable to provide certain specialty care services, or when the veteran would have to travel long distances to obtain care at a VAMC. Non-VA providers generally treat veterans in non-VA facilities, such as physicians’ offices or hospitals in the community, and are commonly paid by VHA using a fee-for-service arrangement. There are several ways veterans can obtain care from non-VA providers. For example, the August 2012 Executive Order required VHA to establish partnerships with community-based providers, such as community mental health clinics (CMHCs), under a pilot program, to help meet veterans’ mental health needs in a timely manner. The goal for the pilot program was to decrease wait times and increase the geographic reach of VHA mental health services. The Executive Order required VHA to establish at least 15 pilot sites by February 27, 2013. In addition, in 2013, VA established the Patient Centered Community Care (PC3) program to deliver care to veterans when local VAMCs and CBOCs cannot provide the care due to demand exceeding capacity, geographic inaccessibility, or other factors. More recently, the Choice Act authorized non-VA care, including mental health care, for veterans with certain access challenges. Under this authority, VHA created the Veterans Choice Program (VCP) with the goal of meeting demand for health care in the short term. Beginning in November 2014, for example, certain veterans were able to receive non-VA care if the next available medical appointment with a VA provider was more than 30 days from their preferred date or if they lived more than 40 miles from the nearest VA facility. Recently passed legislation requires VHA to submit a plan to Congress by November 1, 2015, for consolidating its Non-VA Medical Care programs under a single program. Most veterans included in our review received full mental health evaluations, which provide new veterans with an entry point to access mental health care, within an average of 4 days of their preferred dates. 
At the five VAMCs we visited, the average time in which a veteran received this full evaluation ranged from 0 to 9 days from the preferred date. (See table 1.) VHA has two policies that conflict in their definitions of timely access to a full mental health evaluation for a veteran new to VHA mental health care: (1) a 14-day policy established by VHA’s Uniform Handbook for Mental Health Services, and (2) a 30-day policy set by VHA in response to the Choice Act. To date, VHA has not provided guidance on which policy should be followed, which is inconsistent with federal internal control standards that call for management to clearly document, through management directives or administrative policies, significant events or activities, such as ensuring timely access to mental health care, to help ensure management directives are carried out properly. VHA officials told us they are aware of the discrepancy and that there is an internal group working to revise the access policies for mental health care, as well as the Uniform Handbook, to ensure consistency, but the officials could not provide a timeline for completion. When we assessed the veteran records included in our sample against these two policies, we found that, across all five VAMCs, 86 of the 100 veterans included in our review received a full mental health evaluation within 30 days of the preferred date and 81 veterans received this evaluation within 14 days. As a result of the conflicting policies, a number of VHA officials, including VISN and VAMC officials, told us they do not know which policy they are currently expected to meet, which may make it difficult for them to ensure timely access in light of increasing demand for mental health care. By not clarifying which access policy currently applies to mental health care, VHA is limited in its ability to effectively manage timely access to mental health care. Although the averages for the time between veterans’ preferred dates and their full mental health evaluations were generally within several days, they may not reflect overall wait times. We found that because VHA uses veterans’ preferred appointment date—not the date veterans initially request or are referred for mental health care—as the basis for its wait-time calculations, these calculations may only reflect a portion of veterans’ overall wait time. This occurs because veterans generally are not asked for their preferred dates until some time after they request or are referred for mental health care. (See fig. 2.) On average, our review of 100 new veteran records found that a veteran’s preferred date was 26 days after their initial request or referral for mental health care, though this varied by VAMC. (See table 2.) Delays between initial request or referral date and a veteran’s preferred date may be due to a veteran not wanting to start treatment immediately. However, based on our review of records, we also found that some veterans were delayed in receiving care because a facility did not adequately handle a referral to mental health care. For example, one veteran received a referral to psychology in November 2013 when the referring provider noted that the veteran should be evaluated for PTSD. However, no contact was made with the veteran until the veteran called back in July 2014 and asked for an evaluation. As a result, the veteran’s preferred date was 279 days after the initial referral. 
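The gap between these two ways of measuring waits can be shown with a brief sketch. The dates below are hypothetical, loosely patterned on the example above, and the calculation is only an illustration of the arithmetic involved, not VHA's scheduling-system logic.

# Illustrative sketch only, with hypothetical dates: a wait time measured from
# the veteran's preferred date can understate the wait measured from the
# initial request or referral for mental health care.
from datetime import date

initial_referral = date(2013, 11, 4)   # provider refers the veteran to mental health care
preferred_date   = date(2014, 8, 10)   # date the veteran says they want to be seen
appointment_date = date(2014, 8, 14)   # date of the full mental health evaluation

wait_from_preferred = (appointment_date - preferred_date).days   # basis of VHA's calculation
wait_from_referral  = (appointment_date - initial_referral).days # wait from initial referral

print(f"Wait measured from preferred date: {wait_from_preferred} days")
print(f"Wait measured from initial referral: {wait_from_referral} days")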
Another referral led to a veteran waiting 174 days between their initial referral for mental health care and their preferred date. The veteran’s primary care provider was supposed to have placed a referral to psychology in March 2014, but we could not find evidence of the referral ever being placed. Despite not placing a proper referral, the veteran’s primary care provider alerted a VHA psychologist who reached out to the patient in March 2014, by phone, but did not leave a message. No VAMC mental health provider reached out again until September 2014, after the veteran’s primary care provider made a referral (this time documented correctly). The veteran was then able to schedule a full mental health evaluation approximately 1 week later. One veteran called the suicide prevention hotline in August 2014 expressing suicidal thoughts and then hung up. The hotline staff contacted the VAMC and the local authorities who found the veteran intoxicated and took him to a local jail to stabilize. The VAMC contacted the veteran a couple days later to discuss mental health treatment (which the veteran declined at the time) but did not follow up again with the veteran until February 2015, when the veteran contacted a VHA social worker to say he was suffering from severe depression and continued to have suicidal thoughts. As a result, the veteran’s preferred date was 181 days after the initial contact for mental health care. Veterans who receive a full mental health evaluation may experience additional delays in receiving treatment. Our review found that veterans do not always begin treatment specific to their mental health condition (such as a prescription for medication or a course of psychotherapy) at the appointment in which they receive their first full mental health evaluation. We found this for 50 of the 100 new veterans whose records we reviewed. One veteran called into a VAMC expressing suicidal thoughts and was seen the same day for a full mental health evaluation, but no appointment for treatment was made at that time as the veteran was starting a new job. The veteran did not receive treatment for more than 7 months and the mental health providers at the facility did not document any attempts to contact the veteran during that time. Two veterans with PTSD, both initially referred in March 2014, received their full mental health evaluations in April 2014, within 30 days of their initial requests or referrals. However, neither received treatment until August 2014, nearly 4 months after their full mental health evaluations. Another veteran was initially referred to mental health care in September 2014 after a screening for veterans returning from the recent conflicts in Iraq and Afghanistan. While the veteran was seen for a full mental health evaluation within 30 days and subsequently seen in a PTSD orientation group, the veteran did not receive treatment until December 2014, nearly 80 days from the time of the full mental health evaluation. The veteran’s file provided no indication of the reason for the treatment delay. Based on our records review, we found that whether veterans received treatment for their mental health conditions during their full evaluation appointments was generally dependent on the type of mental health professional who conducted the evaluation. 
Generally speaking, veterans who were evaluated by a psychiatrist received some type of mental health treatment during their full evaluation appointments, while veterans who were evaluated by a psychologist or substance abuse professional often did not receive treatment specific to their mental health condition at their full evaluation appointments. We found that the timeliness with which veterans received this first treatment varied across the VAMCs included in our review. (See table 3.) The differences between the date of the initial request or referral to care and the preferred date are consistent with previous findings by the VA OIG, which found in 2012 that VHA’s starting point for its wait-time calculation was not a meaningful measurement of a veteran’s waiting time to receive an evaluation. VHA officials agreed that the current calculation does not capture the time between when a veteran is referred to or requests mental health care and when the veteran is contacted to schedule a mental health appointment or to when a particular course of treatment starts, but also said there is currently no consensus on the appropriate standard or measure for when the calculation should begin or end. VHA’s Uniform Handbook also defines policies for timely access to other types of mental health appointments, such as follow-up appointments and outpatient appointments following discharge from an inpatient mental health stay. In particular, established veterans should receive follow-up appointments within 30 days of their return-to-clinic dates; veterans recently discharged from inpatient mental health stays should receive outpatient appointments (either by phone or in-person) within 7 days of discharge; and veterans with PTSD should receive evidence-based therapies for PTSD. Our review found that veterans generally received follow-up appointments and post-discharge appointments in accordance with the Uniform Handbook. Specifically, for 126 of the 134 veteran appointment records we reviewed that included a provider-documented return-to-clinic date, the veteran received a follow-up appointment within 30 days of that return-to-clinic date. In addition, our review of 20 veterans discharged from inpatient mental health units at the four VAMCs we visited that provided inpatient mental health care found that these veterans received an outpatient follow-up within 2 days of discharge, on average, and all but one received this follow-up within 7 days of discharge. However, our review found that not all veterans received evidence-based PTSD therapy. Out of the 51 veterans with a diagnosis of PTSD whose records we reviewed, 7 entered into PTSD evidence-based treatment and 44 did not. Mental health providers we interviewed said veterans with PTSD do not always receive these treatments because (a) providers do not feel the veteran is appropriate for this intensive treatment, (b) providers have limited availability in their schedules for this level of intensive treatment and, as a result, only select the candidates most likely to succeed, or (c) the veteran declines. VHA monitors access to mental health care through on-site reviews of clinic operations and by sharing data on mental health access internally and externally, but the lack of clear policies and reliable data on veteran wait times and missed opportunities precludes effective oversight. 
Among other things, VHA’s on-site reviews are to determine compliance with the policies for mental health care, as defined in the Uniform Handbook; reduce variability in access and quality of mental health care nationwide; and identify best practices and areas for growth at each VAMC. We reviewed the findings of the most recent VHA on-site review for each of the five VAMCs we visited. VHA’s recommendations included having one VAMC clarify local policies for referring veterans to mental health care and establish clear criteria on when veterans who have completed treatment should be discharged to primary care or other providers for ongoing monitoring and maintenance. VHA also recommended that another VAMC revise local policies for addressing veteran “no-shows”—when a veteran did not attend their appointment and did not cancel in advance—to be consistent with VHA requirements. Following the visits, VHA requires VAMCs to submit corrective action plans that detail how the recommendations will be implemented. For example, one VAMC’s action plan stated that it revised the local no-show policy and trained relevant staff. VHA officials said they also share data on mental health access, such as appointment wait times and VAMCs’ missed opportunities rate, with VISN and VAMC staff through internal VHA websites and with the public through its SAIL and patient access reports. However, our previous work, as well as that of the VA OIG, has shown that VHA wait-time data is unreliable, prone to errors, and subject to interpretation. Among other things, we found in December 2012 that VAMCs were not implementing VHA’s scheduling policies in a consistent manner, which led to unreliable wait-time data. While VHA has taken steps to increase training in proper scheduling practices, we found three additional reasons contributing to the unreliability of VHA’s mental health care wait-time data, which is inconsistent with federal internal control standards that state management should use and communicate, both internally and externally, quality information to achieve its objectives. These include: (1) the wait-time data-entry process has the potential for errors, (2) data may not be comparable over time, and (3) data may not be comparable across VAMCs. In addition, we found that one VAMC was using a document outside of VHA’s scheduling system to track veterans referred for a certain type of mental health clinic. The wait-time data entry process has the potential for errors. Wait-time data relies on information entered at the time an appointment is scheduled, a process which we found has the potential for errors, including scheduler errors (e.g., entering an incorrect preferred date) that are compounded by high turnover in these positions, limitations with the scheduling system (e.g., the ability to view only appointments that follow a veteran’s entered preferred date, not those that fall on the day(s) leading up to that date), and variation in how the preferred date is determined (e.g., providers basing their preferred return-to-clinic date on appointment availability instead of on the veteran’s preferred date). For example, we found one case where VHA’s system-generated wait-time data was calculated incorrectly, based on our review of the records. We identified that the scheduler did not properly search for the next available appointment and, as a result, the VistA scheduling system incorrectly used the date the appointment was created (as opposed to the veteran’s preferred date) to calculate the VHA wait time. 
VAMC officials said this is an easy mistake to make because of the limitations of VHA’s scheduling system. Officials from all of the VAMCs we visited told us that there is the potential for scheduling errors. As a result, they may not always be able to rely on VHA’s aggregated wait-time data. Instead, some officials said they need to view individual-level information to monitor timely access, which can be time-consuming and burdensome. Specifically, officials from one VAMC said that each day they review all scheduled appointments to identify and correct scheduling errors that may affect the accuracy of their data. We found in December 2012 that VHA’s scheduling policy was unclear and subject to interpretation, and that this led to difficulty achieving consistent and correct application of the policy by schedulers. As a result, we recommended that VHA update its scheduling policy or identify wait-time measures that are not subject to interpretation or prone to scheduling error. Until VHA clarifies its scheduling policy so that it is not as subject to interpretation or error, or develops new wait-time measures, as we recommended, it is likely to continue to have data errors and may be missing an opportunity to improve the reliability, and thus usefulness, of its data. Data may not be comparable over time. VHA has changed the definitions used to calculate various mental health wait-time measures; thus, these measures may not be comparable over time. For example, VHA officials said that in fiscal year 2014 the definition of a ‘new mental health patient’ changed from an individual who had not been seen within a specific mental health clinic, such as the general mental health clinic or PTSD clinic, to an individual who had not been seen in any mental health clinic. According to VHA officials, this change was made because some veterans were being incorrectly flagged as new in the scheduling system if their appointments with their regular providers were scheduled in a different clinic (e.g., general mental health versus PTSD) than normal. A number of VHA officials, including VAMC and VISN officials, told us they were not sure which definitions were in effect at the time of our interviews or gave conflicting answers about which definitions were currently being used, which is contrary to federal internal control standards that call for management to communicate relevant and reliable information in a timely manner. VHA has not clearly communicated the definitions used or changes to the definitions, which limits the reliability and usefulness of the wait-time data and VHA’s ability to use these measures to determine progress in meeting stated objectives for veterans’ wait times. Until VHA clarifies how different access measures are defined and calculated, and communicates any changes over time, local and VISN officials are likely to face difficulties accurately assessing wait times and identifying needed improvements. Data may not be comparable between VAMCs. In particular, data may not be comparable when VAMCs use open-access appointments (i.e., blocks of time during which veterans may see providers without a scheduled appointment). Specifically, two VAMCs we visited often referred veterans to open-access appointments. In these cases, because appointments were not scheduled until veterans came to the VAMC to be seen, the preferred and appointment dates were the same and wait times were calculated as 0 days, regardless of when veterans requested or were referred for mental health care. 
Additionally, in these cases, because appointments were not scheduled prior to veterans showing up for care, the VAMC’s missed-opportunity rate may have been lower and may not be comparable to that of other VAMCs that did not use open-access appointments. We found that one of the VAMCs we visited had a list of veterans referred to open-access appointments rather than being given specific appointments. This list was maintained outside of VHA’s scheduling system in a spreadsheet that was not systematically updated. Officials stated the spreadsheet was not used for clinical decision-making. However, the manual maintenance of the list raises concerns about the potential to lose track of veterans who may have needed mental health services more urgently. This VAMC’s own documentation stated that there was no system in place to alert providers if a patient did not arrive for an open-access appointment, limiting officials’ ability to follow up in accordance with VHA’s no-show policy, which requires contacting veterans at least three times if they miss an appointment without canceling. To mitigate the risk of a lack of follow-up for these veterans, VAMC officials told us that prior to being placed on this open-access appointment list, veterans receive a telephone screening by a mental health nurse who determines their risk level; officials said this screening also gives veterans themselves an opportunity to determine if and when they should be seen. Of the 644 veterans who were placed on the referral list for open-access appointments at this VAMC in fiscal year 2014 and through February 2015, close to half (278) were reported as not having shown, and were generally either mailed a letter reminding them about the open-access clinic or had no action recorded. We randomly selected 15 of these veterans’ medical records for review, and found inconsistencies with the VAMC’s application of VHA’s no-show policy for veterans who did not attend an appointment. Just over half (8 veterans) did not receive mental health care through the open-access clinic or through an individual appointment. Of these 8 veterans, only 1 was contacted three or more times to remind the veteran of the need to be seen—in accordance with VHA’s no-show policy. The other 7 veterans were not adequately contacted, receiving only phone calls, a single letter, or no reminders at all. One of the veterans who did not receive any type of reminder was brought to the emergency department by local police 1 month later after stating that he felt suicidal. The veteran was then admitted for inpatient mental health care. Another veteran was referred to an open-access appointment in January 2015 but did not attend one and was not in contact with VHA again until the veteran called in May 2015. VHA does not have guidance that clarifies how open-access appointments should be used, how such appointments should be scheduled, or how veterans referred for these types of appointments should be tracked. This is inconsistent with federal internal control standards that call for management to clearly document policies for significant activities to help ensure management’s directives are carried out properly. As a result, officials at the VAMCs who used open-access appointments said they were unclear about how they could be used, how they should be entered into VHA’s scheduling system, and whether local tracking mechanisms were compliant with VHA scheduling policies. 
Officials from one of these VAMCs also said that while they referred some veterans to open-access appointments, they also began giving these veterans scheduled appointments after VHA officials told them that not providing scheduled appointments might not comply with VHA's scheduling policy. In addition, VHA officials said they were not aware of uniform guidance about open-access appointments, and that this lack of guidance could explain why different VAMCs use different approaches. Without guidance on how appointment scheduling for open-access clinics is to be managed, VAMCs can continue to implement these appointments inconsistently and place veterans on lists outside of VHA's scheduling system, potentially leading to serious negative health outcomes for veterans who need mental health care.
VHA increased mental health staff at its facilities nationwide through a two-part hiring initiative: (1) VHA's recruitment effort focused on hiring 1,600 new mental health professionals, 300 new non-clinical support staff (such as scheduling clerks), and filling existing vacancies starting in June 2012; and (2) Executive Order 13625, issued in August 2012, which authorized the hiring of 800 peer specialist positions by December 31, 2013, along with reiterating VHA's goal of hiring 1,600 new mental health professionals by June 30, 2013. As a result of this initiative, which included both inpatient and outpatient mental health positions, VHA hired about 5,300 new clinical and non-clinical mental health staff. In particular, VHA hired: 1,667 new mental health staff (as of June 30, 2013); 304 non-clinical support staff (as of June 30, 2013); 2,357 staff to fill existing mental health vacancies and any vacancies that opened during the initiative (as of June 30, 2013); and 932 peer specialists (as of December 31, 2013). VHA hired various types of mental health staff to fill positions under the agency's hiring initiative. (See table 4.) Many hires were for social workers and psychologists, positions that officials at the VAMCs we visited reported filling as part of the initiative. Officials at the five VAMCs we visited reported local improvements in access to mental health services due to the additional hiring. For example, officials at one VAMC reported being able to offer more evidence-based therapies as a result of the additional hiring. Officials at this VAMC, as well as officials at a second VAMC and two CBOCs, reported being able to provide mental health care at new locations where they were unable to do so prior to the hiring initiative. For example, prior to the hiring initiative, officials at one VAMC said one of their CBOCs had no capacity to provide mental health therapy services. As a result of the hiring, VAMC officials were able to add a psychologist at that CBOC who now provides mental health therapy and testing services for veterans who visit that location. Further, officials at four of the five VAMCs we visited cited the benefits of having the additional peer specialist hires to assist mental health professionals in performing a variety of therapeutic and supportive tasks with fellow veterans. Peer specialists at the facilities we visited said they educate veterans on available mental health services, provide peer counseling, engage veterans who are resistant to discussing mental health issues, model effective coping techniques, and co-facilitate therapy groups.
While VAMC officials cited the benefits of the peer specialists, they also said they initially did not receive clear guidance from VHA on their intended role and thus were unsure how to incorporate the position into the provision of mental health care. Consequently, officials said it took more time to take full advantage of these newly hired positions. Although VHA considered its hiring initiative a success, officials at the five VAMCs we visited reported a number of challenges in hiring and placing mental health providers, including the following:
Pay disparity with the private sector. Officials at all the VAMCs we visited said that VHA salaries for mental health professionals were not competitive with private sector salaries. For example, officials at one VAMC said they experienced difficulties in recruiting mental health staff, such as psychiatrists, and lost prospective hires to the private sector.
Competition among VAMCs. Officials at three of the five VAMCs we visited also stated that, because every VAMC across the country was trying to fill mental health staff positions at the same time during the hiring initiative, competition among the different VAMCs was high. For example, officials at one VAMC said they made offers to candidates who then used those offers as leverage to secure higher offers at other VAMCs.
Lengthy hiring process. Even when candidates were available to fill positions, officials at four of the five VAMCs we visited stated that VHA's lengthy hiring process—which could take anywhere from 3 months to more than 1 year—was a challenge, possibly resulting in losing candidates who took positions elsewhere during that time. Officials at one VAMC attributed the delays to a lack of human resources staff to complete the administrative side of the hiring process. Despite the VAMC's staff growing significantly as a result of VHA's hiring initiatives, officials said the VAMC did not hire any additional human resources staff, which increased the workload of existing staff and contributed to hiring delays.
Lack of space. Once the hiring process was completed, officials at four of the five VAMCs and all five CBOCs we visited reported difficulties getting mental health hires in place to provide care due to a lack of sufficient space. All of the VAMCs we visited had either recently completed or were in the process of undergoing expansions of mental health space in their VAMC or CBOC buildings. Officials at one of the CBOCs we visited said that although they moved into their current facility in July 2014, by April 2015 they were already struggling with space constraints.
Lack of support staff. Four VAMCs we visited reported that a lack of non-clinical support staff resulted in providers taking on some of the administrative burden, which reduced their clinical availability. For example, officials at one VAMC said that while the recent hiring initiatives added staff to improve access, without a corresponding initiative for hiring support staff, providers are now also scheduling patient appointments, addressing office equipment issues, and handling phone calls about administrative issues, in addition to their clinical duties.
Nationwide shortage of mental health professionals. Officials at three VAMCs we visited reported that the nationwide shortage in mental health professionals also presented a hiring challenge.
According to the Department of Health & Human Services' (HHS) Substance Abuse and Mental Health Services Administration, the nation faces a current shortage in the mental health and addiction services workforce, and that shortage is expected to continue. As of July 2015, there were about 4,000 areas designated as having a shortage of mental health professionals, which HHS's Health Resources and Services Administration projected would require almost 2,700 additional mental health providers to fill the need in these underserved areas. Additional staff likely will be needed to meet VHA's continuing demand for mental health care. In an April 2015 report, VHA projected that a roughly 12 percent increase in mental health staff would be needed to maintain the current veteran staffing ratios for fiscal years 2014-2017. As of March 2015, VHA's mental health staff vacancy rate (14 percent) was similar to VHA's overall staff vacancy rate (16 percent), even though the vacancy rates were calculated differently. Mental health staff vacancy rates varied widely among VAMCs we visited, from 9 to 28 percent. Four of the five VAMCs we visited had mental health staff vacancy rates that were higher than the national average. (See table 5.) To address some of the mental health hiring challenges, VAMCs reported using various recruitment and retention tools, including hiring and retention bonuses, student debt repayment, hiring telehealth providers at VHA facilities outside of the region (e.g., a provider located in another state), and using internships and academic affiliations to find potential recruits. For example, officials at one VAMC reported using relocation and recruitment bonuses of up to 15 percent of the base salary (usually around $5,000) to recruit mental health clinicians. In November 2014, VHA raised the annual salary ranges for all physicians system-wide, including psychiatrists, to enhance the agency's recruiting, development, and retention abilities when compared with the private sector. Officials at four of the five VAMCs we visited stated that they were still unable to meet overall demand for mental health care despite VHA's hiring initiative. In addition, an official from a VHA office tracking mental health staffing and access data had not observed any systemic reduction in wait times or staff-to-patient ratios nationally, in part because of simultaneous increased demand for mental health care. Nationally, VHA outpatient mental health staffing totals increased from 11,138 full-time equivalents in fiscal year 2010 to 13,795 in fiscal year 2014, a 24 percent increase. Over the same time period, the number of veterans receiving outpatient mental health care increased from 1,259,300 to 1,533,600, a 22 percent increase. The 22 percent increase in veterans receiving outpatient mental health care has outpaced general growth in the number of veterans using VHA services overall. During the same time period, from fiscal year 2010 through fiscal year 2014, the total number of veterans who used any VHA services increased 9 percent, from 5,441,059 to 5,955,725. VHA attributed the increased demand for mental health care to the influx of veterans returning from the recent conflicts in Iraq and Afghanistan, increased proactive screening efforts, and VHA's increased capacity to provide mental health care.
Officials at the five VAMCs we visited described strategies they used to manage demand for mental health care in light of staffing challenges: Changing the type of mental health care offered.
To maintain access given the increased demand, VAMC officials told us they increased the use of telehealth and group therapy (rather than individual therapy) and lengthened the time between therapy appointments. Officials at three of the five VAMCs we visited stated they have increased the use of telehealth to meet demand for mental health services that exceeds their capacity, particularly in rural areas where it is more difficult to hire staff. For example, officials at one VAMC stated that a psychiatrist position has remained open for 2 years at a rural CBOC that has experienced significant increased demand for mental health care. Because of the difficulty in getting that position filled, VAMC officials hired a psychiatrist located in another state to provide mental health care via telehealth to veterans visiting that CBOC. Further, officials at four VAMCs we visited said they refer veterans to group therapy due to shortages in the availability of individual therapy appointments. Finally, officials at three VAMCs and two CBOCs we visited reported shortening appointments or spacing out follow-up individual therapy appointments at longer intervals than providers preferred. For example, officials at one VAMC we visited said that when they are short staffed, they use shorter appointment times than they would prefer (e.g., 30 minutes) or extend the time between follow-up appointments, which meant they did not see patients as often as the providers would prefer.
Countering space and staffing constraints. VAMC officials reported using several strategies to address space shortages, such as office sharing, increased teleworking, altering provider schedules, and converting closets into small offices. Officials at one VAMC told us they had designated one provider just for new patient intake appointments. This provider's schedule was made up of 90-minute time slots devoted to assessing the acuity of new patients and referring them to the appropriate source of care. Officials at another VAMC we visited told us they made use of limited office space for individual therapy appointments by having their providers record patient notes in common areas.
Referring veterans to other VA locations when a preferred CBOC is not available. Officials at two VAMCs we visited said that when veterans are unable to receive timely care at their preferred CBOC location, they refer those veterans to the VAMC or another CBOC for care until space becomes available at their preferred location. These two VAMCs had a total of 175 veterans on their transfer lists at the time of our visits. We reviewed 30 of these 175 veterans' records and found 26 veterans were waiting for care at a preferred location and 4 veterans were placed on this list in error. Of the 26 veterans who were waiting for care at a preferred CBOC, 17 were receiving care at other VAMCs or CBOCs until capacity became available at their preferred CBOC and 9 were not. Of the 9 veterans not receiving care at another location, 7 veterans' records clearly documented that they refused care at an alternative location and 2 veterans' records did not clearly document whether they refused alternate care. According to officials at one CBOC, VAMC social workers followed up every month to assess the current mental health status of veterans who opted not to receive care at the VAMC and instead wait for an opening at their preferred CBOC.
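The national growth figures cited earlier in this section can be reproduced with simple growth-rate arithmetic. The short Python sketch below is illustrative only; it uses the staffing and patient counts reported above and shows that the roughly 22 percent rise in veterans receiving outpatient mental health care outpaced the roughly 9 percent rise in veterans using any VHA services.

```python
def percent_growth(start: float, end: float) -> float:
    # Percentage change from the start value to the end value.
    return (end - start) / start * 100

# Figures reported above for fiscal years 2010 through 2014.
staffing = percent_growth(11_138, 13_795)            # outpatient mental health full-time equivalents
mh_patients = percent_growth(1_259_300, 1_533_600)   # veterans receiving outpatient mental health care
all_patients = percent_growth(5_441_059, 5_955_725)  # veterans using any VHA services

print(round(staffing))     # about 24 percent
print(round(mh_patients))  # about 22 percent
print(round(all_patients)) # about 9 percent
```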
In 2013, 10 VAMCs across VHA established partnerships with 23 community mental health clinics (CMHCs), as required by an August 2012 Executive Order, in an effort to help VHA meet veterans' mental health needs; these CMHCs provided mental health care to a limited number of veterans. The over 2,400 mental health appointments that veterans received through the CMHCs accounted for approximately 2 percent of the total mental health care provided across the 10 VAMCs. The most common service veterans received was individual therapy or counseling, but other commonly provided services included group therapy, medication management, and treatment for substance abuse (including intensive outpatient treatment and 28-day residential programs). Veterans were generally satisfied with the care they received from the CMHCs in the pilot, according to VHA's survey. Veterans who were referred between January 2013 and December 2013 were surveyed retrospectively; 61 percent were completely or somewhat satisfied with the care they received, and 19 percent were completely or somewhat dissatisfied. Most of the 10 VAMCs established partnerships with one or two CMHCs, although one of the participating VAMCs, Atlanta, established seven partnerships. As such, nearly half of the care provided through the pilot program was through partnerships with the Atlanta VAMC. The Atlanta CMHCs provided 1,150 appointments to veterans, while the partnerships with other VAMCs generally provided far fewer appointments. For example, the Indianapolis and Mountain Home VAMCs' partnerships provided the next highest number of appointments, with 664 and 170 appointments, respectively, while certain CMHCs that partnered with the Sioux Falls VAMC provided fewer than 10 appointments. (See table 6 for additional information on the community provider pilot sites.) The most common partnership between VAMCs and CMHCs was for the CMHC to provide care on a fee basis. This type of arrangement was used for 12 CMHCs. Payments under fee-basis care are made to non-VA health care providers on an individual veteran basis. The seven CMHCs that partnered with the Atlanta VAMC provided care through locally negotiated contracts, which established the specific services to be provided to veterans. Finally, four CMHCs established partnerships through which VHA mental health providers, located elsewhere, provided telemental health care to veterans at designated CMHCs that were closer to veterans' homes than either the nearest VAMC or CBOC. While the pilot program has ended, some VAMCs continued their partnerships with affiliated CMHCs. When the pilot ended, the funding for the partnerships ended as well. VAMCs that continued their partnerships had to identify new funding sources, and in general, staff reported that they funded the ongoing partnerships through their normal funding mechanisms. For example, officials from the Atlanta VAMC said that they used money from their discretionary budget to fund the care provided by the CMHCs after the pilot ended. VAMC officials who ended their relationship with CMHCs generally reported that they did so due to a perceived lack of veteran interest.
VHA and CMHC officials described a number of successes and challenges related to the pilot program. Improved capacity and communication were among the community provider pilot successes:
Improved capacity. Officials at one VAMC said they would not be able to maintain mental health access at current levels without the capacity provided by the pilot sites.
Additionally, officials at three VAMCs said their partnerships allowed them to expand access by providing additional and more convenient care to veterans living in rural areas. Specifically, officials from one of these VAMCs said that, prior to their telemental health partnership with a CMHC, some veterans traveled 2 or more hours to receive care, but using the CMHC drastically reduced veterans' travel times and the VAMC's travel reimbursement costs. Similarly, many veterans reported that they were able to receive care at CMHCs that were much closer than the nearest VAMC or CBOC, according to a VHA survey. VHA's survey also found that approximately 80 percent of veterans reported travel times of more than 30 minutes to the nearest VAMC, while approximately 50 percent reported that traveling to the nearest CMHC took less than 30 minutes.
Improved communication. Both VAMC and CMHC officials said that having a VAMC liaison on site or a dedicated point of contact improved communication. Specifically, one VAMC used previously established relationships with CMHCs to identify pilot sites, and used the pilot program to embed liaisons, who were VAMC employees, which helped facilitate communication between the VAMC and CMHCs. Embedded liaisons, who were registered nurses, were responsible for veteran outreach, resolving complaints, and ensuring that veterans received care comparable to that provided by the VAMC. Officials at two other VAMCs improved communication by identifying key points of contact. These points of contact worked to improve communications by centralizing information sharing within the VAMCs and CMHCs and addressing veterans' or other concerns, such as billing confusion.
Challenges with the community provider pilot included a number of administrative issues, as well as concerns about the appropriateness of care:
Medical documentation and billing. Officials at two VAMCs noted difficulty receiving timely medical documentation, and CMHC officials we spoke with also described difficulty receiving timely payments from VAMCs. VAMC officials reported that delays in receiving medical documentation could limit the ability of VHA to provide quality care if a veteran returned to the VAMC for care prior to the VAMC receiving the veteran's medical documentation. CMHC officials reported waiting, at times for months, for payments or resubmitting paperwork multiple times because the VAMC appeared to have misplaced it. VHA recommends, as a best practice, including incentives for timely documentation in contracts with CMHCs, but some VAMC officials noted that they had little leverage when trying to create or use such incentives. For example, officials at one facility said they had very little leverage in obtaining documentation because they could not withhold billing while waiting for documentation.
Technical challenges. Both VAMC and CMHC officials in our review also reported experiencing technical challenges, particularly related to the transfer of medical files and the use of telemental health technology. Two VAMCs reported depending on secure fax to exchange information because the VAMCs and the CMHCs used different computer systems. VHA officials said they plan to provide an internal report to VAMCs recommending, among other things, that VAMCs and CMHCs establish standards and plans for sharing information to reduce the impact on care and workload while ensuring confidentiality.
Confusion among available non-VA programs.
Some VAMC officials expressed confusion about the different non-VA medical programs, including the CMHC partnerships, available to veterans. Some of the VAMCs in the pilot program extended their partnerships with the CMHCs after the pilot program's end. A VHA document and VHA officials indicated that PC3 and VCP are now the primary programs for obtaining non-VA care of all kinds, including mental health care, although VAMC officials reported it is not always clear which option should be used. Veterans at a facility that continues to fund ongoing partnerships with CMHCs after the pilot program ended would have at least three options for non-VA care (with PC3 and VCP being the primary options). VAMC officials also reported that some providers and patients were unaware of the CMHCs as a treatment option for mental health care and that there is also confusion among patients regarding which services VHA will pay for at non-VA facilities. VHA officials said that veterans generally work with providers to identify the most appropriate non-VA option for mental health care. VHA central office officials said they leave non-VA care decisions up to the individual VAMCs and generally do not review their non-VA care coordination decisions. Previous reports from us and others have highlighted inefficiencies in non-VA care delivery.
Concern about appropriateness of care. VAMC and VISN officials also expressed concern about the appropriateness and effectiveness of referring veterans to community providers for mental health care, and had concerns about the ability of community providers to provide culturally competent and high-quality care to veterans. VAMC and CMHC officials noted the importance of providers delivering culturally competent mental health care to veterans, and some VAMCs were reported to have provided such training to community providers. Some VHA officials expressed concern about whether enough providers in certain areas have the necessary training and experience to treat certain types of veterans, and one VAMC established guidelines regarding which veterans were eligible to be referred for non-VA care and which were not. VAMC officials said that some veterans preferred receiving care through VHA rather than through a CMHC. Together these factors may have contributed to veterans generally choosing to remain within the VHA system for mental health care. Finally, veterans referred to a CMHC did not always receive care, for a variety of reasons, including that some veterans did not want to receive mental health care once they were referred or veterans did not show up for scheduled appointments.
Our review found that veterans at the selected sites were generally receiving mental health care within 30 days of their preferred dates. A veteran's preferred date is the basis for how VHA calculates wait times, although this approach may not accurately reflect veterans' overall wait times. In particular, the way in which VHA calculates the key wait-time measure for new veterans generally does not account for the full amount of time it takes veterans to receive their full mental health evaluations, which we found ranged from 0 days to more than 200 days from their initial requests or referrals. VHA officials told us they are aware that the wait-time calculation does not include some portions of a veteran's wait, but said there is not currently consensus on what standard should be used to begin or end this calculation.
In addition, we found that VHA management of mental health care, as demonstrated through the use of clear policies and accurate measurement of performance, could be improved in three areas. First, the existence of two conflicting access policies (14 days versus 30 days) for a full mental health evaluation, the primary entry point for mental health care, creates confusion among VAMC officials and providers about which policy they are expected to meet. By issuing clarifying guidance, VHA would eliminate confusion and improve VAMC officials' ability to make decisions to prioritize and improve access. Second, VHA lacks guidance on open-access appointments, which has caused confusion about these appointments at the local level and may have contributed to some VAMCs not complying with VHA's scheduling policies. The lack of guidance on open-access appointments also could lead to inconsistent application of VHA's access policies, hinder VHA's ability to ensure that all veterans' needs are being met, and may skew the performance measurements of VAMCs that use them—specifically wait-time data and no-show rates—which would result in data that are not comparable across VAMCs. Third, a key way in which VHA measures access to mental health care is through the use of wait-time data. However, our interviews with local and VISN officials confirmed confusion about what definitions are in effect when calculating wait-time measures. The lack of guidance on the calculation of these measures limits their reliability and usefulness. Until VHA clarifies how different access measures are defined and calculated and communicates any changes over time, local and VISN officials are likely to face difficulties accurately assessing wait times and identifying needed improvements. Finally, our interviews with mental health appointment schedulers and review of medical records confirmed our previous findings about how the wait-time measures are subject to error. As a result, we are reiterating our previous recommendation that VHA take actions to improve the reliability of wait-time measures by clarifying the scheduling policy or identifying clearer wait-time measures that are not subject to interpretation or prone to scheduler error.
To enhance VHA's oversight of veteran mental health care and, in particular, improve and ensure the accuracy, reliability, and usefulness of its mental health data, we recommend that the Secretary of Veterans Affairs direct the Under Secretary for Health to take the following three actions:
Issue clarifying guidance on which of its access policies (e.g., 14-day or 30-day) should be used for scheduling new veterans' full mental health evaluations.
Issue guidance on how appointment scheduling for open-access appointments is to be managed.
Issue guidance about the definitions used to calculate wait times, such as how a new patient is defined, and communicate any changes in wait-time data definitions within and outside VHA.
VHA provided written comments on a draft of this report, which we have reprinted in appendix I. In its comments, VHA concurred with our three recommendations and described the agency's plans to implement each, but disagreed with certain findings. With regard to our first recommendation, VHA stated that the agency is in the process of revising the relevant access policy in the Uniform Handbook for scheduling full mental health evaluations to be consistent with the 30-day wait-time goal the agency established in response to the Choice Act.
VHA stated that it has already changed its data metrics and processes for measuring wait time to align with the 30-day goal, is in the process of revising the policy, and has plans to issue clarifying guidance about the policy revisions, metrics, and expectations for scheduling mental health evaluations. VHA plans to publicize this information through national calls and has set a target completion date of March 2016.
With regard to our second recommendation, VHA stated that it conducted training in summer 2015 for schedulers based on existing VHA policy that included instructions on how to schedule same-day appointments, which VHA considers to include open-access appointments. However, VHA's description of same-day appointments for individuals who need an initial mental health evaluation, which is to occur within 24 hours of a request or referral for mental health services, does not accurately represent what we observed at two VAMCs during our review. At one of these VAMCs, veterans who expressed a desire for, or were referred to, mental health services received their initial mental health evaluation over the phone, and then, rather than being given a scheduled appointment for the full mental health evaluation, were referred to the open-access clinic to subsequently seek care. We found that close to half of the veterans at this VAMC who were referred to the open-access clinic never showed up at the clinic, and follow-up was often inadequate. At the second VAMC, we found that veterans were referred to the open-access clinic, but also were given scheduled appointments for a future date. We reviewed the scheduler training materials provided by VHA, and it was unclear whether the type of same-day, walk-in appointments addressed by the training would apply to what we observed in the field. Moreover, given potential differences between certain types of walk-in appointments (e.g., walk-in clinics where no prior evaluation may be required and open-access clinics that include an evaluation prior to referral), issuing specific guidance for open-access appointments would help ensure that veterans' needs are being met and that data are comparable when VAMCs use different approaches.
With regard to our third recommendation, VHA provided an overview of the different places where data pertaining to wait times are released. VHA stated that it plans to provide an updated data definition document in October 2015 for the SAIL data and will issue an information letter in November 2015 that contains sources where the general public and VHA employees can find the definitions used to calculate wait times, including how a new patient is defined.
More generally, VHA stated that the draft report does not capture the many ways in which VHA ensures veterans receive the care they need when they want it. In particular, VHA commented that our approach did not highlight an initial assessment that veterans are to receive within 24 hours of initial contact. Although this assessment is discussed in the report, we did not use this measure because, according to VHA officials with whom we spoke, this information is not tracked consistently in VHA's medical record system. We clarified this point in the final report. We believe the report captures various ways VHA provides mental health care, including care provided in outpatient and inpatient settings.
In addition, VHA commented that the use of the preferred date provides meaningful data on wait times because it differentiates between the ideal date a veteran wants to be seen and those dates that are either before or after the veteran's preferred date. VHA commented further that it disagreed with our calculations of the overall time it takes for veterans to receive full mental health evaluations, because those calculations would not capture situations outside of its control, such as when a veteran wants to delay treatment. The preferred date is intended to take into account veterans' preferences. However, our calculations illustrate that the use of the preferred date does not always reflect how long veterans are waiting for care or the variation that exists not only between, but within, VAMCs. The report recognizes that many factors could impact a veteran's wait time, including the veteran's preference. Further, we do not recommend a specific method for calculating wait times. Rather, our calculations included the time that a veteran waited after initially requesting or being referred for care, but before an appointment was scheduled, which is when the preferred date is set. During the period of time prior to establishing the preferred date, we found instances of veterans' requests or referrals for care being mismanaged or lost in the system, leading to delays in veterans' access to mental health care. Given the potential vulnerability of veterans seeking mental health care, we believe this time is an important part of the veteran's overall experience that provides meaningful information for VHA. Our current and previous work, along with the work of the VA OIG, highlights the limitations of VHA's current scheduling practices, leading us to reiterate our previous recommendation that VHA take actions to improve the reliability of wait-time measures by clarifying the scheduling policy or identifying clearer wait-time measures that are not subject to interpretation or prone to scheduler error. VHA also commented that the full mental health evaluation should be considered the start of a veteran's treatment, and that therefore there is no delay in care between this evaluation and the delivery of specific interventions. However, during the course of our work, we found a lack of consensus among VHA officials on the appropriate standard or measure for calculating the beginning of treatment. Further, given the wide variation we found across VAMCs in the average number of days between a veteran receiving a full evaluation and their first treatment, as discussed in the report, we believe this presents an important opportunity for VHA to improve veterans' experiences in accessing mental health care. Finally, VHA commented that our review did not extend to a clinical medical record review to assess quality of care. A full clinical medical record review was beyond the scope of our work. Our objectives in reviewing a selection of veterans' files were to examine veterans' access to and VHA's oversight of timely mental health care. Our review allowed us to address both objectives and provide recommendations for improving veterans' access to mental health care.
We are sending copies of this report to the appropriate congressional committees, the Secretary of Veterans Affairs, and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or at draperd@gao.gov.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. In addition to the contact named above, Lori Achman, Assistant Director; Jennie F. Apter; Robin Burke; Jyoti Gupta; Jacquelyn Hamilton; Sarah Harvey; Eagan Kemp; David Lichtenfeld; Vikki L. Porter; Brienne Tierney; and Malissa G. Winograd made key contributions to this report. | Between 2005 and 2013, the number of veterans receiving mental health care from VHA increased 63 percent, outpacing overall growth in veterans receiving any VHA health care. In fiscal year 2014, VHA spent more than $3.9 billion providing outpatient specialty mental health care (mental health care) to more than 1.5 million veterans. GAO was asked to examine VHA's efforts to provide timely access to mental health care for veterans. This report examines, among other things, (1) veterans' access to timely mental health care, and (2) VHA's related oversight. GAO conducted site visits to five VAMCs selected to provide variation in factors such as location and mental health care utilization rates; reviewed a randomly selected, nongeneralizable sample of 100 medical records (20 from each of the five selected VAMCs) for veterans new to mental health care who received treatment between July 1, 2014, and September 30, 2014; and interviewed VHA and VAMC officials on VHA's measures and oversight of access to mental health care. GAO evaluated VHA's oversight of access to mental health care against relevant federal standards for internal control.
The way in which the Department of Veterans Affairs' (VA) Veterans Health Administration (VHA) calculates veteran mental health wait times may not always reflect the overall amount of time a veteran waits for care. VHA uses a veteran's preferred date (determined when an appointment is scheduled) to calculate the wait time for that patient's full mental health evaluation, the primary entry point for mental health care. Of the 100 veterans whose records GAO reviewed, 86 received full mental health evaluations within 30 days of their preferred dates. On average, this was within 4 days. However, GAO also found veterans' preferred dates were, on average, 26 days after their initial requests or referrals for mental health care, and ranged from 0 to 279 days. Further, GAO found the average time in which veterans received their first treatment across the five VA medical centers (VAMC) in its review ranged from 1 to 57 days from the full mental health evaluation. Conflicting access policies for allowable wait times for a full mental health evaluation—14 days (according to VHA's mental health handbook) versus 30 days (set in response to recent legislation) from the veteran's preferred date—created confusion among VAMC officials about which policy they are expected to follow. These conflicting policies are inconsistent with federal internal control standards and can hinder officials' ability to ensure veterans are receiving timely access to mental health care.
VHA monitors access to mental health care, but the lack of clear policies on wait-time data precludes effective oversight. GAO found VHA's wait-time data may not be comparable over time and between VAMCs. Specifically:
Data may not be comparable over time. VHA has not clearly communicated the definitions used, such as how a new patient is identified, or changes made to these definitions.
This limits the reliability and usefulness of the data in determining progress in meeting stated objectives for veterans' timely access to mental health care.
Data may not be comparable between VAMCs. For example, when open-access appointments are used, data are not comparable between VAMCs. Open-access appointments are typically blocks of time for veterans to see providers without a scheduled appointment. GAO found inconsistencies in the implementation of these appointments, including one VAMC that manually maintained a list of veterans seeking mental health care outside of VHA's scheduling system. Without guidance stating how to manage and track open-access appointments, data comparisons between VAMCs may be misleading. Moreover, VAMCs may lose track of patients referred for mental health care, placing veterans at risk for negative outcomes.
GAO recommends that VHA issue clarifying guidance on (1) access policies; (2) definitions used to calculate wait times; and (3) how open-access appointments are to be managed. VHA concurred with GAO's recommendations but disagreed with certain of its findings, such as GAO's calculation of overall wait times. GAO maintains that its findings, as discussed in the report, are valid.
The success of a homeland security strategy relies on the ability of all levels of government and the private sector to communicate and cooperate effectively with one another. Activities that are hampered by organizational fragmentation, technological impediments, or ineffective collaboration blunt the nation's collective efforts to prevent or minimize terrorist acts. GAO and other observers of the federal government's organization, performance, and accountability for combating terrorism and homeland security functions have long recognized the prevalence of gaps, duplication, and overlaps driven in large part by the absence of a central policy focal point, fragmented missions, ineffective information sharing, human capital needs, institutional rivalries, and cultural challenges. In recent years, GAO has made numerous recommendations related to changes necessary for improving the government's response to combating terrorism. Prior to the establishment of the Office of Homeland Security (OHS), GAO found that the federal government lacked overall homeland security leadership and management accountable to both the President and Congress. GAO has also stated that fragmentation exists both in the coordination of domestic preparedness programs and in efforts to develop a national strategy. GAO believes that the consolidation of some homeland security functions makes sense and will, if properly organized and implemented, over time lead to more efficient, effective, and coordinated programs, better information sharing, and a more robust protection of our people, borders, and critical infrastructure. At the same time, even the proposed Department of Homeland Security (DHS) will still be just one of many players with important roles and responsibilities for ensuring homeland security. In addition, the creation of DHS will not be a panacea. It will create certain new costs and risks, which must be addressed. As with so many other homeland security areas, intelligence and information sharing involves many stakeholders who must work together to achieve common goals. Effective analysis, integration, and dissemination of intelligence and other information critical to homeland security requires the involvement of the Central Intelligence Agency (CIA), the Federal Bureau of Investigation (FBI), the National Security Council (NSC), the National Security Agency (NSA), the Department of Defense (DOD), and a myriad of other agencies, and will also include the proposed DHS. State and local governments and the private sector also have critical roles to play – as do significant portions of the international community. Information is already being shared between and among numerous government and private sector organizations, and more can be done to facilitate even greater sharing, analyzing, integrating, and disseminating of information. We have observed fragmentation of information analysis and sharing functions potentially requiring better coordination in many homeland security areas. For example, in a recent report on critical infrastructure protection (CIP), we indicated that some 14 different agencies or components had responsibility for analysis and warning activities for cyber CIP. Our recent testimony on aviation security indicated that the Immigration and Naturalization Service (INS), the FBI, and the Department of State all need the capacity to identify aliens in the United States who are in violation of their visa status, have broken U.S.
laws, or are under investigation for criminal activity, including terrorism. GAO has also noted that information sharing coordination difficulties can occur within single departments, such as those addressed in our July 2001 review of FBI intelligence investigations and coordination within the Department of Justice. Procedures established by the Attorney General in 1995 required, in part, that the FBI notify the Criminal Division and the Office of Intelligence Policy and Review whenever a foreign counterintelligence investigation utilizing authorized surveillance and searches develops "…facts or circumstances…that reasonably indicate that a significant federal crime has been, is being, or may be committed…." However, according to Criminal Division officials, required notifications did not always occur and often, when they did, were not timely. The Attorney General and the FBI issued additional procedures to address the coordination concerns and ensure compliance, but these efforts have not been institutionalized. This country has tremendous resources at its disposal, including leading-edge technologies, a superior research and development base, extensive expertise, and significant human capital resources. However, there are substantial challenges in leveraging these tools and using them effectively to ensure that timely, useful information is appropriately disseminated to prevent or minimize terrorist attacks. One challenge is determining and implementing the right format and standards for collecting data so that disparate agencies can aggregate and integrate data sets. For example, Extensible Markup Language (XML) standards are one option for exchanging information among disparate systems. Further, guidelines and procedures need to be specified to establish effective data collection processes, and mechanisms need to be put in place to make sure that this happens – again, a difficult task, given the large number of government, private, and other organizations that will be involved in data collection. Mechanisms will be needed to disseminate data, making sure that it gets into the hands of the right people at the right time. It will be equally important to disaggregate information in order to build baselines (normative models) of activity for detecting anomalies that would indicate the nature and seriousness of particular vulnerabilities. Additionally, there is a lack of connectivity between databases and technologies important to the homeland security effort. Databases belonging to federal law enforcement agencies, for example, are frequently not connected, nor are the databases of the federal, state, and local governments. In fact, we have reported for years on federal information systems that are duplicative and not well integrated. Ineffective collaboration among homeland security stakeholders remains one of the principal impediments to integrating and sharing information in order to prevent and minimize terrorist attacks. The committees' joint inquiry staff's initial report, which details numerous examples of strategic information known by the intelligence community prior to September 11th, highlights the need to better ensure effective integration, collaboration, and dissemination of critical material.
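As an illustration of the data-format point raised above, in which common standards such as XML give disparate agencies a shared structure for exchanging and aggregating records, the short Python sketch below uses the standard library's xml.etree.ElementTree module. The element names, field values, and the notion of a "threat report" record are hypothetical assumptions for illustration only and do not reflect any actual federal data standard.

```python
import xml.etree.ElementTree as ET

# One agency serializes a record using an agreed-upon (hypothetical) set of element names.
record = ET.Element("ThreatReport", attrib={"source": "AgencyA", "id": "example-001"})
ET.SubElement(record, "Subject").text = "hypothetical subject of interest"
ET.SubElement(record, "Location").text = "hypothetical location"
ET.SubElement(record, "Reported").text = "2002-10-01"
shared_bytes = ET.tostring(record, encoding="utf-8")

# A receiving agency with a different internal system can still parse the shared record,
# because both sides rely on the same tag names rather than on each other's databases.
parsed = ET.fromstring(shared_bytes)
summary = {child.tag: child.text for child in parsed}
print(parsed.attrib["source"], summary)
```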
The joint inquiry staff's report focuses on the national intelligence community, but its implications are clearly evident for all homeland security stakeholders – government at all levels, as well as the private sector, must work closely together to analyze, integrate, and appropriately disseminate all useful information to the relevant stakeholders in order to combat terrorism and make the nation more secure. GAO recognizes that this goal is easier to articulate than achieve and that some long-standing obstacles to improving information sharing between and among stakeholders at all levels will require significant changes in organizational cultures, shifts in patterns of access to and limitations on information, and improved processes to facilitate communication and interaction. GAO's ongoing work illuminates some of the issues. For instance, officials from the Department of Justice, the FBI, and the Office of the Secretary of Defense indicated that the vast majority of information—about 90 percent—is already publicly available, and that only about 10 percent of the information is classified, sensitive, or otherwise restricted. The officials said that the expectation for all homeland security participants to obtain actionable information (actionable intelligence is information that is specific enough to tell who, what, where, and when an attack will take place) is unrealistic because, in most cases, the data do not exist or cannot be recognized as actionable. These officials also said that they do share actionable information with appropriate entities, but must also balance the release of the information against the possibility of disclosures that may reveal the sources and methods used to collect the information. Nonfederal officials tend to echo these concerns. Since September 11th, GAO has met with representatives of various state and local organizations and conducted dozens of case studies of transit authorities, port authorities, pipeline safety commissions, and other entities, as well as testified before and heard testimony from federal, state, and local officials at 11 congressional field hearings around the country. State and local officials continue to be frustrated by difficulties in the communication and sharing of threat information among all levels of government. Some of the problems they cited include: limited access to information because of security clearance issues, the absence of a systematic top-down and bottom-up information exchange, and uncertainties regarding the appropriate response to a heightened alert from the new homeland security advisory system. It is clear that sharing, analyzing, integrating, and disseminating information needs to occur both in and between all levels of government -- and throughout organizations both vertically and horizontally. A number of steps have been taken to address these issues, but clearly more needs to be done. Following the terrorist attacks of September 11th, a review by the Department of Justice found that America's ability to detect and prevent terrorism had been undermined significantly by restrictions that limit the intelligence and law enforcement communities' access to, and sharing of, information. The USA Patriot Act, enacted shortly after the terrorist attacks, was designed to address this problem through enhanced information sharing and updated information-gathering tools.
The Patriot Act gives federal law enforcement agencies greater freedom to share information and to coordinate their efforts in the war on terrorism. Methods to use this authority are now being established and implemented, but the effectiveness of these changes will need to be evaluated. Moreover, the private sector has a critical role in reducing our vulnerability to terrorists. The national strategy for homeland security states: "Government at the federal, state, and local level must actively collaborate and partner with the private sector, which controls 85 percent of America's infrastructure." The strategy further states that the government at all levels must enable the private sector's ability to carry out its protection responsibilities through effective partnerships and designates the proposed DHS as the primary contact for coordination at the federal level. Recently, the President's Critical Infrastructure Protection Board issued a strategy that recognizes that all Americans have a role to play in cyber security and identifies market mechanisms for stimulating sustained actions to secure cyberspace. The strategy recommends that the federal government identify and remove barriers to public-private information sharing and promote the timely two-way exchange of data to increase cyberspace security. Although industry groups already exchange security data, confidentiality concerns over the release of information may limit private sector participation. For example, the technology industry has said that any security information shared with the government should be exempt from disclosure under the Freedom of Information Act, which provides that any person has the right to request access to federal agency records or information. GAO has also reported on how public-private information sharing practices can benefit CIP. In a report issued last October, GAO cited a number of important practices, including: establishing trust relationships with a wide variety of federal and nonfederal entities that may be in a position to provide potentially useful information and advice on vulnerabilities and incidents; developing standards and agreements on how information will be used; establishing effective and appropriately secure communications; and taking steps to ensure that sensitive information is not inappropriately disseminated, which may require statutory change. Clearly, these practices are applicable to intelligence and information sharing in the broadest sense—and for all stakeholders. Effectively implementing these practices will require using the full range of management and policy tools. GAO believes that the challenges facing the homeland security community require a commitment to focus on transformational strategies, including strengthening the risk management framework, refining the strategic and policy guidance structure to emphasize collaboration and integration among all relevant stakeholders, and bolstering the fundamental management foundation integral to effective public sector performance and accountability. Implementation of these strategies, along with effective oversight, will be necessary to institutionalize and integrate a long-term approach to sustainable and affordable homeland security. The events of September 11th have clearly shown the need for a comprehensive risk and threat assessment. Such an assessment, which needs to be integrated at all levels within the homeland security community, is necessary to better protect the nation's people, borders, and property.
As your committees’ work indicates, threats are many, and sources are numerous. A comprehensive assessment can help the nation to better understand and manage the risks associated with terrorism. Moreover, a comprehensive risk and threat assessment is critical to setting priorities and allocating resources. There is no such thing as zero risk and, therefore, hard choices must be made given our limited resources over the coming years. Previously, GAO observed that the federal government has not effectively planned and implemented risk assessment and management efforts. We noted in testimony before Congress last October that individual federal agencies have efforts under way, but the results to date have been inconclusive. In the past, we have recommended that the FBI and the DOD enhance their efforts to complete threat and vulnerability assessments and to work with state and local governments in order to provide comprehensive approaches. Although some of this work was accomplished, delays resulting from the September 11th attacks have prevented their completion. Nevertheless, assessments can help in efforts to pinpoint risks and reallocate resources: For example, after September 11th the Coast Guard conducted initial risk assessments of the nation’s ports. The Coast Guard identified high-risk infrastructure and facilities within specific areas of operation, which helped it to determine how to deploy resources to better ensure harbor security. The Administration clearly recognizes the importance of such assessments. The national homeland security strategy points out that vulnerability assessments must be an integral part of the intelligence cycle for homeland security activities. They would allow planners to project the consequences of possible terrorist attacks against specific facilities or different sectors of the economy or government. The strategy also states the U.S. government does not now perform comprehensive vulnerability assessments of all the nation’s critical infrastructure and key assets. GAO has long advocated the development and implementation of a national strategy to integrate and manage homeland security functions. The national strategy for homeland security released by the Administration last summer recognizes information sharing and systems as key factors cutting across all mission areas in linking and more effectively using the nation’s information systems to better support homeland security. The issuance of this strategy is a very important step. Moreover, information systems and processes will need to be better integrated to support the goals established by the strategy. In our current world, we can no longer think of information sharing, analysis, integration, and dissemination in terms of just the traditional intelligence community. Today, a broader network for information sharing includes the traditional intelligence community, U.S. allies, other federal agencies, state and local governments, and the private sector. To optimize such a network, it is important to have a strong, strategic planning framework and a supporting policy structure. In addition, the national strategy identified one key homeland security mission area as intelligence and warning to detect and prevent terrorist actions. The intent is to provide timely and useful actionable information based on the review and analysis of homeland security information. 
The national strategy describes a number of initiatives to better develop opportunities for leveraging information sharing among homeland security stakeholders, including:
Integrate information sharing across the federal government. This initiative addresses coordinating the sharing of essential homeland security information, including the design and implementation of an interagency information architecture to support efforts to find, track, and respond to terrorist threats. This effort is among the Administration's budget priorities for fiscal year 2004.
Integrate information sharing across state and local governments, private industry, and citizens. This initiative describes efforts to disseminate information from the federal government to state and local homeland security officials. One effort, to allow the exchange of information on federal and state government Web sites, has been completed.
Adopt common "meta-data" standards for electronic information relevant to homeland security. This initiative is intended to integrate terrorist-related information from government databases and allow the use of "data mining" tools for homeland security. This effort is under way.
Improve public safety emergency communications. This initiative is intended to develop comprehensive emergency communications systems that can disseminate information about vulnerabilities and protective measures and help manage incidents. State and local governments often report that there are deficiencies in their communications capabilities, including the lack of interoperable systems. Such systems are necessary between and among all levels of government. This effort is planned, but no timeline is indicated.
Ensure reliable public health information. The last initiative is intended to address reliable communication between medical, veterinary, and public health organizations. It is under way.
While these initiatives provide a starting point for improved information sharing, their effective and timely implementation is not assured. A commitment to achieve these objectives must be emphasized. Implementation will require integration, coordination, and collaboration between organizations both within and outside the federal government. Further, the initiatives tend to rely on the creation of DHS for their complete implementation, a department that will require a considerable transition period to reach full potential. Improvements in efficiency and effectiveness are expected in the long term, but there will be additional costs and challenges as the new department faces tremendous communications, human capital, information technology, and other integration challenges. Moreover, it is also important to note that the national strategy for homeland security is one of several national strategies that address general and specific security and terrorism-related issues. In addition to the homeland security strategy, the Administration recently released a national security strategy. The Administration has stated that the national security strategy could, in conjunction with the homeland security strategy, be viewed as an overarching framework. There are also requirements for several other strategies that cover specific aspects of national and homeland security. These include the National Strategy for Combating Terrorism, National Strategy to Combat Weapons of Mass Destruction, National Strategy to Secure Cyberspace, National Money Laundering Strategy, National Defense Strategy, and National Drug Control Strategy.
These strategies reflect important elements supporting national and homeland security. It is important that clear linkages be established among the various strategies to ensure common purpose within an overarching framework in order to clearly define specific roles, responsibilities, and resource priorities. An overarching, integrated framework can help to sort out issues of potential duplication, overlap, and conflict – not only for the federal government, but for all key stakeholders. While the individual plans will articulate roles and responsibilities, as well as set goals, objectives, and priorities for their areas, effective integration is necessary to ensure that initiatives are undertaken that complement, not conflict with, each other. Further, integration would allow for the better utilization of resources. Given the many challenges we face, we do not have the resources to do everything and must make some hard choices. Finally, a comprehensive, integrated strategic framework requires a review of the policies and processes that currently guide sharing, analysis, integration, and dissemination of intelligence and other critical information to homeland security stakeholders. Indeed, the policy structure currently in place is principally the product of a Cold War environment, in which threats to the United States occurred mainly on foreign soil. New and emerging threats clearly demonstrate that terrorist acts can – and will – impact America at home. The changing nature of the threats presents an opportunity for the homeland security community to revisit the legal and policy structure to ensure that it effectively creates an environment for the type of broad-based information sharing needed to protect America at home. It is not just the intelligence community, or the federal government, that has roles, as well as needs, in this evolving environment. Information can be collected by many sources and analyzed to identify potential threats. This information must be disseminated to all relevant parties – whether it is to a federal agency or another level of government. The volume and sources of threats, as your committees have reported, present new and serious challenges to our ability to analyze and integrate information into meaningful threat assessments. Not least, this will require attention to government’s capacity to handle the increased volume of information. Our policy structures need to adapt to these challenges. In fact, the government has recently implemented several measures that promote the sharing of information between all levels of government. For example, the USA Patriot Act provides for greater sharing of intelligence information among federal agencies. The FBI has also implemented several initiatives that would increase information sharing between all levels of government, including increasing the number of its Joint Terrorism Task Forces, to be located at each of its 56 field offices; and establishing the Terrorism Watch List to serve as its single, integrated list of individuals of investigative interest. The FBI plans to make the list accessible throughout the law enforcement and intelligence communities. All of these are recent changes, of course, and will take time to fully implement. It will be important to assess how effective these and other changes are in promoting needed and appropriate information sharing. GAO stands ready to assist the Congress in these efforts.
As the recent proposals to create DHS indicate, the terrorist events of last fall have provided an impetus for the government to look at the larger picture of how it provides homeland security and how it can best accomplish associated missions – both now and over the long term. This imperative is particularly clear for the homeland security community, where information sharing and collaboration issues remain a challenge. In this environment, there exists a very real need and possibly a unique opportunity to rethink approaches and priorities to enable the homeland security community to better target its resources to address the most urgent needs. In some cases, the new emphasis on homeland security has prompted attention to long-standing problems that have suddenly become more pressing. In other cases, it will be equally important for organizations to focus on the fundamental building blocks necessary for effective public sector performance and accountability – foundations that readily apply to the homeland security community. In recent months, we have testified about the long-term implementation challenges that the homeland security community faces – not only in ensuring an effective transition to a consolidated DHS, but in strengthening the relationships among and between all stakeholders to facilitate transformational change that can be sustained in years to come. There are many tools that organizations involved in homeland security might consider to drive necessary changes for better collaboration and integration of information sharing activities. One such tool is the Chief Operating Officer (COO) concept. Strategic positioning of COOs can provide a central point to elevate attention on management issues and transformational change, to integrate various key management functions and responsibilities, and to institutionalize accountability for management issues and leading change. Despite some assertions to the contrary, there is no meaningful distinction between the intelligence community, other homeland security organizations, or even other public sector agencies when it comes to creating an environment where strong leadership and accountability for results drive a transformational culture. Over the years, GAO has made observations and recommendations about many success factors required for public sector effectiveness, based on effective management of people, technology, financial resources, and other issues, especially in its biannual Performance and Accountability Series on major government departments. These factors include the following: Strategic Planning: Leading results-oriented organizations focus on the process of strategic planning that includes involvement of stakeholders, assessment of internal and external environments, and an alignment of activities, core processes, and resources to support mission-related outcomes. Organizational Alignment: Operations should be aligned in a way that provides for effective sharing of information, consistent with the goals and objectives established in the national homeland security strategy. Communication: Effective communication strategies are key to any major transformation effort and help to instill an organizational culture that lends itself to effective sharing of information. Building Partnerships: A key challenge is the development and maintenance of homeland security partnerships at all levels of government and with the private sector, both in the United States and overseas.
Performance Management: An effective performance management system fosters institutional, unit, and individual accountability. Human Capital Strategy: As with other parts of the government, homeland security agencies must ensure that their homeland security missions are not adversely impacted by the government’s pending human capital crisis, and that they can recruit, retain, and reward a talented and motivated workforce with the required core competencies to achieve their missions and objectives. Information Management and Technology: State-of-the-art enabling technology is critical to enhance the ability to transform capabilities and capacities to share and act upon timely, quality information about terrorist threats. Knowledge Management: The homeland security community must foster policies and activities that make maximum use of the collective body of knowledge that will be brought together to determine and deter terrorist threats. Financial Management: All public sector entities have a stewardship obligation to prevent fraud, waste, and abuse, to use tax dollars appropriately, and to ensure financial accountability to the President, Congress, and the American people. Acquisition Management: The homeland security community, along with the proposed DHS, will in the coming years potentially have one of the most extensive sets of acquisition requirements in government. High-level attention to strong systems and controls for acquisition and related business processes will be critical both to ensuring success and maintaining integrity and accountability. Risk Management: Homeland security agencies must be able to maintain and enhance current states of readiness while transitioning and transforming themselves into more effective and efficient collaborative cultures. Creating and sustaining effective homeland security organizations will require strong commitment to these public sector foundations to foster our nation’s safety. Of all the management success factors applicable to the homeland security community, one of the most important is the establishment of effective communications and information systems. Such systems will likely be critical to our efforts to build an integrated approach to information sharing. Meaningful understanding of inter- and intra-agency information sharing (intelligence or otherwise) necessitates the development of models depicting both how this occurs today and how this should occur tomorrow to optimize mission performance. Such modeling is referred to as developing and implementing enterprise architectures, which in the simplest of terms can be described as blueprints (both business and technology) for transforming how an organization operates. Included in these architectures are information models defining, among other things, what information is needed and used by whom, where, when, and in what form. Without such an architectural context within which to view the entity in question, a meaningful understanding of the strengths and weaknesses of information sharing is virtually impossible. Currently, such an understanding within the homeland security arena does not exist. At OHS, steps are being taken to develop enterprise architectures for each of the proposed department’s four primary mission areas.
According to the chief architect for this effort, working groups have been established for three of the four homeland security mission areas, and they are in the process of developing business models (to include information exchange matrixes) that are based on the national strategy and that define how agencies currently perform these mission areas. For the fourth, which is information analysis and infrastructure protection (i.e., intelligence information sharing), the office is in the process of forming the working group. The goal of the groups is to follow OMB’s enterprise architecture framework and deliver, by December 31, 2002, an initial set of architecture models describing how homeland security agencies operate. Human capital is another critical ingredient required for homeland security success. The government-wide increase in homeland security activities has created a demand for personnel with skills in areas such as information technology, foreign language proficiencies, and law enforcement – without whom critical information has less chance of being shared, analyzed, integrated, and disseminated in a timely, effective manner. A GAO report issued in January 2002 stresses that foreign language translator shortages at some federal agencies, combined in part with advances in technology, have exacerbated translation backlogs in intelligence and other information. These shortfalls have adversely affected agency operations and hindered U.S. military, law enforcement, intelligence, counterterrorism, and diplomatic efforts. GAO believes it is reasonable for certain human capital and management flexibilities to be granted, provided that they are accompanied by adequate transparency and appropriate safeguards designed to prevent abuse and to provide for Congressional oversight. Such flexibilities might prove useful to other entities involved in critical information sharing activities. Moreover, the proposed department, similar to other federal agencies, would benefit from integrating a human capital strategy within its strategic planning framework. Naturally, this framework would apply to the intelligence community at large, as well as other homeland security stakeholders. While recent events certainly underscore the need to address the federal government’s human capital challenges, the underlying problem emanates from the longstanding lack of a consistent strategic approach to marshaling, managing, and maintaining the human capital needed to maximize government performance and assure government’s accountability. Serious human capital shortfalls are eroding the capacity of many agencies and threatening the ability of others to economically, efficiently, and effectively perform their missions. The federal government’s human capital weaknesses did not emerge overnight and will not be quickly or easily addressed. Committed, sustained, and inspired leadership and persistent attention from all interested parties will be essential if lasting changes are to be made and the challenges we face successfully addressed. GAO’s model of strategic human capital management embodies an approach that is fact-based, focused on strategic results, and incorporates merit principles and other national goals. As such, the model reflects two principles central to the human capital idea: People are assets whose value can be enhanced through investment. As with any investment, the goal is to maximize value while managing risk.
An organization’s human capital approaches should be designed, implemented, and assessed by the standard of how well they help the organization pursue its mission and achieve desired results or outcomes. The cornerstones of effective human capital planning include leadership; strategic human capital planning; acquiring, developing, and retaining talent; and building results-oriented organizational cultures. The homeland security and intelligence communities must include these factors in their management approach in order to foster high-performing organizations in this critical time. Finally, it is important to note that the success of our nation’s efforts to defend and protect our homeland against terrorism depends on effective oversight by the appropriate parts of our government. The oversight entities of the executive branch – including the Inspectors General, OMB, and OHS – have a vital role to play in ensuring expected performance and accountability. Likewise, the committees of the Congress and GAO, as the investigative arm of the legislative branch, have long-term and broad institutional roles to play in supporting the nation’s efforts to strengthen homeland security and prevent and mitigate terrorism. GAO recognizes the sensitive issues surrounding oversight of the intelligence and law enforcement communities, and we work collaboratively to find a balance between facilitating the needs of legitimate legislative oversight and preventing disclosure of national security and law enforcement sensitive information. Yet, as GAO has testified previously, our ability to be fully effective in our oversight role in homeland security, including the intelligence community, is at times limited. Historically, the FBI, CIA, NSA, and others have limited our access to information, and Congress’s requests for evaluations of the CIA have been minimal. Given both the increasing importance of information sharing in preventing terrorism and the increased investment of resources to strengthen homeland security, it seems prudent that constructive oversight of critical intelligence and information sharing operations by the legislative branch focus on implementing a long-term transformation program and fostering information sharing in the homeland security community. In summary, I have discussed the challenges and approaches to improving information sharing among homeland security organizations, as well as the overall management issues that they face along with other public sector organizations. However, the single most important element of any successful transformation is the commitment of top leaders. Top leadership involvement and clear lines of accountability for making management improvements are critical to overcoming an organization’s natural resistance to change, marshaling the resources needed to improve management, and building and maintaining organization-wide commitment to new ways of doing business. Organizational cultures will not be transformed, and new visions and ways of doing business will not take root, without strong and sustained leadership. Strong and visionary leadership will be vital to creating a unified, focused homeland security community whose participants can act together to help protect our homeland. This concludes my written testimony. I would be pleased to respond to any questions that you or members of the committees may have.

This appendix provides a compendium of selected GAO recommendations for combating terrorism and homeland security and their status.
GAO has conducted a body of work on combating terrorism since 1996 and, more recently, on homeland security. Many of our recommendations have been either completely or partially implemented, with particular success in the areas of (1) defining homeland security, (2) developing a national strategy for homeland security, (3) creating a central focal point for coordinating efforts across agencies, (4) tracking funds to combat terrorism, (5) improving command and control structures, (6) developing interagency guidance, (7) improving the interagency exercise program to maintain readiness, (8) tracking lessons learned to improve operations, (9) protecting critical infrastructure, (10) protecting military forces, (11) consolidating first responder training programs, (12) managing materials used for weapons of mass destruction, and (13) improving coordination of research and development. Overall, federal agencies have made realistic progress in many areas given the complexity of the environment confronting them. Many additional challenges remain, however, and some of GAO’s previous recommendations remain either partially implemented or have not been implemented at all. The information below details many of our key recommendations and the status of their implementation. The implementation of many of these recommendations may be affected by current proposals to transfer certain functions from a variety of federal agencies to the proposed Department of Homeland Security. Some of the recommendations have been modified slightly to fit into this format. Combating Terrorism: Status of DOD Efforts to Protect Its Forces Overseas (GAO/NSIAD-97-207, July 21, 1997). Recommendations, p. 20. We recommend that the Secretary of Defense direct the Chairman of the Joint Chiefs of Staff to develop common standards and procedures to include (1) standardized vulnerability assessments to ensure a consistent level of quality and to provide a capability to compare the results from different sites, (2) Department of Defense (DOD)-wide physical security standards that are measurable yet provide a means for deviations when required by local circumstances, and (3) procedures to maintain greater consistency among commands in their implementation of threat condition security measures. Implemented. (1) The Joint Staff has sponsored hundreds of vulnerability assessments—known as Joint Staff Integrated Vulnerability Assessments—based on a defined set of criteria. (2) The Joint Staff has issued one volume of DOD-wide construction standards in December 1999, and plans to complete two additional volumes by December 2002. (3) DOD has provided more guidance and outreach programs to share lessons learned among commands. To ensure that security responsibility for DOD personnel overseas is clear, we recommend that the Secretary of Defense take the necessary steps to ensure that the memorandum of understanding now under discussion with the Department of State is signed expeditiously. Further, the Secretary should provide the geographic combatant commanders with the guidance to successfully negotiate implementation agreements with chiefs of mission. Implemented. The Departments of Defense and State have signed a memorandum of understanding, and scores of country-level memorandums of agreement have been signed between the geographic combatant commanders and their local U.S. ambassadors or chiefs of mission. 
These agreements clarify who is responsible for providing antiterrorism and force protection to DOD personnel not under the direct command of the geographic combatant commanders. Combating Terrorism: Spending on Governmentwide Programs Requires Better Management and Coordination (GAO/NSIAD-98-39, Dec. 1, 1997). Recommendations, p. 13. We recommend that consistent with the responsibility for coordinating efforts to combat terrorism, the Assistant to the President for National Security Affairs of the National Security Council (NSC), in consultation with the Director, Office of Management and Budget (OMB), and the heads of other executive branch agencies, take steps to ensure that (1) governmentwide priorities to implement the national counterterrorism policy and strategy are established, (2) agencies’ programs, projects, activities, and requirements for combating terrorism are analyzed in relation to established governmentwide priorities, and (3) resources are allocated based on the established priorities and assessments of the threat and risk of terrorist attack. Partially implemented. (1) The Attorney General’s Five-Year Counter-Terrorism and Technology Crime Plan, issued in December 1998, included priority actions for combating terrorism. According to NSC and OMB, the Five-Year Plan, in combination with Presidential Decision Directives (PDD) 39 and 62, represented governmentwide priorities that they used in developing budgets to combat terrorism. (2) According to NSC and OMB, they analyzed agencies’ programs, projects, activities, and requirements using the Five-Year Plan and related presidential decision directives. (3) According to NSC and OMB, they allocated agency resources based upon the priorities established above. More recently, the Office of Homeland Security issued a National Strategy for Homeland Security, which also established priorities for combating terrorism domestically. However, there is no clear link between resources and threats because no national-level risk management approach has been completed to use for resource decisions. To ensure that federal expenditures for terrorism-related activities are well-coordinated and focused on efficiently meeting the goals of U.S. policy under PDD 39, we recommend that the Director, OMB, use data on funds budgeted and spent by executive departments and agencies to evaluate and coordinate projects and recommend resource allocation annually on a crosscutting basis to ensure that governmentwide priorities for combating terrorism are met and programs are based on analytically sound threat and risk assessments and avoid unnecessary duplication. Partially implemented. OMB now is tracking agency budgets and spending to combat terrorism. According to NSC and OMB, they have a process in place to analyze these budgets and allocate resources based upon established priorities. More recently, OMB also started tracking spending on homeland security—the domestic component of combating terrorism. However, there is no clear link between resources and threats. No national-level risk management approach has been completed to use for resource decisions. Combating Terrorism: Opportunities to Improve Domestic Preparedness Program Focus and Efficiency (GAO/NSIAD-99-3, Nov. 12, 1998). Recommendations, p. 22. 
We recommend that the Secretary of Defense—or the head of any subsequent lead agency—in consultation with the other five cooperating agencies in the Domestic Preparedness Program, refocus the program to more efficiently and economically deliver training to local communities. Implemented. DOD transferred the Domestic Preparedness Program to the Department of Justice on October 1, 2000. The Department of Justice implemented this recommendation by emphasizing the program’s train-the-trainer approach and concentrating resources on training metropolitan trainers in recipient jurisdictions. In June 2002, the President proposed that a new Department of Homeland Security take the lead for federal programs to assist state and local governments. We recommend that the Secretary of Defense, or the head of any subsequent lead agency, use existing state and local emergency management response systems or arrangements to select locations and training structures to deliver courses and consider the geographical proximity of program cities. Implemented. DOD transferred the Domestic Preparedness Program to the Department of Justice on October 1, 2000. The Department of Justice implemented this recommendation by modifying the programs in metropolitan areas and requiring cities to include their mutual aid partners in all training and exercise activities. In June 2002, the President proposed that a new Department of Homeland Security take the lead for federal programs to assist state and local governments. We recommend that the National Coordinator for Security, Infrastructure Protection and Counterterrorism actively review and guide the growing number of weapons of mass destruction (WMD) consequence management training and equipment programs and response elements to ensure that agencies’ separate efforts leverage existing state and local emergency management systems and are coordinated, unduplicated, and focused toward achieving a clearly defined end state. Partially implemented. NSC established an interagency working group called the Interagency Working Group on Assistance to State and Local Authorities. One function of this working group was to review and guide the growing number of WMD consequence management training and equipment programs. In a September 2002 report, we reported that more needs to be done to ensure that federal efforts are coordinated, unduplicated, and focused toward achieving a clearly defined end state—a results-oriented outcome as intended for government programs by the Results Act. In June 2002, the President proposed that a new Department of Homeland Security take the lead for federal programs to assist state and local governments. Combating Terrorism: Issues to Be Resolved to Improve Counterterrorism Operations (GAO/NSIAD-99-135, May 13, 1999). We recommend that the Attorney General direct the Director, Federal Bureau of Investigation (FBI), to coordinate the Domestic Guidelines and concepts of operation plan (CONPLAN) with federal agencies with counterterrorism roles and finalize them. Further, the Domestic Guidelines and/or CONPLAN should seek to clarify federal, state, and local roles, missions, and responsibilities at the incident site. Implemented. The Domestic Guidelines were issued in November 2000. The CONPLAN was coordinated with key federal agencies and was issued in January 2001. 
We recommend that the Secretary of Defense review command and control structures, and make changes, as appropriate, to ensure there is unity of command for DOD units participating in domestic counterterrorist operations, to include both crisis response and consequence management and cases in which they might be concurrent. Implemented. In May 2001, the Secretary of Defense assigned responsibility for providing civilian oversight of all DOD activities to combat terrorism and domestic WMD (including both crisis and consequence management) to the Assistant Secretary of Defense for Special Operations and Low-Intensity Conflict. Further, in October 2002, DOD will establish a new military command—the Northern Command—to manage command and control in domestic military operations to combat terrorism in support of other federal agencies. We recommend that the Secretary of Defense require the services to produce after-action reports or similar evaluations for all counterterrorism field exercises that they participate in. When appropriate, these after-action reports or evaluations should include a discussion of interagency issues and be disseminated to relevant internal and external organizations. Partially implemented. DOD has used its Joint Uniform Lessons Learned System to document observations and lessons learned during exercises, including interagency counterterrorist exercises. Many DOD units produce after-action reports, and many of them address interagency issues. However, DOD officials acknowledged that service units or commands do not always produce after-action reports and/or disseminate them internally and externally as appropriate. Combating Terrorism: Use of National Guard Response Teams Is Unclear (GAO/NSIAD-99-110, May 21, 1999). Recommendations, p. 20. We recommend that the National Coordinator for Security, Infrastructure Protection and Counterterrorism, in consultation with the Attorney General, the Director, Federal Emergency Management Agency (FEMA), and the Secretary of Defense, reassess the need for the Rapid Assessment and Initial Detection teams in light of the numerous local, state, and federal organizations that can provide similar functions and submit the results of the reassessment to Congress. If the teams are needed, we recommend that the National Coordinator direct a test of the Rapid Assessment and Initial Detection team concept in the initial 10 states to determine how the teams can best fit into coordinated state and federal response plans and whether the teams can effectively perform their functions. If the teams are not needed, we further recommend that they be inactivated. Partially implemented. With authorization from Congress, DOD established additional National Guard teams and changed their names from Rapid Assessment and Initial Detection teams to WMD Civil Support Teams. However, subsequent to our report and a report by the DOD Inspector General, which found some similar problems, DOD agreed to review the National Guard teams and work with other agencies to clarify their roles in responding to terrorist incidents. In September 2001, DOD restricted the number of teams to 32. Combating Terrorism: Need for Comprehensive Threat and Risk Assessments of Chemical and Biological Attack (GAO/NSIAD-99-163, Sept. 7, 1999). Recommendations, p. 22.
We recommend that the Attorney General direct the FBI Director to prepare a formal, authoritative intelligence threat assessment that specifically assesses the chemical and biological agents that would more likely be used by domestic-origin terrorists—nonstate actors working outside a state-run laboratory infrastructure. Partially implemented. The FBI agreed with our recommendation. The FBI, working with the National Institute of Justice and the Technical Support Working Group, produced a draft threat assessment of the chemical and biological agents that would more likely be used by terrorists. FBI officials originally estimated it would be published in 2001. However, the terrorist attacks in the fall of 2001 delayed these efforts. The FBI and the Technical Support Working Group are now conducting an updated assessment of chemical and biological terrorist threats. According to the FBI, the assessment is being done by experts in WMD and terrorist training manuals and will include the latest information available. The assessment, once completed, will be disseminated to appropriate agencies. We recommend that the Attorney General direct the FBI Director to sponsor a national-level risk assessment that uses national intelligence estimates and inputs from the intelligence community and others to help form the basis for and prioritize programs developed to combat terrorism. Because threats are dynamic, the Director should determine when the completed national-level risk assessment should be updated. Partially implemented. The Department of Justice and the FBI agreed with our recommendation. According to the FBI, it is currently working on a comprehensive national-level assessment of the terrorist threat to the U.S. homeland. The FBI said that this will include an evaluation of the chemical and biological weapons most likely to be used by terrorists and a comprehensive analysis of the risks that terrorists would use WMD. The FBI estimates the assessment will be completed in November 2002. Combating Terrorism: Chemical and Biological Medical Supplies Are Poorly Managed (GAO/HEHS/AIMD-00-36, Oct. 29, 1999). Recommendations, p. 10. We recommend that the Department of Health and Human Services’ (HHS) Office of Emergency Preparedness (OEP) and Centers for Disease Control and Prevention (CDC), the Department of Veterans Affairs (VA), and the U.S. Marine Corps Chemical Biological Incident Response Force (CBIRF) establish sufficient systems of internal control over chemical and biological pharmaceutical and medical supplies by (1) conducting risk assessments, (2) arranging for periodic, independent inventories of stockpiles, (3) implementing a tracking system that retains complete documentation for all supplies ordered, received, and destroyed, and (4) rotating stock properly. Partially implemented. Three of the recommendations have been implemented. However, only VA has implemented a tracking system to manage the OEP inventory. CDC is using an interim inventory tracking system. CBIRF has upgraded its database program to track medical supplies and is working toward placing its medical supply operations under a prime vendor contract. Combating Terrorism: Need to Eliminate Duplicate Federal Weapons of Mass Destruction Training (GAO/NSIAD-00-64, Mar. 21, 2000). Recommendations, p. 25. We recommend that the Secretary of Defense and the Attorney General eliminate duplicate training to the same metropolitan areas.
If the Department of Justice extends the Domestic Preparedness Program to more than the currently planned 120 cities, it should integrate the program with the Metropolitan Firefighters Program to capitalize on the strengths of each program and eliminate duplication and overlap. Partially implemented. DOD transferred the Domestic Preparedness Program to the Department of Justice on October 1, 2000. The Department of Justice, while attempting to better integrate the assistance programs under its management, continued to run the Domestic Preparedness Program as a separate program. In June 2002, the President proposed that a new Department of Homeland Security take the lead for federal programs to assist state and local governments. Combating Terrorism: Action Taken but Considerable Risks Remain for Forces Overseas (NSIAD-00-181, July 19, 2000). Recommendations, p. 26. To improve the effectiveness and increase the impact of the vulnerability assessments and the vulnerability assessment reports, we recommend that the Secretary of Defense direct the Chairman of the Joint Chiefs of Staff to improve the vulnerability assessment reports provided to installations. Although the Joint Staff is planning to take some action to improve the value of these reports, we believe the vulnerability assessment reports should recommend specific actions to overcome identified vulnerabilities. Not implemented. DOD believes that the changes in process at the time of our report addressed our recommendations. DOD is still in the process of implementing these actions. To ensure that antiterrorism/force protection managers have the knowledge and skills needed to develop and implement effective antiterrorism/force protection programs, we recommend that the Secretary of Defense direct the Assistant Secretary of Defense for Special Operations and Low-Intensity Conflict to expeditiously implement the Joint Staff’s draft antiterrorism/force protection manager training standard and formulate a timetable for the services to develop and implement a new course that meets the revised standards. Additionally, the Assistant Secretary of Defense for Special Operations and Low-Intensity Conflict should review the course content to ensure that the course has consistency of emphasis across the services. Partially implemented. DOD revised its training standards for antiterrorism/force protection managers, but the Army has not implemented the new training standards. We recommend that the Joint Chiefs of Staff should develop an antiterrorism/force protection best practices or lessons learned program that would share recommendations for both physical and process-oriented improvements. The program would assist installations in addressing common problems—particularly those installations that do not receive Joint Staff Integrated Vulnerability Assessment reports or others who have found vulnerabilities through their own assessments. Partially implemented. The Joint Chiefs of Staff have undertaken a number of lessons learned programs, but not all of the programs that would address this recommendation are operational. To provide Congress with the most complete information on the risks that U.S. 
forces overseas are facing from terrorism, we recommended that the Secretary of Defense direct the services to include in their next consolidated combating terrorism budget submission information on the number and types of antiterrorism/force protection projects that have not been addressed by the budget request and the estimated costs to complete these projects. Information on the backlog of projects should be presented by geographic command. Not implemented. DOD did not concur with this recommendation. DOD believes that there is no need to provide the additional information to Congress. Combating Terrorism: Federal Response Teams Provide Varied Capabilities; Opportunities Remain to Improve Coordination (GAO-01-14, Nov. 30, 2000). Recommendations, p. 27. To guide resource investments for combating terrorism, we recommend that the Attorney General modify the Attorney General’s Five-Year Interagency Counterterrorism and Technology Crime Plan to cite desired outcomes that could be used to develop budget requirements for agencies and their respective response teams. This process should be coordinated as an interagency effort. Partially implemented. The Department of Justice asserted that the Five-Year Plan included desired outcomes. We disagreed with the department and believed that what it cited as outcomes are outputs—agency activities rather than results the federal government is trying to achieve. The National Strategy for Homeland Security, issued in July 2002, supersedes the Attorney General’s Five-Year Plan as the interagency plan for combating terrorism domestically. This strategy does not include measurable outcomes, but calls for their development. We recommend that the Director, FEMA, take steps to require that the WMD Interagency Steering Group develop realistic scenarios involving chemical, biological, radiological, and nuclear agents and weapons with experts in the scientific and intelligence communities. FEMA agreed with the recommendation. GAO is working with FEMA to determine the status of implementation. In June 2002, the President proposed that a new Department of Homeland Security take the lead for developing and conducting federal exercises to combat terrorism. We recommend that the Director, FEMA, sponsor periodic national-level consequence management field exercises involving federal, state, and local governments. Such exercises should be conducted together with national-level crisis management field exercises. FEMA agreed with the recommendation. GAO is working with FEMA to determine the status of implementation. In June 2002, the President proposed that a new Department of Homeland Security take the lead for developing and conducting federal exercises to combat terrorism. Combating Terrorism: Accountability Over Medical Supplies Needs Further Improvement (GAO-01-463, Mar. 30, 2001). Recommendations, pp. 25 and 26. Partially implemented. CDC has implemented two of our recommendations and partially implemented one. Specifically, CDC has not finalized agreements with private transport companies to transport stockpiles in the event of a terrorist attack. It is currently using contracts between the federal government and the transport companies. Implemented. OEP has implemented all eight of our recommendations.
Enforcement Agency regulations, or move the supplies as soon as possible to a location that meets these requirements; issue a written policy on the frequency of inventory counts and finalize and implement approved national and local operating plans addressing VA’s responsibilities for the procurement, storage, management, and deployment of OEP’s stockpiles; train VA personnel and conduct periodic quality reviews to ensure that national and local operating plans are followed; and immediately contact the Food and Drug Administration or the pharmaceutical and medical supply manufacturers of items stored at its central location to determine the impact of items exposed to extreme temperatures, replace those items deemed no longer usable, and either add environmental controls to the current location or move the supplies as soon as possible to a climate-controlled space. To ensure that medical supplies on hand reflect those identified as being needed to respond to a chemical or biological terrorism incident, we recommend that the Marine Corps Systems Command program funding and complete the fielding plan for the CBIRF-specific authorized medical allowance list and that the Commandant of the Marine Corps require the Commanding Officer of CBIRF to adjust its stock levels to conform with the authorized medical allowance list and remove expired items from its stock and replace them with current pharmaceutical and medical supplies. Implemented. CBIRF has implemented all of our recommendations. Critical Infrastructure Protection: Significant Challenges in Developing National Capabilities (GAO-01-323, Apr. 25, 2001). Recommendations, pp. 57, 68, and 85. We recommend that the Assistant to the President for National Security Affairs, in coordination with pertinent executive agencies, establish a capability for strategic analysis of computer-based threats, including developing a related methodology, acquiring staff expertise, and obtaining infrastructure data; develop a comprehensive governmentwide data-collection and analysis framework and ensure that national watch and warning operations for computer-based attacks are supported by sufficient staff and resources; and clearly define the role of the National Infrastructure Protection Center (NIPC) in relation to other government and private-sector entities, including lines of authority among NIPC and NSC, Justice, the FBI, and other entities; NIPC’s integration into the national warning system; and protocols that articulate how and under what circumstances NIPC would be placed in a support function to either DOD or the intelligence community. Partially implemented. According to the NIPC Director, NIPC has received sustained leadership commitment from key entities, such as the Central Intelligence Agency and the National Security Agency, and it continues to increase its staff primarily through reservists and contractors.
The Director added that the NIPC (1) created an NIPC Senior Partners Group similar to a board of directors, which holds quarterly meetings with the senior leadership of each agency that details personnel to the NIPC in order to ensure that their interests are addressed with respect to future NIPC initiatives and program plans and to share with them the status of ongoing initiatives; (2) developed close working relationships with other Critical Infrastructure Protection (CIP) entities involved in analysis and warning activities, such as the Federal Computer Incident Response Center (FedCIRC), DOD’s Joint Task Force for Computer Network Operations, the Carnegie Mellon CERT® Coordination Center, and the intelligence and antivirus communities; and (3) developed and implemented procedures to more quickly share relevant CIP information, while separately continuing any related law enforcement investigation. In addition, the Director stated that two additional teams were created to bolster its analytical capabilities: (1) the critical infrastructure assessment team to focus efforts on learning about particular infrastructures and coordinating with respective infrastructure efforts and (2) the collection operations intelligence liaison team to coordinate with various entities within the intelligence community. We recommend that the Attorney General task the FBI Director to require the NIPC Director to develop a comprehensive written plan for establishing analysis and warning capabilities that integrates existing planning elements and includes milestones and performance measures; approaches (or strategies) and the various resources needed to achieve the goals and objectives; a description of the relationship between the long-term goals and objectives and the annual performance goals; and a description of how program evaluations could be used to establish or revise strategic goals, along with a schedule for future program evaluations. Partially implemented. The NIPC Director recently stated that NIPC has developed a plan with goals and objectives to improve its analysis and warning capabilities and that NIPC has made considerable progress in this area. The plan establishes and describes performance measures for both its Analysis and Warning Section and issues relating to staffing, training, investigations, outreach, and warning. In addition, the plan describes the resources needed to reach the specific goals and objectives for the Analysis and Warning Section. However, according to NIPC officials, the NIPC continues to work on making its goals more measurable, more reflective of performance, and better linked to future revisions of strategic goals. We recommend that the Attorney General direct the FBI Director to task the NIPC Director to ensure that the Special Technologies and Applications Unit has access to the computer and communications resources necessary to analyze data associated with the increasing number of complex investigations; monitor implementation of new performance measures to ensure that they result in field offices’ fully reporting information on potential computer crimes to the NIPC; and complete development of the emergency law enforcement plan, after comments are received from law enforcement sector members. Partially implemented. According to NIPC officials, the Special Technologies and Applications Unit has continued to increase its computer resources.
In addition, the Director stated that the NIPC had developed and implemented procedures to more quickly share relevant CIP information, while separately continuing any related law enforcement investigation. However, because of the NIPC’s reorganization in August 2002, when the Computer Investigation and Operations Section was moved from NIPC to the FBI’s Cyber Crime Division, it is important that NIPC establish procedures to continue this information sharing. In addition, an emergency law enforcement services sector plan has been issued. As the national strategy for critical infrastructure protection is reviewed and possible changes considered, we recommend that the Assistant to the President for National Security Affairs define NIPC’s responsibilities for monitoring reconstitution. The President’s Critical Infrastructure Protection Board released a draft strategy on September 18, 2002, for comment. The draft states that a strategic goal is to provide for a national plan for continuity of operations, recovery, and reconstitution of services during a widespread outage of information technology in multiple sectors. However, NIPC’s responsibilities regarding monitoring reconstitution are not discussed. We recommend that the Assistant to the President for National Security Affairs (1) direct federal agencies and encourage the private sector to better define the types of information that are necessary and appropriate to exchange in order to combat computer-based attacks and procedures for performing such exchanges, (2) initiate development of a strategy for identifying assets of national significance that includes coordinating efforts already under way, such as those at DOD and Commerce, and (3) resolve discrepancies between PDD 63 requirements and guidance provided by the federal Chief Information Officers Council regarding computer incident reporting by federal agencies. Partially implemented. NIPC officials told us that a new ISAC development and support unit had been created, whose mission is to enhance private-sector cooperation and trust, resulting in a two-way sharing of information. Officials informed us that NIPC has signed information sharing agreements with most of the ISACs formed, including those representing the telecommunications, information technology, water supply, food, emergency fire services, banking and finance, and chemical sectors. NIPC officials added that most of these agreements contained industry-specific cyber and physical incident reporting thresholds. NIPC has created the Interagency Coordination Cell to foster cooperation across government agencies in investigative matters and on matters of common interest. We recommend that the Attorney General direct the FBI Director to direct the NIPC Director to (1) formalize relationships between NIPC and other federal entities, including DOD and the Secret Service, and private-sector Information Sharing and Analysis Centers (ISACs) so that a clear understanding of what is expected from the respective organizations exists, (2) develop a plan to foster the two-way exchange of information between the NIPC and the ISACs, and (3) ensure that the Key Asset Initiative is integrated with other similar federal activities. Partially implemented. According to NIPC’s Director, the relationship between NIPC and other government entities has significantly improved since our review, and the quarterly meetings with senior government leaders have been instrumental in improving information sharing.
In addition, in testimony, officials from FedCIRC and the U.S. Secret Service have discussed the collaborative and cooperative relationships that now exist between their agencies and NIPC. However, further work is needed to identify assets of national significance and coordinate with other similar federal activities. FBI Intelligence Investigations: Coordination Within Justice on Counterintelligence Criminal Matters Is Limited (GAO-01-780, July 16, 2001). Recommendations, p. 32. To facilitate better coordination of FBI foreign counterintelligence investigations meeting the Attorney General’s coordination criteria, we recommend that the Attorney General establish a policy and guidance clarifying his expectations regarding the FBI’s notification of the Criminal Division and the types of advice that the division should be allowed to provide the FBI in foreign counterintelligence investigations in which Foreign Intelligence Surveillance Act (FISA) tools are being used or their use is anticipated. Partially implemented. In an August 6, 2001, memorandum, the Deputy Attorney General outlined the responsibilities of the FBI, Criminal Division, and the Office of Intelligence Policy and Review (OIPR) regarding intelligence sharing in FISA cases and issued clarifications to the Attorney General’s 1995 coordination procedures. Specifically, these clarifications included defining “significant federal crime” to mean any federal felony and defining the term “reasonable indication” to be substantially lower than “probable cause.” The memorandum also requires notification to take place without delay. The only remaining open point, albeit a significant issue, is the type of advice that the Criminal Division is permitted to provide the FBI after it has been notified of a possible criminal violation. In this regard, in March 2002, the Attorney General signed revised proposed procedures for sharing and coordinating FISA investigations, including changes resulting from the USA Patriot Act of 2001. However, the procedures must be approved by the FISA Court, which recently rejected some of them as going too far in terms of loosening the barriers between criminal investigations and intelligence gathering. To improve coordination between the FBI and the Criminal Division by ensuring that investigations that indicate criminal violations are clearly identified and by institutionalizing mechanisms to ensure greater coordination, we recommend that the Attorney General direct that all FBI memorandums sent to OIPR summarizing investigations or seeking FISA renewals contain a section devoted explicitly to identifying any possible federal criminal violation meeting the Attorney General’s coordination criteria, and that those memorandums of investigation meeting the criteria for Criminal Division notification be timely coordinated with the division. Implemented. In an August 6, 2001, memorandum, the Deputy Attorney General directed the FBI to explicitly devote a section in its foreign counterintelligence case summary memorandums, which it sends to OIPR in connection with an initial FISA request or renewal, for identification of any possible federal criminal violations associated with the cases. OIPR is to make those memorandums available to the Criminal Division. The Deputy Attorney General’s memorandum also required that, when the notification standard is met, notification should be accomplished without delay.
To improve coordination between the FBI and the Criminal Division by ensuring that investigations that indicate a criminal violation are clearly identified and by institutionalizing mechanisms to ensure greater coordination, we recommend that the Attorney General direct the FBI Inspection Division, during its periodic inspections of foreign counterintelligence investigations at field offices, to review compliance with the requirement for case summary memorandums sent to OIPR to specifically address the identification of possible criminal violations. Moreover, where field office case summary memorandums identified reportable instances of possible federal crimes, the Inspection Division should assess whether the appropriate headquarters unit properly coordinated those foreign counterintelligence investigations with the Criminal Division. Implemented. In a July 18, 2001, memorandum to the Deputy Attorney General, the Assistant Director of the FBI’s Inspection Division stated that the division has established a Foreign Intelligence/Counterintelligence Audit that is to be completed during its on-site inspections at applicable FBI field offices. The audit, according to the Assistant Director, will determine whether significant criminal activity was indicated during intelligence investigations and, where such activity was identified, determine whether it was properly coordinated with FBI headquarters and Justice’s Criminal Division. To improve coordination between the FBI and the Criminal Division by ensuring that investigations that indicate criminal violations are clearly identified and by institutionalizing mechanisms to ensure greater coordination, we recommend that the Attorney General issue written policies and procedures establishing the roles and responsibilities of OIPR and the core group as mechanisms for ensuring compliance with the Attorney General’s coordination procedures. Implemented. On June 12, 2001, OIPR issued policy guidance to its staff on compliance with the Attorney General’s 1995 coordination procedures. The issuance of this policy partially implements the GAO recommendation. Later, on August 6, 2001, the Deputy Attorney General issued a memorandum to the Criminal Division, the FBI, and OIPR establishing the roles and responsibilities of the Core Group to resolve disputes arising from the Attorney General’s 1995 guidelines. Combating Terrorism: Actions Needed To Improve DOD Antiterrorism Program Implementation and Management (GAO-01-909, Sept. 19, 2001). Recommendations, pp. 26 and 27. To improve the implementation of the DOD antiterrorism program, we recommend that the Secretary of Defense direct the Assistant Secretary of Defense for Special Operations and Low-Intensity Conflict to identify those installations that serve a critical role in support of our national military strategy, and to ensure that they receive a higher headquarters vulnerability assessment regardless of the number of personnel assigned at the installations. Partially implemented. DOD is in the process of changing its antiterrorism standards. To improve the implementation of the DOD antiterrorism program, we recommend that the Secretary of Defense direct the Assistant Secretary of Defense for Special Operations and Low-Intensity Conflict to develop a strategy to complete higher headquarters vulnerability assessments at National Guard installations. Partially implemented. DOD’s primary action officer is working with the Army and Air National Guard to provide vulnerability assessments.
To improve the implementation of the DOD antiterrorism program, we recommend that the Secretary of Defense direct the Assistant Secretary of Defense for Special Operations and Low-Intensity Conflict to clarify the force protection standard requiring a criticality assessment at each installation to specifically describe the factors to be used in the assessment and how these evaluations should support antiterrorism resource priority decisions. Partially implemented. DOD is in the process of updating its antiterrorism handbook. To improve the implementation of the DOD antiterrorism program, we recommend that the Secretary of Defense direct the Assistant Secretary of Defense for Special Operations and Low-Intensity Conflict to expand the threat assessment methodology to increase awareness of the consequences of changing business practices at installations that may create workplace violence situations or new opportunities for individuals not affiliated with DOD to gain access to installations. Implemented. DOD has reviewed its threat methodology to ensure that no threat indicators are ignored or overlooked. To improve the implementation of the DOD antiterrorism program, we recommend that the Secretary of Defense direct the Assistant Secretary of Defense for Special Operations and Low-Intensity Conflict to require each installation commander to form a threat working group and personally and actively engage state, local, and federal law enforcement officials. These working groups should hold periodic meetings, prepare records of their discussions, and provide threat information to installation commanders regularly. Partially implemented. DOD is in the process of updating its antiterrorism handbook. Partially implemented. DOD is planning to issue a management plan that includes the elements of GAO’s recommendation: a strategic plan that defines long-term antiterrorism goals, approaches to achieve the goals, and key factors that might significantly affect achieving the goals; and an implementation plan that describes performance goals that are objective, quantifiable, and measurable; resources to achieve the goals; performance indicators to measure outputs; and an evaluation plan to compare program results with established goals and to identify actions needed to address any unmet goals. Combating Terrorism: Selected Challenges and Related Recommendations (GAO-01-822, Sept. 20, 2001). Recommendations, pp. 41, 42, 57, 86, 87, 104, and 128. We recommend that the President, in conjunction with the Vice President’s efforts, appoint a single focal point that has the responsibility and authority for all critical leadership and coordination functions to combat terrorism. Implemented. Through Executive Order (EO) 13228, the President established an Office of Homeland Security (OHS) to develop and coordinate the implementation of a comprehensive national strategy to secure the United States from terrorist threats or attacks. The focal point should be in the Executive Office of the President, outside individual agencies, and encompass activities to include prevention, crisis management, and consequence management. Implemented. EO 13228 establishes OHS within the Executive Office of the President. OHS functions include efforts to detect, prepare for, prevent, protect against, respond to, and recover from terrorist attacks within the United States. The focal point should oversee a national-level authoritative threat and risk assessment on the potential use of WMD by terrorists on U.S. soil. Such assessments should be updated regularly.
Partially implemented. EO 13228 states that OHS shall identify priorities and coordinate efforts for collection and analysis of information within the United States regarding threats of terrorism against the United States and activities of terrorists or terrorist groups within the United States. OHS shall identify, in coordination with NSC, priorities for collection of intelligence outside the United States regarding threats of terrorism within the United States. EO 13228 does not address risk assessments. Implemented. EO 13228 states that OHS will develop a comprehensive national strategy to secure the United States from terrorist threats or attacks. The National Strategy for Homeland Security was issued in July 2002. The national strategy should include (1) desired outcomes that can be measured and are consistent with the Results Act, (2) state and local government input to better define their roles in combating terrorism, and (3) research and development priorities and needs in order to facilitate interagency coordination, decrease duplication, and leverage monetary resources. Partially implemented. (1) The National Strategy for Homeland Security, while not including measurable outcomes, calls for their development. (2) OHS worked with state and local governments to develop the national strategy. (3) The National Strategy for Homeland Security includes a discussion of research and development. The focal point should coordinate implementation of the national strategy among the various federal agencies. This would entail reviewing agency and interagency programs to ensure that they are being implemented in accordance with the national strategy and do not constitute duplication of effort. Partially implemented. EO 13228 directs OHS to coordinate the implementation of a comprehensive national strategy to secure the United States from terrorist threats or attacks. OHS shall work with, among others, federal agencies to ensure the adequacy of the national strategy for detecting, preparing for, preventing, protecting against, responding to, and recovering from terrorist attacks within the United States and shall periodically review and coordinate revisions to that strategy as necessary. The National Strategy for Homeland Security was issued in July 2002. Given the recent publication of the plan, it is too early to determine the OHS role in coordinating its implementation. The focal point should analyze and prioritize governmentwide budgets and spending to combat terrorism to eliminate gaps and duplication of effort. The focal point’s role will be to provide advice or to certify that the budgets are consistent with the national strategy, not to make final budget decisions. Implemented. EO 13228 states OHS shall work with OMB and agencies to identify homeland security programs, and shall review and provide advice to OMB and departments and agencies for such programs. Per EO 13228, OHS shall certify that the funding levels are necessary and appropriate for the homeland security-related activities of the executive branch. The focal point should coordinate the nation’s strategy for combating terrorism with efforts to prevent, detect, and respond to computer-based attacks on critical infrastructures. We do not see the focal point for combating terrorism also having responsibility for protecting computer-based infrastructures because the threats are broader than terrorism and such programs are more closely associated with traditional information security activities. 
Nonetheless, there should be close coordination between the two areas. Implemented. Per EO 13228, OHS shall coordinate efforts to protect the United States and its critical infrastructure from the consequences of terrorist attacks. In performing this function, the office shall work with federal, state, and local agencies, and private entities as appropriate to, among other things, coordinate efforts to protect critical public and privately owned information systems within the United States from terrorist attacks. In addition, the President created a Special Advisor for Cyberspace Security and appointed him as Chair of the President’s Critical Infrastructure Protection Board. This Chair reports to both OHS and NSC. The focal point should be established by legislation to provide it with legitimacy and authority, and its head should be appointed by the President with the advice and consent of the U.S. Senate. This would provide accountability to both the President and Congress. Also, it would provide continuity across administrations. Not implemented. However, there have been bills before Congress that would legislatively create a central focal point (e.g., OHS), making its director subject to appointment with the advice and consent of the U.S. Senate. The focal point should be adequately staffed to carry out its duties for planning and oversight across the federal government. Partially implemented. EO 13228 has provisions for OHS to hire staff, and for other federal departments to detail their staff to OHS. Given the relative newness of OHS, it is too early to determine whether staff levels are adequate. The focal point should develop a formal process to capture and evaluate interagency lessons learned from major interagency and intergovernmental federal exercises to combat terrorism. The focal point should analyze interagency lessons learned and task individual agencies to take corrective actions as appropriate. Partially implemented. Per EO 13228, OHS shall coordinate domestic exercises and simulations designed to assess and practice systems that would be called upon to respond to a terrorist threat or attack within the United States and coordinate programs and activities for training. OHS shall also ensure that such programs and activities are regularly evaluated under appropriate standards and that resources are allocated to improving and sustaining preparedness based on such evaluations. Given the relative newness of OHS, it is too early to determine how it has implemented this responsibility. To help support a national strategy, we recommend that the Attorney General direct the Director of the FBI to work with appropriate agencies across government to complete ongoing national-level threat assessments regarding terrorist use of WMD. Partially implemented. The Department of Justice and the FBI agreed to this recommendation. According to the FBI, it is currently working on a comprehensive national-level assessment of the terrorist threat to the U.S. homeland. The FBI said that this will include an evaluation of the chemical and biological weapons most likely to be used by terrorists and a comprehensive analysis of the risks of terrorists using other WMD. The FBI estimates the assessment will be completed in November 2002. 
To guide federal efforts in combating domestic terrorism, we recommend that the Attorney General use the Five-Year Interagency Counterterrorism and Technology Crime Plan and similar plans of other agencies as a basis for developing a national strategy by including (1) desired outcomes that can be measured and that are consistent with the Results Act and (2) state and local government input to better define their roles in combating terrorism. Partially implemented. The Department of Justice asserted that the Five-Year Plan included desired outcomes. We disagreed with the department and believe that what it cited as outcomes are outputs—agency activities rather than results the federal government is trying to achieve. The National Strategy for Homeland Security, issued in July 2002, supersedes the Attorney General’s Five-Year Plan as the interagency plan for combating terrorism domestically. This strategy does not include measurable outcomes, but calls for their development. To improve readiness in consequence management, we recommend that the Director of FEMA play a larger role in managing federal exercises to combat terrorism. As part of this, FEMA should seek a formal role as a cochair of the Interagency Working Group on Exercises and help to plan and conduct major interagency counterterrorist exercises to ensure that consequence management is adequately addressed. FEMA agreed with the recommendation. GAO is working with FEMA to determine the status of implementation. In June 2002, the President proposed that a new Department of Homeland Security take the lead for developing and conducting federal exercises to combat terrorism. To ensure that agencies benefit fully from exercises in which they participate, we recommend that the Secretaries of Agriculture, Defense, Energy, Health and Human Services, and Veterans Affairs; the Directors of the Bureau of Alcohol, Tobacco, and Firearms, FEMA, the FBI, and the U.S. Secret Service; the Administrator of the Environmental Protection Agency; and the Commandant of the U.S. Coast Guard require their agencies to prepare after-action reports or similar evaluations for all exercises they lead and for all field exercises in which they participate. Partially implemented. Several of the agencies agreed with this recommendation and cited steps they were taking to ensure that after-action reports or similar evaluations are completed as appropriate for exercises to combat terrorism. For example, DOD has used its Joint Uniform Lessons Learned System to document observations and lessons learned during exercises, including interagency exercises to combat terrorism. Other agencies taking steps to improve their evaluations of exercises include the Department of Energy and the FBI. To reduce duplication and leverage resources, we recommend that the Assistant to the President for Science and Technology complete efforts to develop a strategic plan for research and development to combat terrorism, coordinating this with federal agencies and state and local authorities. Partially implemented. The National Strategy for Homeland Security includes a chapter on science and technology, which includes an initiative to coordinate research and development of the homeland security apparatus. The proposed Department of Homeland Security, working with the White House and other federal departments, would set the overall direction for homeland security research and development. The proposed department would also establish a network of national laboratories for homeland security.
Given that the department is only a proposal at this time, it is too early to determine how it might implement our recommendation. To eliminate overlapping assistance programs and to provide a single liaison for state and local officials, we recommend that the President, working closely with Congress, consolidate the activities of the FBI’s National Domestic Preparedness Office and the Department of Justice’s Office for State and Local Domestic Preparedness Support under FEMA. Partially implemented. In June 2002, the President proposed that a new Department of Homeland Security take the lead for federal programs to assist state and local governments. Given that the department is only a proposal at this time, it is too early to determine whether these offices and their functions have been successfully consolidated. To clarify the roles and missions of specialized National Guard response teams in a terrorist incident involving WMD, we recommend that the Secretary of Defense suspend the establishment of any additional National Guard Weapons of Mass Destruction Civil Support Teams until DOD has completed its coordination of the teams’ roles and missions with the FBI. We also recommend that the Secretary of Defense reach a written agreement with the Director of the FBI that clarifies the roles of the teams in relation to the FBI. Partially implemented. Subsequent to our earlier report on these teams, and a report by the DOD Inspector General, which found some similar problems, DOD agreed to review the National Guard teams and work with other agencies to clarify their roles in responding to terrorist incidents. In September 2001, DOD restricted the number of teams to 32. Not implemented. The President’s Critical Infrastructure Protection Board released a draft strategy on September 18, 2002, for comment. The draft does not specify roles and responsibilities or performance measures for which entities can be held accountable. However, the President’s Critical Infrastructure Protection Board plans to periodically update the strategy as it evolves. The draft also states that other groups have developed strategies related to the portions of cyberspace they own or operate. Further, the President’s national strategy for homeland security, issued in July 2002, states that a comprehensive national infrastructure plan will be issued in the future. We believe the federal government’s cyber-security strategy should be linked to the national strategy to combat terrorism. However, the two areas are different in that the threats to computer-based infrastructures are broader than terrorism and programs to protect them are more closely associated with traditional information security activities. Regarding the link with efforts to combat terrorism, the draft strategy states that it supports both the National Strategy for Homeland Security and the National Security Strategy of the United States. Homeland Security: Key Elements to Unify Efforts Are Underway but Uncertainty Remains (GAO-02-610, June 7, 2002). Recommendations, p. 20. We recommend that the President direct OHS to (1) develop a comprehensive, governmentwide definition of homeland security, and (2) include the definition in the forthcoming national strategy. Implemented. In July 2002, OHS published the National Strategy for Homeland Security. In this document, there is a detailed definition of homeland security. Nonproliferation R&D: NNSA’s Program Develops Successful Technologies, but Project Management Can Be Strengthened (GAO-02-904, Aug.
23, 2002). Recommendations, pp. 20-21. Partially implemented. NNSA agreed to the recommendation and stated that it will improve coordination with other agencies conducting research and development. In addition, coordination may be improved if two of the program’s divisions are moved to a new Department of Homeland Security, as proposed by the President. September 11: Interim Report on the Response of Charities. GAO-02-1037. Washington, D.C.: September 3, 2002. National Preparedness: Technology and Information Sharing Challenges. GAO-02-1048R. Washington, D.C.: August 30, 2002. Homeland Security: Effective Intergovernmental Coordination Is Key to Success. GAO-02-1013T. Washington, D.C.: August 23, 2002. Homeland Security: Effective Intergovernmental Coordination Is Key to Success. GAO-02-1012T. Washington, D.C.: August 22, 2002. Homeland Security: Effective Intergovernmental Coordination Is Key to Success. GAO-02-1011T. Washington, D.C.: August 20, 2002. Port Security: Nation Faces Formidable Challenges in Making New Initiatives Successful. GAO-02-993T. Washington, D.C.: August 5, 2002. Chemical Safety: Emergency Response Community Views on the Adequacy of Federally Required Chemical Information. GAO-02-799. Washington, D.C.: July 31, 2002. Aviation Security: Transportation Security Administration Faces Immediate and Long-Term Challenges. GAO-02-971T. Washington, D.C.: July 25, 2002. Critical Infrastructure Protection: Significant Challenges Need to Be Addressed. GAO-02-961T. Washington, D.C.: July 24, 2002. Homeland Security: Critical Design and Implementation Issues. GAO-02-957T. Washington, D.C.: July 17, 2002. Homeland Security: New Department Could Improve Coordination but Transferring Control of Certain Public Health Programs Raises Concerns. GAO-02-954T. Washington, D.C.: July 16, 2002. Critical Infrastructure Protection: Federal Efforts Require a More Coordinated and Comprehensive Approach to Protecting Information Systems. GAO-02-474. Washington, D.C.: July 15, 2002. Critical Infrastructure Protection: Significant Homeland Security Challenges Need to Be Addressed. GAO-02-918T. Washington, D.C.: July 9, 2002. Homeland Security: New Department Could Improve Biomedical R&D Coordination but May Disrupt Dual-Purpose Efforts. GAO-02-924T. Washington, D.C.: July 9, 2002. Homeland Security: Title III of the Homeland Security Act of 2002. GAO-02-927T. Washington, D.C.: July 9, 2002. Homeland Security: Intergovernmental Coordination and Partnership Will Be Critical to Success. GAO-02-901T. Washington, D.C.: July 3, 2002. Homeland Security: New Department Could Improve Coordination but May Complicate Priority Setting. GAO-02-893T. Washington, D.C.: June 28, 2002. Homeland Security: New Department Could Improve Coordination but May Complicate Public Health Priority Setting. GAO-02-883T. Washington, D.C.: June 25, 2002. Homeland Security: Proposal for Cabinet Agency Has Merit, But Implementation Will Be Pivotal to Success. GAO-02-886T. Washington, D.C.: June 25, 2002. FBI Reorganization: Initial Steps Encouraging but Broad Transformation Needed. GAO-02-865T. Washington, D.C.: June 21, 2002. Homeland Security: Key Elements to Unify Efforts Are Underway but Uncertainty Remains. GAO-02-610. Washington, D.C.: June 7, 2002. National Preparedness: Integrating New and Existing Technology and Information Sharing into an Effective Homeland Security Strategy. GAO-02-811T. Washington, D.C.: June 7, 2002.
Review of Studies of the Economic Impact of the September 11, 2001, Terrorist Attacks on the World Trade Center. GAO-02-700R. Washington, D.C.: May 29, 2002. Homeland Security: Integration of Federal, State, Local, and Private Sector Efforts Is Critical to an Effective National Strategy for Homeland Security. GAO-02-621T. Washington, D.C.: April 11, 2002. Combating Terrorism: Enhancing Partnerships Through a National Preparedness Strategy. GAO-02-549T. Washington, D.C.: March 28, 2002. Homeland Security: Progress Made, More Direction and Partnership Sought. GAO-02-490T. Washington, D.C.: March 12, 2002. Homeland Security: Challenges and Strategies in Addressing Short- and Long-Term National Needs. GAO-02-160T. Washington, D.C.: November 7, 2001. Homeland Security: A Risk Management Approach Can Guide Preparedness Efforts. GAO-02-208T. Washington, D.C.: October 31, 2001. Homeland Security: Need to Consider VA’s Role in Strengthening Federal Preparedness. GAO-02-145T. Washington, D.C.: October 15, 2001. Homeland Security: Key Elements of a Risk Management Approach. GAO-02-150T. Washington, D.C.: October 12, 2001. Homeland Security: A Framework for Addressing the Nation’s Issues. GAO-01-1158T. Washington, D.C.: September 21, 2001. Combating Terrorism Chemical Weapons: Lessons Learned Program Generally Effective but Could Be Improved and Expanded. GAO-02-890. Washington, D.C.: September 10, 2002. Combating Terrorism: Department of State Programs to Combat Terrorism Abroad. GAO-02-1021. Washington, D.C.: September 6, 2002. Export Controls: Department of Commerce Controls over Transfers of Technology to Foreign Nationals Need Improvement. GAO-02-972. Washington, D.C.: September 6, 2002. Nonproliferation R&D: NNSA's Program Develops Successful Technologies, but Project Management Can Be Strengthened. GAO-02-904. Washington, D.C.: August 23, 2002. Diffuse Security Threats: USPS Air Filtration Systems Need More Testing and Cost Benefit Analysis Before Implementation. GAO-02-838. Washington, D.C.: August 22, 2002. Nuclear Nonproliferation: U.S. Efforts to Combat Nuclear Smuggling. GAO-02-989T. Washington, D.C.: July 30, 2002. Combating Terrorism: Preliminary Observations on Weaknesses in Force Protection for DOD Deployments Through Domestic Seaports. GAO-02-955TNI. Washington, D.C.: July 23, 2002. Diffuse Security Threats: Technologies for Mail Sanitization Exist, but Challenges Remain. GAO-02-365. Washington, D.C.: April 23, 2002. Combating Terrorism: Intergovernmental Cooperation in the Development of a National Strategy to Enhance State and Local Preparedness. GAO-02-550T. Washington, D.C.: April 2, 2002. Combating Terrorism: Enhancing Partnerships Through a National Preparedness Strategy. GAO-02-549T. Washington, D.C.: March 28, 2002. Combating Terrorism: Critical Components of a National Strategy to Enhance State and Local Preparedness. GAO-02-548T. Washington, D.C.: March 25, 2002. Combating Terrorism: Intergovernmental Partnership in a National Strategy to Enhance State and Local Preparedness. GAO-02-547T. Washington, D.C.: March 22, 2002. Combating Terrorism: Key Aspects of a National Strategy to Enhance State and Local Preparedness. GAO-02-473T. Washington, D.C.: March 1, 2002. Combating Terrorism: Considerations for Investing Resources in Chemical and Biological Preparedness. GAO-01-162T. Washington, D.C.: October 17, 2001. Combating Terrorism: Selected Challenges and Related Recommendations. GAO-01-822. Washington, D.C.: September 20, 2001.
Combating Terrorism: Actions Needed to Improve DOD’s Antiterrorism Program Implementation and Management. GAO-01-909. Washington, D.C.: September 19, 2001. Combating Terrorism: Comments on H.R. 525 to Create a President’s Council on Domestic Preparedness. GAO-01-555T. Washington, D.C.: May 9, 2001. Combating Terrorism: Observations on Options to Improve the Federal Response. GAO-01-660T. Washington, D.C.: April 24, 2001. Combating Terrorism: Comments on Counterterrorism Leadership and National Strategy. GAO-01-556T. Washington, D.C.: March 27, 2001. Combating Terrorism: FEMA Continues to Make Progress in Coordinating Preparedness and Response. GAO-01-15. Washington, D.C.: March 20, 2001. Combating Terrorism: Federal Response Teams Provide Varied Capabilities; Opportunities Remain to Improve Coordination. GAO-01-14. Washington, D.C.: November 30, 2000. Combating Terrorism: Need to Eliminate Duplicate Federal Weapons of Mass Destruction Training. GAO/NSIAD-00-64. Washington, D.C.: March 21, 2000. Combating Terrorism: Observations on the Threat of Chemical and Biological Terrorism. GAO/T-NSIAD-00-50. Washington, D.C.: October 20, 1999. Combating Terrorism: Need for Comprehensive Threat and Risk Assessments of Chemical and Biological Attack. GAO/NSIAD-99-163. Washington, D.C.: September 7, 1999. Combating Terrorism: Observations on Growth in Federal Programs. GAO/T-NSIAD-99-181. Washington, D.C.: June 9, 1999. Combating Terrorism: Analysis of Potential Emergency Response Equipment and Sustainment Costs. GAO/NSIAD-99-151. Washington, D.C.: June 9, 1999. Combating Terrorism: Use of National Guard Response Teams Is Unclear. GAO/NSIAD-99-110. Washington, D.C.: May 21, 1999. Combating Terrorism: Observations on Federal Spending to Combat Terrorism. GAO/T-NSIAD/GGD-99-107. Washington, D.C.: March 11, 1999. Combating Terrorism: Opportunities to Improve Domestic Preparedness Program Focus and Efficiency. GAO/NSIAD-99-3. Washington, D.C.: November 12, 1998. Combating Terrorism: Observations on the Nunn-Lugar-Domenici Domestic Preparedness Program. GAO/T-NSIAD-99-16. Washington, D.C.: October 2, 1998. Combating Terrorism: Threat and Risk Assessments Can Help Prioritize and Target Program Investments. GAO/NSIAD-98-74. Washington, D.C.: April 9, 1998. Combating Terrorism: Spending on Governmentwide Programs Requires Better Management and Coordination. GAO/NSIAD-98-39. Washington, D.C.: December 1, 1997. Public Health: Maintaining an Adequate Blood Supply Is Key to Emergency Preparedness. GAO-02-1095T. Washington, D.C.: September 10, 2002. Homeland Security: New Department Could Improve Coordination But May Complicate Public Health Priority Setting. GAO-02-883T. Washington, D.C.: June 25, 2002. Bioterrorism: The Centers for Disease Control and Prevention’s Role in Public Health Protection. GAO-02-235T. Washington, D.C.: November 15, 2001. Bioterrorism: Review of Public Health and Medical Preparedness. GAO-02-149T. Washington, D.C.: October 10, 2001. Bioterrorism: Public Health and Medical Preparedness. GAO-02-141T. Washington, D.C.: October 10, 2001. Bioterrorism: Coordination and Preparedness. GAO-02-129T. Washington, D.C.: October 5, 2001. Bioterrorism: Federal Research and Preparedness Activities. GAO-01-915. Washington, D.C.: September 28, 2001. Chemical and Biological Defense: Improved Risk Assessments and Inventory Management Are Needed. GAO-01-667. Washington, D.C.: September 28, 2001. West Nile Virus Outbreak: Lessons for Public Health Preparedness. GAO/HEHS-00-180.
Washington, D.C.: September 11, 2000. Need for Comprehensive Threat and Risk Assessments of Chemical and Biological Attacks. GAO/NSIAD-99-163. Washington, D.C.: September 7, 1999. Chemical and Biological Defense: Program Planning and Evaluation Should Follow Results Act Framework. GAO/NSIAD-99-159. Washington, D.C.: August 16, 1999. Combating Terrorism: Observations on Biological Terrorism and Public Health Initiatives. GAO/T-NSIAD-99-112. Washington, D.C.: March 16, 1999. Disaster Assistance: Improvement Needed in Disaster Declaration Criteria and Eligibility Assurance Procedures. GAO-01-837. Washington, D.C.: August 31, 2001. FEMA and Army Must Be Proactive in Preparing States for Emergencies. GAO-01-850. Washington, D.C.: August 13, 2001. Federal Emergency Management Agency: Status of Achieving Key Outcomes and Addressing Major Management Challenges. GAO-01-832. Washington, D.C.: July 9, 2001. Performance Budgeting: Opportunities and Challenges. GAO-02-1106T. Washington, D.C.: September 19, 2002. Electronic Government: Proposal Addresses Critical Challenges. GAO-02-1083T. Washington, D.C.: September 18, 2002. Results-Oriented Cultures: Insights for U.S. Agencies from Other Countries' Performance Management Initiatives. GAO-02-862. Washington, D.C.: August 2, 2002. Acquisition Workforce: Agencies Need to Better Define and Track the Training of Their Employees. GAO-02-737. Washington, D.C.: July 29, 2002. Managing for Results: Using Strategic Human Capital Management to Drive Transformational Change. GAO-02-940T. Washington, D.C.: July 15, 2002. Coast Guard: Budget and Management Challenges for 2003 and Beyond. GAO-02-538T. Washington, D.C.: March 19, 2002. A Model of Strategic Human Capital Management. GAO-02-373SP. Washington, D.C.: March 15, 2002. Budget Issues: Long-Term Fiscal Challenges. GAO-02-467T. Washington, D.C.: February 27, 2002. Managing for Results: Progress in Linking Performance Plans with Budget and Financial Statements. GAO-02-236. Washington, D.C.: January 4, 2002. Results-Oriented Budget Practices in Federal Agencies. GAO-01-1084SP. Washington, D.C.: August 2001. Managing for Results: Federal Managers’ Views on Key Management Issues Vary Widely across Agencies. GAO-01-0592. Washington, D.C.: May 2001. Determining Performance and Accountability Challenges and High Risks. GAO-01-159SP. Washington, D.C.: November 2000. Managing for Results: Using the Results Act to Address Mission Fragmentation and Program Overlap. GAO/AIMD-97-156. Washington, D.C.: August 29, 1997.
To protect the nation from terrorist attacks, homeland security stakeholders must more effectively work together to strengthen the process by which critical information can be shared, analyzed, integrated, and disseminated to help prevent or minimize terrorist activities. The success of a homeland security strategy relies on the ability of all levels of government and the private sector to communicate and cooperate effectively with one another. Activities that are hampered by organizational fragmentation, technological impediments, or ineffective collaboration blunt the nation's collective efforts to prevent or minimize terrorist acts.
The challenges facing the homeland security community require a commitment to focus on transformational strategies, including strengthening the risk management framework, refining the strategic and policy guidance structure to emphasize collaboration and integration among all relevant stakeholders, and bolstering the fundamental management foundation integral to effective public sector performance and accountability. Implementation of these strategies, along with effective oversight, will be necessary to institutionalize and integrate a long-term approach to sustainable and affordable homeland security.
The H-1B program was created by the Immigration Act of 1990, which amended the Immigration and Nationality Act (INA). The H-1B visa category was created to enable U.S. employers to hire temporary workers as needed in specialty occupations, that is, those that require theoretical and practical application of a body of highly specialized knowledge and a bachelor’s or higher degree (or its equivalent) in the specific occupation as a minimum requirement for entry into the occupation in the United States. The Immigration Act of 1990 capped the number of H-1B visas at 65,000 per fiscal year. Since the creation of the H-1B program, the number of H-1B visas permitted each fiscal year has changed several times. Congress passed the American Competitiveness and Workforce Improvement Act of 1998 (ACWIA), which increased the limit to 115,000 for fiscal years 1999 and 2000. In 2000, Congress passed the American Competitiveness in the Twenty-First Century Act, which raised the limit to 195,000 for fiscal year 2001 and maintained that level through fiscal years 2002 and 2003. The number of H-1B visas reverted to 65,000 thereafter. An H-1B visa generally is valid for 3 years of employment and is renewable for an additional 3 years. Filing an application with Labor’s Employment and Training Administration is the employer’s first step in hiring an H-1B worker, and Labor is responsible for either certifying or denying the employer’s application within 7 days (see app. II for the Labor Condition Application). By law, Labor may review applications only for omissions and obvious inaccuracies. Labor has no authority to verify the authenticity of the information. Employers must include on the application information such as their name, address, rate of pay and work location for the H-1B worker, and employer identification number. All employers are also required to make four attestations on the application as to: 1. Wages: The employer will pay nonimmigrants at least the local prevailing wage or the employer’s actual wage, whichever is higher; pay for nonproductive time caused by a decision made by the employer; and offer nonimmigrants benefits on the same basis as U.S. workers. 2. Working conditions: The employment of H-1B nonimmigrants will not adversely affect the working conditions of U.S. workers similarly employed. 3. Strike, lockout, or work stoppage: No strike or lockout exists in the occupational classification at the place of employment. 4. Notification: The employer has notified employees at the place of employment of the intent to employ H-1B workers. Certain employers are required to make three additional attestations on their application. These additional attestations apply to H-1B employers who (1) are H-1B dependent, that is, generally those whose workforce is composed of 15 percent or more H-1B nonimmigrant employees; or (2) are found by Labor to have either willfully failed to meet H-1B program requirements or misrepresented a material fact in an application during the previous 5 years. These employers are required to additionally attest that (1) they did not displace a U.S. worker within the period of 90 days before and 90 days after filing a petition for an H-1B worker; (2) they took good faith steps prior to filing the H-1B application to recruit U.S. workers and that they offered the job to a U.S.
applicant who was equally or better qualified than an H-1B worker; and (3) prior to placing the H-1B worker with another employer, they inquired and have no knowledge as to that employer’s action or intent to displace a U.S. worker within the 90 days before and 90 days after the placement of the H-1B worker with that employer. After Labor certifies an application, the employer must submit to USCIS an H-1B petition for each worker it wishes to hire (see app. III for the H-1B petition and supplement). On March 1, 2003, Homeland Security took over all functions and authorities of Justice’s Immigration and Naturalization Service under the Homeland Security Act of 2002 and the Homeland Security Reorganization Plan of November 25, 2002. Employers submit to Homeland Security the application, petition, and supporting documentation along with the appropriate fees. When Congress passed ACWIA in 1998, it imposed a filing fee of $500 on H-1B petitions. In 2000, Congress passed legislation that increased the filing fee to $1,000, and in 2004 the fee was increased again to $1,500. Along with the $1,500 filing fee, an employer must also submit a $500 fraud prevention and detection fee to Homeland Security. Information on the petition must indicate the wages that will be paid to the H-1B worker, the location of the position, and the worker’s qualifications. Through a process known as adjudication, Homeland Security reviews the documents for certain criteria, such as whether the petition is accompanied by a certified application from Labor, whether the employer is eligible to employ an H-1B worker, whether the position is a specialty occupation, and whether the prospective H-1B worker is qualified for the position. The Wage and Hour Division of Labor’s Employment Standards Administration performs investigative and enforcement functions to determine whether an employer has complied with its attestations on the application. An aggrieved individual or entity or certain non-aggrieved parties may file a complaint with Labor alleging that an employer violated a requirement of the H-1B program. To conduct an investigation, the Wage and Hour Division’s Administrator must have reasonable cause to believe that an employer did not comply with or misrepresented information on its application. Employers who violate any of the attestations on the application may be subject to civil money penalties or administrative remedies, such as paying back wages to H-1B workers, or debarment, which disqualifies an employer from participating in the H-1B program for a specified period of time. Employers, the person who filed the complaint, or other interested parties who disagree with the findings of the investigation then have 15 days to appeal by requesting an administrative hearing. The Office of Special Counsel for Immigration-Related Unfair Employment Practices (OSC) of the Department of Justice also has some enforcement responsibility. Under statutory authority created by the Immigration Reform and Control Act of 1986, OSC pursues charges of citizenship discrimination brought by U.S. workers who allege that an employer preferred to hire an H-1B worker. Figure 1 gives an overview of the H-1B visa process. The figure highlights the major steps that an employer takes when hiring an H-1B worker. Figure 2 highlights the process for investigations when a violation has been alleged. Labor’s H-1B authority is limited in scope, but the agency could improve its oversight of employers’ compliance with program requirements.
While Labor’s review of employers’ applications to hire H-1B workers is timely, it lacks quality assurance controls and may overlook some inaccuracies, such as applications containing employer identification numbers with invalid prefix codes. Labor’s Wage and Hour Division investigates complaints made against H-1B employers and keeps a database of employers with prior violations. Labor has the authority to conduct random investigations of some of these employers and began doing so in April 2006. Labor uses education as the primary method of promoting compliance with the H-1B program. In addition to conducting compliance assistance programs for employers, it also coordinates with the Department of State to provide H-1B workers with information about their employee rights. Labor has reduced the time it takes to certify employers’ applications by reviewing them electronically and subjecting them to data checks. Labor increased the percentage of applications reviewed within the required 7 days from 56 percent in fiscal year 2001 to 100 percent in fiscal year 2005. As of January 2006, all applications must be submitted electronically, and Labor’s website informs employers that it will certify or deny applications within minutes based on the information entered. Our analysis of Labor’s data found that of the 960,563 applications that Labor electronically reviewed from January 2002 through September 2005, 99.5 percent were certified, as shown in table 1. Not all applications continue through the process and result in H-1B visas—employers can withdraw their applications, petitions can be denied, or the visa may not be issued. Therefore, Labor officials told us the number of applications submitted represents employers’ interest in the H-1B program rather than the actual number of H-1B visas that are issued. In addition to agreeing to certain attestations on the application, employers must provide information about themselves, such as address and employer identification number, as well as information about each position they are seeking to fill, the time period they will need the worker, the prevailing wage and location for the position, the wage the worker will be paid, and the number of workers they want to hire. On the applications submitted electronically from January 2002 through September 2005, approximately 90 percent of employers requested only one worker, even though they are allowed to request multiple workers for the same occupation on an application. Approximately one-third of the applications were for workers in computer system analysis and programming occupations; the next most frequent request, for college and university education workers, accounted for 7 percent. About 30 percent of the positions were located in either California or New York. See appendix IV for more information on H-1B workers. Labor’s review of the application is limited by law to identifying omissions or obvious inaccuracies. Labor will not certify an application if the employer has failed to check all the necessary boxes or has not filled in required information, such as the wage rate, prevailing wage, or period of intended employment. Labor’s system will also deny an application if it contains obvious inaccuracies.
In addition to checks to ensure that data fields have the correct number of digits or are numerical when required, Labor has defined obvious inaccuracies as when an employer: files an application after being debarred, or disqualified, from participating in the H-1B program; submits an application more than 6 months before the beginning date of the period of employment; identifies multiple occupations on a single application; states a wage rate that is below the Fair Labor Standards Act minimum wage; identifies a wage rate that is below the prevailing wage on the application; and identifies a wage range where the bottom of the range is lower than the prevailing wage on the application. Despite these checks, Labor’s system does not consistently identify all obvious inaccuracies. For example, although the overall percentage was small, we found 3,229 applications that were certified even though the wage rate on the application was lower than the prevailing wage for that occupation in the specific location. Table 2 shows the wage rates and corresponding prevailing wages from a sample of applications Labor incorrectly certified because the wage rate was not equal to or greater than the prevailing wage. Additionally, Labor does not identify other errors that may be obvious. Specifically, Labor told us its system reviews an application’s employer identification number to ensure it has the correct number of digits and that the number does not appear on the list of employers who are ineligible to participate in the H-1B program. However, our analysis of Labor’s data found that Labor’s review may not identify numbers that are erroneous. For example, we found 993 certified applications with invalid employer identification number prefixes. While an invalid employer identification number could indicate a fraudulent application, Labor does not consider it an obvious inaccuracy. Officials told us that in other programs, such as the permanent employment program, Labor matches the application’s employer identification number to a database with valid employer identification numbers; however, they do not formally do this with H-1B applications because it is an attestation process, not a verification process. According to Labor, most of the process of reviewing applications is automated—the primary reason an analyst will review an application is if the employer’s prevailing wage source is not recognized by Labor’s database. The analyst reviews the source of the prevailing wage provided by the employer just to ensure the source meets Labor’s criteria, not to verify that the prevailing wage is correct. The employer may obtain a prevailing wage from a state workforce agency, a collective bargaining agreement, or another source, such as a private employment survey. If the employer uses a private employment survey and the analyst finds the survey meets Labor’s criteria—such as having been conducted in the last 2 years and using a statistically valid methodology to collect the data—the survey will be added to Labor’s database and used to approve future applications. Officials also told us that analysts review from three to five applications per day. In an effort to promote consistency in prevailing wage determinations, Labor has issued guidance for its state workforce agencies as well as for employers using surveys. 
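The completeness and obvious-inaccuracy checks described above, together with the additional employer identification number check that our findings suggest, are simple enough to express as automated rules. The following Python sketch is purely illustrative and is not a description of Labor's actual system; the field names, the set of valid employer identification number prefixes, and the 6-month day-count approximation are all hypothetical assumptions.

from datetime import timedelta

# Hypothetical reference set of issuable employer identification number (EIN)
# prefixes; the real reference data is not reproduced here.
VALID_EIN_PREFIXES = {"01", "02", "03", "04", "05", "06", "10", "11", "12"}

def review_application(app, prevailing_wage, debarred_eins):
    """Return the reasons an application would be denied; an empty list means certify.

    `app` is a dictionary with hypothetical field names: 'ein', 'wage_rate',
    'filing_date', 'employment_start', and 'occupations'.
    """
    reasons = []
    if app["ein"] in debarred_eins:
        reasons.append("employer is debarred from the H-1B program")
    if app["employment_start"] - app["filing_date"] > timedelta(days=183):
        # Approximates the rule against filing more than 6 months before the start date.
        reasons.append("filed more than 6 months before the period of employment")
    if len(app["occupations"]) > 1:
        reasons.append("multiple occupations identified on a single application")
    if app["wage_rate"] < prevailing_wage:
        reasons.append("wage rate is below the prevailing wage")
    if app["ein"][:2] not in VALID_EIN_PREFIXES:
        # The kind of low-cost check that could flag invalid EIN prefixes.
        reasons.append("employer identification number has an invalid prefix")
    return reasons

A prefix check of this kind, run against a current reference list, is one example of the low-cost, more stringent validation discussed later in this report; it is offered only as a sketch of the concept, not as Labor's procedure.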
Labor officials told us they always advise employers to obtain prevailing wage rates from the state workforce agency, but they also said that because the application is an attestation process, employers are responsible for doing the required analysis to determine the prevailing wage and maintaining the proper documentation to support the prevailing wage provided on the application. We and others have previously reported that Labor’s review of the labor condition application is limited and provides little assurance that employers are fulfilling their H-1B responsibilities. In 2000, given Labor’s limited review of the application, we suggested that Congress consider streamlining the H-1B approval process by requiring employers to submit the application directly to the Immigration and Naturalization Service, now USCIS. Similarly, in 2003, Labor’s Inspector General reported that either Labor should have authority to verify the accuracy of the application information or employers should file their applications directly with USCIS. While Labor officials told us they frequently review the application process to determine where improvements can be made, they rely on a system of data checks rather than a formal quality assurance process because of the factual nature of the form and the number of applications received. Additionally, they said that if they conducted a more in-depth review of the applications, they could exceed their legal authority and increase the processing time for applications. Officials also said the integrity of the H-1B program is ensured through enforcement and by the fact that staff actually review the paperwork when the employer submits it to USCIS. Labor enforces H-1B program requirements primarily by investigating complaints filed against employers. H-1B workers or certain others with knowledge of an employer’s practices who believe an employer has violated program requirements can file a complaint with Labor’s Wage and Hour Division, which received 1,026 complaints from fiscal year 2000 through fiscal year 2005. If the complaint meets certain criteria—such as being filed within 12 months of the violation—Labor said it notifies the employer of the investigation and requests information, including payroll records, prevailing wage determinations, and Labor’s certified applications. Labor also interviews the employer and workers, checks its violations database to determine if the employer has any previous violations, and assesses the employer’s compliance with all H-1B program requirements. Consequently, an investigation may result in more than one violation. Once the investigation is complete, Labor told us it meets with the employer to explain the findings and follows up with a letter to the employer listing violations and penalties, such as payment of back wages due to H-1B workers who were not paid the required wage, civil money penalties, debarment, or other administrative remedies (see table 3). While the number of H-1B complaints and violations increased from fiscal year 2000 through fiscal year 2005, the overall numbers remain small and may have been affected by changes to the program. As shown in table 4, our analysis of Labor’s data found the number of complaints increased from 117 in fiscal year 2000 to 173 in fiscal year 2005. The number of cases with violations more than doubled over the same period. The most common violation was not paying H-1B workers the required wage. With the increase in violations, the amount of penalties also increased.
In fiscal year 2000, 226 H-1B workers were found to be due back wages of $1.2 million; by fiscal year 2005, the number had increased to 604 workers with back wages due of $5.2 million. In addition to the payment of back wages, employers were required to pay civil money penalties of more than $400,000 over the same period. From fiscal year 2002 through fiscal year 2005, Labor requested over 50 debarment periods from Homeland Security for employers that committed certain violations—for example, willfully failing to pay an H-1B worker the required wage—that resulted in their being disqualified from participating in the H-1B program for a specified period of time. Labor officials told us it is difficult to attribute changes in complaints and violations to any specific cause because of multiple legislative changes to the program, such as the temporary increase in the number of H-1B workers allowed to enter the country and the additional attestations for certain employers that expired and then were reinstated. In addition to investigating complaints, Labor’s Wage and Hour Division has recently begun randomly investigating employers who have willfully violated the program’s requirements. Labor has had the statutory authority to conduct random investigations of these employers since 1998. Under this authority, Labor can subject employers on a case-by-case basis to random investigations up to 5 years from the date the employer first willfully violated the requirements of the H-1B program or willfully misrepresented a material fact in the labor condition application. Officials told us that the Wage and Hour Division did not schedule random H-1B investigations of willful violators until recently because, by definition, such employers are debarred from employing H-1B workers for a fixed number of years (they often go out of business due to the debarment); the number of such employers is very small (the total did not reach 50 nationwide until late in fiscal year 2005); and trained H-1B investigators have heavy caseloads. However, Labor said that it will initiate random investigations nationwide in fiscal year 2006. Labor has an existing database that it plans to use for targeting employers for investigations. The database contains information about employers who have previously violated their obligations under the H-1B program, including the types of violations and the penalties that were assessed. Although cases with willful violations represent a small number of all cases with violations, they have increased from 8 percent in fiscal year 2000 to 14 percent in fiscal year 2005. (See fig. 3.) Officials said that they now have 59 cases on which they can follow up to determine if the employer has committed another violation. Labor said that, in addition to initiating random investigations of willful violators nationwide, it will set up a system to track the data in its database and train its employees in fiscal year 2006. In April 2006, Labor sent a letter to its regional offices directing them each to initiate an investigation of at least one case prior to September 30, 2006. Labor uses education as the primary method of promoting employer compliance with the H-1B program. For example, Labor conducts compliance assistance programs, posts guidance on its website, and explains employers’ obligations under the law during complaint investigations. Labor held a total of six H-1B compliance assistance programs for H-1B employers from fiscal year 2000 through fiscal year 2005.
Typically, compliance assistance programs are conducted by Labor’s district offices based upon requests by employers, employer associations, or employee groups. For example, in fiscal year 2002, Labor gave two presentations in Massachusetts, attended by 290 participants, mostly attorneys. In addition, Labor presented at two continuing education events for attorneys in Los Angeles and New Jersey in fiscal year 2004. Labor also holds seminars in response to requests for compliance information from employer associations and discusses compliance with H-1B program requirements with companies that do not have pending lawsuits related to the H-1B program. Labor provides information to employers through its website, such as employer guidance and fact sheets that describe employer responsibilities and employee rights under the H-1B program. Some of the fact sheets have not been updated since the program was amended by the H-1B Visa Reform Act in 2004, but officials told us they have developed 26 new fact sheets that will be made available on the agency’s website this fiscal year. Labor also publicizes violation cases by issuing press releases on its website, particularly when it debars an employer. Labor officials told us that the purpose of the press releases is to show that there are consequences for not complying with the law. Labor takes the opportunity to explain employer obligations under the law during its investigations of complaints filed against H-1B employers. At the beginning of an investigation, an investigator sends the employer the regulations that pertain to the H-1B program and, during the investigation, highlights the law and regulations that are relevant to the case. The investigator also answers any questions the employer may have. At a final conference, Labor tells the employer which parts of the law the employer violated. Additionally, Labor always asks the employer it is investigating how it plans to come into compliance with the program. Labor is working with the Department of State to provide information cards to H-1B workers about their employment rights. Workers receive the information cards with their visas. Labor also distributes the cards to employers so that they are aware of an H-1B worker’s rights. The cards include information on employees’ rights regarding wages and benefits, illegal deductions, working conditions, records, and discrimination. (See fig. 4.) Homeland Security and Justice also provide information to employers in a variety of ways, such as publishing newsletters, responding to written inquiries from employers and their counsel, issuing informational bulletins, answering questions from employers who call, and providing information on their websites. Homeland Security publishes informational bulletins for employers seeking to hire foreign workers. The department also uses its website to advise the public of any changes in the H-1B program regarding filing fees or eligibility resulting from changes in the law. Justice engages in educational activities through public service announcements aimed at employers, workers, and the general public. The agency also trains employers and works with other federal agencies to coordinate education programs for employers. Justice also has a telephone intervention hotline for U.S. workers and H-1B employers to call when disputes arise. Justice uses the hotline to quickly address questions and to resolve problems. In addition, Justice answers e-mails, issues guidance, and provides information on its website.
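Two of the timeliness rules that frame Labor's enforcement activity, described earlier in this section, reduce to simple date arithmetic: a complaint generally must be filed within 12 months of the alleged violation, and a willful violator may be subject to random investigation for up to 5 years from the date of the first willful violation. The short Python sketch below illustrates both windows; the function names and the day-count approximations are our own assumptions, not the rules as Labor has codified them.

from datetime import date, timedelta

# Day-count approximations of the 12-month and 5-year windows; a production
# system would use calendar-aware arithmetic.
TWELVE_MONTHS = timedelta(days=365)
FIVE_YEARS = timedelta(days=5 * 365)

def complaint_is_timely(violation_date, complaint_date):
    # Complaints generally must be filed within 12 months of the alleged violation.
    return complaint_date - violation_date <= TWELVE_MONTHS

def eligible_for_random_investigation(first_willful_violation, as_of):
    # Willful violators remain subject to random investigation for up to 5 years
    # from the date of the first willful violation.
    return as_of - first_willful_violation <= FIVE_YEARS

print(complaint_is_timely(date(2004, 10, 1), date(2005, 7, 15)))               # True
print(eligible_for_random_investigation(date(2001, 6, 1), date(2005, 9, 30)))  # True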
Labor, Homeland Security, and Justice all have responsibilities under the H-1B program, but Labor and Homeland Security could better address the challenges they face in sharing information. After Labor certifies an application, Homeland Security’s USCIS reviews the information but cannot easily verify how many times the employer has used the application. Also, USCIS staff told us that, during their review, they may find evidence that employers are not meeting their H-1B obligations. However, current law precludes the Wage and Hour Division from using this information to initiate an investigation of the employer. In addition to Homeland Security, Labor also shares enforcement responsibilities with Justice, which pursues charges filed by U.S. workers who allege that they were not hired, or were displaced, so that an H-1B worker could be hired instead. From 2000 through 2005, Justice entered into six out-of-court settlements to remedy violations and assessed $7,200 in penalties. Homeland Security’s USCIS reviews Labor’s certified application as part of the adjudication process; however, it lacks the ability to easily verify whether employers have submitted petitions for more workers than originally requested on the application. Labor can certify applications for multiple workers and, therefore, employers can use one application in support of more than one petition. However, USCIS’ data system, CLAIMS 3, does not match each petition to its corresponding application because the system does not include a field for the unique case number Labor assigns each application. As a result, USCIS cannot easily verify how many times the employer has used a given application or which petitions were supported by which application, potentially allowing employers to use the application for more workers than they were certified to hire. USCIS staff told us that when employers do not provide the names of the other H-1B workers approved using the same certified application, the adjudicator may request that information from the employer. USCIS staff also told us that a letter is sent to the employer requesting the information and that the employer has approximately 12 weeks to respond. Consequently, a request for information requires staff time and slows down the adjudication process. While USCIS told us it has attempted to add Labor’s application case number to its database, it has not been able to do so because of the system’s memory limitations. USCIS told us it is currently transforming its information technology system; however, it will be several years before the new system is operational. During the process of reviewing employers’ petitions, USCIS may find evidence the employer is not meeting the requirements of the H-1B program, but current law precludes the Wage and Hour Division from using this information to initiate an investigation of the employer. For example, to extend an H-1B worker’s stay in the United States, an employer may submit a petition with the worker’s W-2 form as supporting documentation. USCIS staff told us they have reviewed petitions where the wage on the W-2 form was less than the wage the employer indicated it would pay on the original Labor application. In these cases, USCIS asks the employer to explain the wage discrepancy. If the employer has a legitimate explanation and documentation—for example, the worker was on some type of extended leave—the petition may be approved.
However, if the employer is unable to adequately explain the discrepancy, USCIS said it may deny the petition but generally does not report these employers to Labor for investigation. USCIS does not have a formal process for reporting the discrepancy to Labor. According to officials from Labor, it does not consider Homeland Security to be an aggrieved party; therefore, Labor would not initiate an investigation based on information received from, or a complaint filed by, Homeland Security. Labor and Homeland Security also coordinate when employers have committed violations resulting in debarment. After Labor’s Wage and Hour Division determines that an employer has committed a debarrable offense—such as willfully not paying an H-1B worker the required wage— Labor notifies USCIS, which in turn provides dates for the period of time that it will automatically deny petitions from the employer. Labor’s Wage and Hour Division then sends a letter informing the employer that it is ineligible to sponsor workers for the H-1B program for that period of time. A copy of the letter is sent to Labor’s Employment and Training Administration so that it will not certify any applications from the employer for the same period. Both Labor and USCIS officials said they are working to improve communication between the two agencies. For example, Labor, Homeland Security, and the State Department convened a multi-agency fraud working group, which met in March 2006, to discuss strategies for dealing with fraud in the H and L visa programs. Justice pursues charges filed by U.S. workers who allege that an H-1B worker was hired in their place. The Immigration and Nationality Act, as amended, gives U.S. workers the right to file a charge against an employer when they believe an employer preferred to hire an H-1B visa holder. When a charge has been filed, Justice’s Office of Special Counsel opens an investigation for 120 or 210 days, as determined by statute. Charges may be resolved through a complaint before an administrative law judge, an out of court settlement, or a dismissal for lack of reasonable cause to believe a violation has occurred. Between 2000 and 2005, no cases were heard in court by an administrative law judge. Most of the 101 investigations started by Justice from 2000 through 2005 were found to be incomplete, withdrawn, untimely, dismissed, or investigated without finding reasonable cause for a violation. If Justice finds that an employer hired an H-1B worker instead of a U.S. worker, Justice may assess penalties, impose debarment, or seek administrative remedies such as back wages. Justice may assess penalties on cases settled out of court if it finds that an employer hired an H-1B worker over a better-qualified U.S. worker. From 2000 through 2005, Justice found discriminatory conduct in 6 out of the 97 investigations closed. Justice assessed a total of $7,200 in penalties in three of the six cases, all in 2003. U.S. employers continue to request high numbers of foreign temporary workers under the H-1B nonimmigrant visa program. Labor, along with Homeland Security and Justice, must address the desires of U.S. employers for skilled foreign workers as well as ensure the program’s integrity and protect both domestic and foreign workers. Labor’s authority to review the Labor Condition Application is restricted to looking for completeness and obvious inaccuracies, but it could improve its oversight of employers’ compliance with program requirements. 
Additionally, USCIS may find information in the materials submitted by an H-1B employer that indicates the employer is not complying with program requirements. However, current law restricts Labor from using such evidence to initiate an investigation of the employer. USCIS also has an opportunity to improve its oversight of employers’ petitions to hire H-1B workers by matching information from its petition database with Labor’s application case numbers to detect whether employers are requesting more H-1B workers than they were originally certified to hire. As Congress deliberates changes to U.S. immigration policy, ensuring that employers are in compliance with the program’s requirements that protect both domestic and H-1B workers is essential. To increase employer compliance with the H-1B program and protect the rights of U.S. and H-1B workers, Congress should consider (1) eliminating the restriction on using application and petition information submitted by employers as the basis for initiating an investigation, and (2) directing Homeland Security to provide Labor with information received during the adjudication process that may indicate an employer is not fulfilling its H-1B responsibilities. To strengthen oversight of employers’ applications to hire H-1B workers, we recommend that Labor improve its procedures for checking completeness and obvious inaccuracies, including developing more stringent, cost-effective methods of checking for wage inaccuracies and invalid employer identification numbers. To ensure employers are complying with program requirements, we recommend that as USCIS transforms its information technology system, the Labor application case number be included in the new system, so that adjudicators are able to quickly and independently ensure that employers are not requesting more H-1B workers than were originally approved on their application to Labor. We provided a draft of this report to the Departments of Labor, Homeland Security, and Justice for their review and comments. Each agency provided technical comments, which we incorporated as appropriate. Justice did not have formal comments on our report. Homeland Security agreed with our recommendations and stated that USCIS intends to include Labor’s application case number in its new information technology system. Labor questioned whether our recommendation for more stringent measures is supported by the magnitude of the error rate that was found, as well as whether the benefits of instituting such measures would equal or exceed the added costs of implementing them. In addition, Labor said that Congress intentionally limited the scope of Labor’s application review in order to place the focus for achieving program integrity on USCIS. We believe that Labor is at risk of certifying H-1B applications that contain more errors than were found in the scope of our review. For example, we checked only for employer identification numbers with invalid prefix codes, and did not look for other combinations of invalid numbers or data. Therefore, we do not know the true magnitude of the error rate in the certification process. We continue to believe there are cost-effective methods that Labor could use to check the applications more stringently that would enhance the integrity of the H-1B process. We are sending copies of this report to the Secretary of Labor, the Secretary of Homeland Security, the Attorney General, relevant congressional committees, and others who are interested. 
Copies will also be made available to others upon request. The report will be available on GAO's website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or nilsens@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII.

To understand the H-1B certification, adjudication, and enforcement processes and the responsibilities of each agency involved, we hosted a joint meeting with officials from the Department of Labor, the Department of Homeland Security's U.S. Citizenship and Immigration Services (USCIS), and the Department of Justice. We also reviewed laws and regulations related to the H-1B program. To obtain information on the characteristics of employers who filed Labor Condition Applications (applications) and the positions they sought to fill with H-1B workers, we analyzed the Efile H-1B Disclosure Data from the Employment and Training Administration (ETA) of the Department of Labor. These data included all the applications filed electronically from January 2002 through September 2005. We analyzed the data from a total of 960,563 applications to determine (1) the number that had been certified or denied, (2) the employers who requested the most workers, (3) the most frequently requested occupation codes, (4) the locations of the H-1B positions, (5) the source of the prevailing wage used by employers, and (6) how many applications were certified with invalid employer identification number prefixes when compared with a list of valid prefix codes obtained from the Internal Revenue Service. We also analyzed how prevailing wages compared to actual wage rates. The H-1B Visa Reform Act, which was passed on December 8, 2004, requires employers to pay H-1B workers at least 100 percent of the prevailing wage for each specific occupation and location. Prior to the enactment of this law, Labor's regulations permitted employers to pay actual wages that were only 95 percent of the prevailing wage. Accordingly, to ensure we did not incorrectly identify any applications as erroneously certified during the time between the passage of the H-1B Visa Reform Act and Labor's implementation of the new 100 percent requirement, our analysis only identified those cases where the actual wage rate was less than 95 percent of the prevailing wage. Additionally, we interviewed officials from ETA regarding the application approval process, including the circumstances under which applications are reviewed by an analyst for discrepancies, how prevailing wage sources are determined to be legitimate, and the ETA resources that are used to process and review applications. We also accessed the online application system to determine when the employer would receive error notices when filling out the application. We conducted a data reliability assessment of the H-1B Disclosure Data by testing for completeness and accuracy, reviewing documentation, and interviewing knowledgeable officials. We found it to be sufficiently reliable for our purposes.

To analyze the number and type of H-1B complaints received by Labor's Wage and Hour Division (WHD) and the outcomes of the associated investigations, we received a data extract from WHD's Wage and Hour Investigative Support and Reporting Database (WHISARD).
For fiscal years 2000 through 2005, we analyzed the number of H-1B complaints, violations, and the penalties assessed, including the number of employees due back wages, the amount of back wages due, civil money penalties, the most common violation, and the trend in the number of willful violations as a percentage of all violations. We also interviewed WHD officials on the complaint and investigation process, the appeal process, educational outreach to improve employer compliance, and the WHD resources used to process and investigate complaints. We conducted a data reliability assessment of the WHISARD data by testing for completeness and accuracy, reviewing documentation, and interviewing knowledgeable officials. We found it to be sufficiently reliable for our purposes. To determine the number of employers who had been debarred, or disqualified from participating in the H-1B program for a specified period of time, we requested that WHD officials provide the number of times per fiscal year from 2000 through 2005 that they sent a letter to USCIS requesting a debarment period. We also requested that USCIS provide the number of request letters it had received from WHD.

To determine the number and type of H-1B petitions submitted by employers and adjudicated by the Department of Homeland Security's U.S. Citizenship and Immigration Services, we analyzed service center data from the Computer Linked Application Information Management System, Version 3.0 (CLAIMS 3) database from fiscal years 2000 through 2005. We analyzed (1) the number of petitions approved or denied; (2) the basis for the classification of the worker, such as whether the petition was for a new H-1B employee or for a continuation of a worker's stay; (3) the employer's requested action; (4) the educational level of the H-1B workers; (5) the number of H-1B workers requested on each petition; and (6) the occupation codes requested. Additionally, we conducted a data reliability assessment of selected variables by testing for completeness and accuracy, reviewing documentation, and interviewing knowledgeable officials. We reported on the variables that we found to be reliable enough for our purposes. To understand the policies and procedures of the program, we interviewed officials at USCIS headquarters. To understand the petition adjudication process, we conducted site visits at the USCIS Service Centers in Saint Albans, Vermont, and Laguna Niguel, California. According to USCIS, from October 2004 through December 2005 these service centers combined processed 63 percent of the H-1B petitions. To obtain context and facilitate our understanding of the electronic CLAIMS 3 data, we requested the opportunity to review a non-probability sample of 48 petition files representing a variety of H-1B adjudication processes. During our site visits, we reviewed those that were available.

To determine the type of violations and the process for investigations of U.S. worker displacement allegations, we interviewed Department of Justice officials. We analyzed a summary report provided by Justice of the number of employers investigated from 2000 through 2005 and the outcomes of those cases. To determine the number and outcomes of investigations, and the types and amounts of penalties assessed on employers, we obtained documentation from Justice.
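The wage and employer identification number checks described in this appendix can be expressed as two simple data screens. The following is a minimal sketch in Python using pandas; the file name, column names (status, wage_rate, prevailing_wage, employer_ein), and prefix list are hypothetical placeholders rather than the actual fields of the Efile H-1B Disclosure Data or the IRS list of valid prefix codes.

```python
# Minimal sketch of the two screening checks described in this appendix.
# All file, column, and prefix names are hypothetical placeholders.
import pandas as pd

VALID_EIN_PREFIXES = {"01", "02", "03"}  # placeholder; the analysis used the IRS list of valid prefixes

apps = pd.read_csv("h1b_applications.csv", dtype={"employer_ein": str})
certified = apps[apps["status"] == "CERTIFIED"]

# Certified applications whose stated wage rate is below 95 percent of the
# prevailing wage (the conservative threshold used in the analysis).
low_wage = certified[certified["wage_rate"] < 0.95 * certified["prevailing_wage"]]

# Certified applications whose employer identification number begins with a
# prefix that is not on the list of valid prefix codes.
bad_ein = certified[~certified["employer_ein"].str[:2].isin(VALID_EIN_PREFIXES)]

print(f"{len(low_wage)} certified applications below the 95 percent wage threshold")
print(f"{len(bad_ein)} certified applications with invalid EIN prefixes")
```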
The following tables provide additional information on analyses conducted on the application data from the Department of Labor's Efile H-1B Disclosure Database and the petition data from USCIS's Computer Linked Application Information Management System, Version 3.0.

Alicia Puente Cackley, Assistant Director; Gretta L. Goodwin, Senior Economist; Amy J. Anderson, Senior Analyst; and Pawnee Davis, Analyst, made significant contributions to all phases of this report. In addition, William J. Schneider, Intern, assisted with data collection and analysis; Sheila R. McCoy provided legal assistance; Luann M. Moy provided methodological assistance; Susan F. Baker, Cynthia L. Grant, Lynn M. Milan, and Melinda L. Cordero provided data analysis; and Rachael C. Valliere, Communications Analyst, assisted in report development.

Homeland Security: Better Management Practices Could Enhance DHS's Ability to Allocate Investigative Resources. GAO-06-462T. Washington, D.C.: March 28, 2006.
Immigration Benefits: Additional Controls and a Sanctions Strategy Could Enhance DHS's Ability to Control Benefit Fraud. GAO-06-259. Washington, D.C.: March 10, 2006.
Homeland Security: Visitor and Immigrant Status Program Operating, but Management Improvements Are Still Needed. GAO-06-318T. Washington, D.C.: January 25, 2006.
Immigration Benefits: Improvements Needed to Address Backlogs and Ensure Quality of Adjudications. GAO-06-20. Washington, D.C.: November 21, 2005.
Immigration Enforcement: Weaknesses Hinder Employment Verification and Worksite Enforcement Efforts. GAO-05-813. Washington, D.C.: August 31, 2005.
Department of Homeland Security, U.S. Citizenship and Immigration Services: Allocation of Additional H-1B Visas Created by the H-1B Visa Reform Act of 2004. GAO-05-705R. Washington, D.C.: May 18, 2005.
Homeland Security: Some Progress Made, but Many Challenges Remain on U.S. Visitor and Immigrant Status Indicator Technology Program. GAO-05-202. Washington, D.C.: February 23, 2005.
Alien Registration: Usefulness of a Nonimmigrant Alien Annual Address Reporting Requirement Is Questionable. GAO-05-204. Washington, D.C.: January 28, 2005.
Highlights of a GAO Forum: Workforce Challenges and Opportunities For the 21st Century: Changing Labor Force Dynamics and the Role of Government Policies. GAO-04-845SP. Washington, D.C.: June 1, 2004.
H-1B Foreign Workers: Better Tracking Needed to Help Determine H-1B Program's Effects on U.S. Workforce. GAO-03-883. Washington, D.C.: September 10, 2003.
Information Technology: Homeland Security Needs to Improve Entry Exit System Expenditure Planning. GAO-03-563. Washington, D.C.: June 9, 2003.
High-Skill Training: Grants from H-1B Visa Fees Meet Specific Workforce Needs, but at Varying Skill Levels. GAO-02-881. Washington, D.C.: September 20, 2002.
Immigration Benefits: Several Factors Impede Timeliness of Application Processing. GAO-01-488. Washington, D.C.: May 4, 2001.
H-1B Foreign Workers: Better Controls Needed to Help Employers and Protect Workers. GAO/HEHS-00-157. Washington, D.C.: September 7, 2000.

The H-1B visa program assists U.S. employers in temporarily filling certain occupations with highly-skilled foreign workers. There is considerable interest regarding how Labor, along with Homeland Security and Justice, is enforcing the requirements of the program. This report describes: (1) how Labor carries out its H-1B program responsibilities; and (2) how Labor works with other agencies involved in the H-1B program.
We interviewed officials and analyzed data from all three agencies. While Labor's H-1B authority is limited in scope, the agency could improve its oversight of employers' compliance with program requirements. Labor's review of employers' applications to hire H-1B workers is timely, but lacks quality assurance controls and may overlook some inaccuracies. From January 2002 through September 2005, Labor electronically reviewed more than 960,000 applications and certified almost all of them. About one-third of the applications were for workers in computer systems analysis and programming occupations. By statute, Labor's review of the applications is limited to searching for missing information or obvious inaccuracies and it does this through automated data checks. However, our analysis of Labor's data found certified applications with inaccurate information that could have been identified by more stringent checks. Although the overall percentage was small, we found 3,229 applications that were certified even though the wage rate on the application was lower than the prevailing wage for that occupation. Additionally, approximately 1,000 certified applications contained erroneous employer identification numbers, which raises questions about the validity of the application. In its enforcement efforts, Labor's Wage and Hour Division (WHD) investigates complaints made against H-1B employers. From fiscal year 2000 through fiscal year 2005, Labor reported an increase in the number of H-1B complaints and violations, and a corresponding increase in the number of employer penalties. In fiscal year 2000 Labor required employers to pay back wages totaling $1.2 million to 226 H-1B workers; by fiscal year 2005, back wage penalties had increased to $5.2 million for 604 workers. Program changes, such as a higher visa cap in some years, could have been a contributing factor. In April 2006, WHD began the process of randomly investigating willful violators of the program's requirements. Labor, Homeland Security, and Justice all have responsibilities under the H-1B program, but Labor and Homeland Security could better address the challenges they face in sharing information. Homeland Security reviews Labor's certified application but cannot easily verify whether employers submitted petitions for more workers than originally requested on the application because USCIS's database cannot match each petition to Labor's application case number. Also, during the process of reviewing petitions, staff may find evidence that employers are not meeting their H-1B obligations. For example, Homeland Security may find that a worker's income on the W-2 is less than the wage quoted on the original application. Homeland Security may deny the petition if an employer is unable to explain the discrepancy, but it does not have a formal process for reporting the discrepancy to Labor. Additionally, current law precludes the Wage and Hour Division from using this information to initiate an investigation of the employer. Labor also shares enforcement responsibilities with Justice, which pursues charges filed by U.S. workers who allege they were displaced by an H-1B worker. From 2000 through 2005, Justice found discriminatory conduct in 6 out of the 97 investigations closed and assessed $7,200 in penalties. |
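One way to make the information-sharing gap discussed above concrete: once USCIS's new system carries Labor's application case number on each petition record, detecting employers whose petitions exceed what Labor certified becomes a simple grouped comparison. The sketch below is purely illustrative and assumes hypothetical file and field names (case_number, workers_requested, workers_certified); it is not the CLAIMS 3 schema or an actual agency process.

```python
# Illustrative check: compare workers requested on petitions, grouped by Labor
# application case number, with the number of workers certified on that
# application. All file and field names are hypothetical placeholders.
import pandas as pd

applications = pd.read_csv("labor_applications.csv")  # one row per certified application
petitions = pd.read_csv("uscis_petitions.csv")        # one row per approved petition

requested = (petitions.groupby("case_number")["workers_requested"]
             .sum()
             .rename("workers_petitioned"))
merged = (applications.set_index("case_number")
          .join(requested)
          .fillna({"workers_petitioned": 0}))

over_used = merged[merged["workers_petitioned"] > merged["workers_certified"]]
print(f"{len(over_used)} applications supported petitions for more workers than were certified")
```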
The authority to appoint and set pay for special consultants and fellows was provided as part of the Public Health Service Act in 1944. Section 209(f) authorizes the employment of special consultants to assist and advise in the operation of the PHS. The PHS comprises most operating divisions within HHS—including the National Institutes of Health (NIH), the Food and Drug Administration (FDA), and the Centers for Disease Control and Prevention (CDC)—as well as some staff divisions within the Office of the Secretary. See figure 1 for HHS's organizational structure, including those operating divisions and main staff divisions considered to be within the PHS. Section 209(g) authorizes fellowships in the PHS for individual scientists who may be assigned for studies and investigations either in the United States or abroad. Sections 209(f) and (g) both authorize the establishment of regulations to further implement these authorities. The HHS Office of the Secretary develops agencywide policy and guidance, and operating divisions may set additional or supplemental policy as necessary.

In 2005, Congress provided EPA with the authority to use section 209 to make a limited number of appointments in its Office of Research and Development (ORD). Congress initially granted this authority to EPA for fiscal years 2006 through 2011, but it has since amended the authority twice, and EPA is currently permitted to employ up to 30 persons at any one time through fiscal year 2015. EPA issued regulations in 2006 implementing this authority, which closely follow HHS regulations.

HHS regulations for section 209(f) provide that special consultants may be appointed only when the PHS cannot obtain services through regular civil service appointments or under the compensation provisions of the Classification Act of 1949. The regulations further provide that rates of compensation for special consultants are to be set in accordance with criteria established by the Surgeon General. The Surgeon General is part of the Office of the Assistant Secretary for Health. HHS has used this authority, for example, to appoint doctors and others with expertise in specialty fields to initiate or provide assistance in conducting medical research and to set pay for those individuals at rates above those allowed under other federal government pay systems. HHS regulations covering section 209(g) provide that fellowships may be provided to secure the services of talented scientists for a limited duration (up to 5 years) for health-related research, studies, and investigations. The regulations further provide that the Secretary may authorize procedures to extend the term of fellowships, may authorize stipends for the fellows, and is responsible for establishing appointment procedures beyond those set forth in the regulations.

Some Title 42 employees earn pay within or exceeding pay levels found in the Executive Schedule. The Executive Schedule is a five-level, basic pay schedule applicable to the highest-ranking executive appointments in the federal government. Executive Schedule pay rates range from Executive Level V ($145,700 since 2010) to Executive Level I ($199,700 since 2010). Only HHS and EPA are authorized to use Title 42 hiring authority. By contrast, regular hiring authorities such as those under title 5 of the U.S. Code—commonly referred to as Title 5—may be used by any federal agency.
Pursuant to HHS and EPA policy, employees at HHS and EPA originally hired under Title 5 or other authorities may be converted to Title 42 in some circumstances. Under these policies, employees hired under Title 42 are eligible for performance bonuses, incentives, and other nonsalary payments made available to federal employees compensated under Title 5. Title 42 employees most frequently work within one of three operating divisions: NIH, FDA, and CDC.

NIH is the nation's medical research agency and comprises the Office of the Director and 27 institutes and centers, including the National Cancer Institute; National Institute on Aging; National Heart, Lung, and Blood Institute; and the National Center for Complementary and Alternative Medicine. Each institute and center has its own specific research agenda, often focused on a particular disease or body system. As the central office at NIH, the Office of the Director establishes NIH-specific policy and oversees the institutes and centers to ensure they operate in accordance with that policy. While most of its budget goes to extramural research personnel at more than 3,000 universities and research institutions, NIH also has intramural research laboratories on the NIH main campus in Bethesda, Maryland. The main campus is also home to the NIH Clinical Center, which is the largest hospital in the world dedicated entirely to clinical research.

FDA is responsible for, among other things, protecting the public health by assuring the safety, efficacy, and security of human and veterinary drugs, biological products, medical devices, the nation's food supply, cosmetics, and products that emit radiation. FDA is also responsible for regulating tobacco products.

CDC conducts activities such as identifying and defining preventable health problems and maintaining active surveillance of diseases; serving as the PHS lead agency in developing and implementing operational programs relating to environmental health problems; and conducting operational research aimed at developing and testing effective disease prevention, control, and health promotion programs.

EPA uses section 209(g) as the basis for hiring some scientists within ORD, the scientific research arm of EPA. ORD's work at EPA laboratories and research centers provides the science and technology to identify environmental hazards, assess risks to public health and ecosystems, and determine how best to control or prevent pollution. According to EPA documents and officials, EPA uses Title 42 to secure the services of experienced and talented scientists for renewable appointments where, because of the nature of the work and expertise needed, regular hiring authorities are impractical. EPA has not made appointments using section 209(f).

During 2010, HHS had 6,697 employees who were appointed under sections 209(f) or (g). All but 27 of these employees served at NIH, FDA, or CDC, while the remaining employees served in the Office of the Secretary or within other operating divisions, as shown in figure 2. The number of employees appointed under sections 209(f) and (g) increased overall at HHS by 25 percent from 2006 through 2010, as shown in table 1. Over that period, the number of Title 42 employees grew by 15 percent at NIH, by 54 percent at FDA, and by 81 percent at CDC, while declining by 48 percent at the Office of the Secretary and all other operating divisions. The increased use of Title 42 authority came during a period when HHS made recruiting and retaining mission-critical elements of its workforce a priority.
HHS’s 2007-2012 Strategic Plan included strategic objectives: (1) recruiting, developing, and retaining a competent health care workforce, and (2) strengthening the pool of qualified health and behavioral science researchers. HHS officials generally attributed the increases in Title 42 employees to the agency’s response to urgent public health matters and effects of the economic downturn on the private sector and academia, which, according to officials, made the agency more attractive to prospective or on-board employees. Specifically, according to HHS: The 15 percent increase from 2006 through 2010 at NIH can be attributed, in part, to the effects of the economic downturn on the biomedical research labor market. Officials told us that as extramural research funding available in the private sector and academia is shrinking, NIH is able to use Title 42 to more successfully recruit and retain biomedical investigators and clinical specialists. The spike in Title 42 appointments at FDA in 2008 and 2009 is a result of the Food and Drug Administration Amendments Act of 2007 and the Food Protection Plan, FDA’s strategy for protecting the nation’s food supply. Additionally, in 2008 FDA launched its first class of Commissioner’s Fellows (hired under section 209 (g) for up to a two year period) beginning with 50 fellows, another class of 50 in 2009, and a third class of 45 in 2010. At CDC, increased use of Title 42 was attributed to the urgency of certain programs such as the overseas Global AIDS Program and those under the Office of Public Health Preparedness and Response. For these programs, officials told us they needed employees with specialized scientific skills or training and experience and would not have been able to obtain them without Title 42. As discussed later, we were unable to determine which section authority—sections 209(f) or (g)—was used more often because HHS section authority data was not reliable for this purpose. As shown in table 2, NIH relies on Title 42 authority for a greater percentage of its total workforce than does FDA and CDC. In 2010, 25 percent of all NIH employees were Title 42 employees, while 10 percent of CDC employees and 6 percent of FDA employees were Title 42. NIH relied on Title 42 authority for a substantial portion—44 percent—of its total research and clinical practitioner workforce. Title 42 employees at HHS serve in a variety of functional areas, including scientific and medical research support and in senior, director-level leadership positions. Base salary ranges for Title 42 employees varied by operating division and occupation. In 2010, almost 60 percent of Title 42 employees at NIH served in one of five general occupations: staff scientist, research fellow, senior investigator, clinical research nurse, and clinical fellow. Table 3 describes some of the general responsibilities and duties, educational characteristics, and salary data for these occupations at NIH. At FDA and CDC, the most common occupation of Title 42 employees is a fellow. In 2010, 340 (39 percent) of FDA’s Title 42 employees were staff fellows. These positions are for promising research and regulatory review scientists. In general, staff fellows at FDA conduct or support research, provide technical direction and supervision to other researchers, publish scientific articles, and review contract and grant proposals designed to support their research projects. 
Staff fellows must have a doctoral degree in biomedical, behavioral, or related science and, according to FDA policy, total compensation may not exceed certain pay limits ($155,500 in 2010) unless the Director of Human Resources and Management and Services grants an exception. FDA staff fellows' base salary range in 2010 was approximately $42,000 to $224,000, with an average base salary of about $96,000 and a median salary of about $92,000. Three of the 340 staff fellows at FDA earned more than $155,500 in 2010. Of CDC's Title 42 employees in 2010, 687 (74 percent) were senior service fellows or associate service fellows appointed to study areas such as basic and applied research in medical, physical, biological, mathematical, social, biometric, epidemiological, behavioral, computer sciences, and other fields directly related to the mission of CDC. Senior service fellows must have a doctoral degree, and associate service fellows must have a master's degree. Senior service fellows had a base salary range in 2010 of approximately $49,000 to $155,500, with an average base salary of about $103,000 and a median salary of about $100,000. Associate service fellows had a base salary range of approximately $44,000 to $93,000, with an average base salary of about $69,000 and a median salary of about $71,000.

The average base salary for all HHS Title 42 employees in 2010 was about $116,000, and the median salary was about $101,000. More than one-fifth of all Title 42 employees at HHS, however, earned a base salary above Executive Level IV ($155,500 in 2010). Congress regularly refers to executive salary levels in order to express minimum or maximum levels of pay authorized for positions in the federal government. For example, Congress has imposed a cap of Executive Level IV on salary (i.e., basic pay) rates where pay is fixed by administrative action under 5 U.S.C. § 5373. In an effort related to this audit, we are issuing a legal opinion on whether there are any statutory caps on pay for consultants and fellows appointed under 42 U.S.C. §§ 209(f) or (g), including whether the cap under section 5373 applies. Table 4 shows the number of Title 42 employees whose base salary was within or above the various Executive Schedule levels in 2010.

HHS has converted a number of employees from positions under the General Schedule or other pay systems to positions under Title 42. Of the 1,183 new Title 42 appointees in 2010, 45—or about 4 percent—were current HHS employees who were converted to Title 42 positions. Thirty of these conversions occurred at NIH. We also found that employees converted to Title 42 from other pay systems generally earned higher compensation than in their previous positions. Employees converted in 2010 earned, on average, $34,000 more in base salary than they earned in their previous positions. However, many did not receive the same amount of nonsalary payments (including retention incentives) received while employed under the General Schedule or other pay system. Therefore, the average increase in total compensation (base salary and incentive or other nonsalary payments) was about $14,000 in 2010.

Under HHS policy, Title 42 employees are eligible to receive performance bonuses; recruitment, retention, and relocation incentives; and other nonsalary payments that are available to other HHS employees. In 2010, HHS issued nonsalary payments to 6,336 of its 6,697 Title 42 employees. Seventy-one percent of Title 42 employees earned ratings-based individual cash awards.
Less than 1 percent (60) of Title 42 employees received nonsalary payments in the form of recruitment, retention, or relocation incentives. According to senior officials at HHS's human resource office and NIH, Title 42 authority provides two primary benefits—appointment agility and compensation flexibility. These officials said appointment agility enables the agency to hire scientists, doctors, and other consultants to quickly fill knowledge, skill, and ability gaps so that medical research can move forward and to respond to medical emergencies. For example, according to HHS officials, the agency used Title 42 authority to quickly hire experts needed to develop a vaccine in response to the H1N1 flu pandemic of 2009. Appointment agility is also important because many research projects, particularly those at NIH, are not meant to be long-term, and Title 42 appointments can align with project time frames better than hiring full-time permanent staff under regular hiring authorities. In some cases, the temporary appointment of a researcher with highly specialized skills to assist with a limited-scope, limited-duration study may be more appropriate than a permanent position. According to officials, compensation flexibility helps HHS compete with the private sector and academia to hire and retain highly qualified employees with rare and critical skill sets, such as neuroscientists, applied researchers in dietary intakes, and engineers who can operate particle accelerators. HHS human resource officials stated the salaries HHS can offer to its top researchers are often not commensurate with private sector salaries. However, they said the higher compensation limits under Title 42 combined with other benefits—such as name recognition and access to advanced research equipment and technology not often available in the private sector or academia—can help offset compensation disparities and make HHS attractive to researchers, doctors, and scientists.

Because HHS does not consistently electronically record the authority under which many of its Title 42 employees were appointed, the number of employees hired under either section 209(f) or (g) could not be determined. When an employee is hired under Title 42 authority, HHS human resource officials create a personnel record in its central personnel transaction system, the Enterprise Human Resources and Payroll (EHRP) system. A required field in the personnel record exists to select a code from a drop-down menu designating the general authority under which the individual was hired, such as Title 42 or Title 5 authority. The personnel record also contains an open-ended text field to manually enter a specific section authority, such as sections 209(f) or (g), applicable to Title 42 authority. Our analysis of HHS data found thousands of cases where the section authority applicable to Title 42 was not recorded in EHRP. We also found that when the section authority field was used, there were more than 400 different types of entries made in the EHRP records. According to HHS officials, there are some data elements in the EHRP system—including the section authority under Title 42—that are unreliable. The majority of the unreliable data elements are those from nonrequired data entry fields.
Whereas required fields must be completed before a personnel action is saved in the system, Title 42 section authority is a free-form, open-ended field, and there is no system control in place to ensure the field is recorded, or recorded accurately, prior to saving the personnel action. Our case reviews found that the section authority for appointment—such as sections 209(f) or 209(g)—was always documented on hard copy personnel action forms, but in many cases was not recorded in personnel records in the EHRP system. We have previously reported that effective workforce planning and management require that human capital staff and other managers base their workforce analyses and decisions on complete and accurate personnel data. The lack of reliable information in this area may preclude HHS, Congress, and other organizations from providing effective oversight of the Title 42 program and evaluating its effectiveness. For example, the lack of section authority data in EHRP has made it difficult for HHS to provide accurate headcounts of employees hired under sections 209(f) or (g) and resulted in HHS overstating the number and operating division of its employees hired under these sections to oversight bodies, including Congress, and in response to this audit. We identified more than 600 instances where HHS erroneously included employees in its data submission to us that were not appointed under sections 209(f) or (g). Some erroneous cases included individuals we later found were hired under appointing authorities other than sections 209(f) or (g), including appointing authorities under 42 U.S.C. §§ 247b-8 and 210(g). One result of including these cases in error was that HHS reported it had made appointments under sections 209(f) or (g) at the Centers for Medicare and Medicaid Services, which would be prohibited by law. Our analysis found these appointments were made under different authorities. HHS officials acknowledged there were potentially many cases included that were not employees hired under sections 209(f) or (g), as it was sometimes difficult to discern from available data whether employees were hired under sections 209(f) or (g) rather than other authorities under Title 42. According to human resource officials, when attempting to report on the agency's Title 42 employees, they chose to include questionable cases rather than risk an undercount.

HHS did not consistently adhere to certain sections of its policy for hiring and converting employees under section 209(f). We conducted 28 case file reviews of appointments made under existing section 209(f) policy to determine the extent to which HHS practices were consistent with its policy. While not generalizable across the population of Title 42 employees, the case file reviews indicate that HHS appointment practices are consistent with some aspects of its section 209(f) policy. For example, all appointees met education requirements for the type of scientific position being filled. While not an explicit requirement of the policy, HHS consistently documented the basis for compensation and any recruitment or retention incentives provided to section 209(f) employees. In some cases, however, HHS did not consistently adhere to its requirements, as shown in table 5.

Among other things, the section 209(f) policy provides that the authority may be used to fill scientific positions—positions in which the incumbent is directly involved in or manages scientific research or activities—and administrative positions that require the incumbent to have scientific credentials. The policy requires that the same recruitment plan be used for both Title 5 and Title 42 to demonstrate that other available personnel systems failed to yield qualified candidates, and it explains the process and documentation requirements necessary to demonstrate that other available personnel systems, including Title 5, the Senior Biomedical Research Service, and the PHS Commissioned Corps, have failed to yield qualified candidates. The policy also identifies specific positions and/or categories of positions at NIH that may be filled through section 209(f) without "exhausting" other recruitment mechanisms or authorities.

HHS has no agencywide implementing policy for appointing and compensating employees hired as fellows under section 209(g), including details about what documents are needed to support the basis for appointments and compensation. We have previously reported that agencies should have clearly defined, well-documented, transparent, and consistently applied criteria for appointing and compensating personnel. In lieu of guidance from HHS, the individual operating divisions established their own policies and guidance for appointing and compensating fellows under section 209(g), each with different levels of detail, compensation limits, and documentation requirements. NIH has instructions for appointing fellows as well as guidance for the use of recruitment and retention incentives. FDA's Service Fellowship Plan provides appointment and compensation-setting procedures for section 209(g) fellows and caps total compensation at Executive Level IV, with some exceptions above that cap available for consideration. CDC's policy for its section 209(g) Fellowship Program contains provisions for all fellows and general compensation guidance; top pay for a fellow is set at the equivalent of the Grade 15, Step 10 pay level. The lack of an HHS-wide policy poses the risk that compensation decisions for section 209(g) fellows at HHS may not be made consistently across operating divisions. Although some guidance exists at the operating division level for setting compensation targets, in 11 of the 20 case studies we conducted of section 209(g) fellows, we found either no or insufficient documentation to support the basis for compensation. Without an agencywide policy, the agency cannot be assured that it is allocating its resources most appropriately. According to senior human resource officials at HHS, an agencywide policy is needed, and the agency is developing a policy for appointing and compensating fellows under section 209(g). However, it is not clear that the policy will address important issues such as documenting the basis for compensation. The section 209(g) policy was still in development as of May 2012.

Congress provided EPA with the authority to use 42 U.S.C. § 209 to employ up to 30 persons at any one time through fiscal year 2015. EPA appointed 17 fellows in ORD from 2006 to 2011 under section 209(g). Of the 17 fellows appointed under Title 42, 12 were hired from outside EPA, while the remaining 5 converted from other positions within EPA. Of the 17 appointments, 14 were selected through advertised competition. To date, all 17 fellows remain with EPA, and appointments for the three fellows hired in 2006 have been renewed for another 5-year term. (See fig. 3 for the cumulative number of onboard Title 42 staff, by new hire or conversion.)
EPA's Title 42 fellows hold a range of senior scientific and leadership positions within ORD. One fellow, who leads an integrated systems toxicology research program, was previously an Associate Dean at a university where the individual led similar research efforts, and leads an ORD division with more than 80 staff. Another leads a research program that develops biological measures to assess the impact of environmental exposure on human health and serves as Director for the Environmental Public Health Division. The lead scientist for bioinformatics within the National Center for Computational Toxicology (NCCT) is a Title 42 fellow, responsible for conducting data analysis, developing solutions for data management, and serving as senior advisor to the center's director.

According to EPA officials, Title 42 provides two important tools EPA needs to achieve its mission. First, EPA reported that Title 42 provides the flexibility to be competitive in recruiting top experts who are also sought after by other federal agencies, private industry, and academia. Prior to using Title 42, EPA had difficulty recruiting and retaining scientists in certain highly specialized disciplines under regular hiring authorities. We reported in 2001 that EPA faced significant challenges in recruiting and maintaining a workforce with mission-critical skills in key technical areas such as environmental protection, environmental engineering, toxicology, and ecology. EPA officials told us Title 42 has helped the agency recruit individuals in cases where, because of the specialized expertise needed, authority to set pay above the limits of other hiring authorities was necessary to be competitive in the labor market. As such, the minimum base salary for Title 42 employees at EPA is equal to the highest base pay level for employees paid under the General Schedule, and the maximum base salary is $250,000. EPA officials also stated Title 42 provides the appointment flexibility needed to align experts with specific skills to changing scientific priorities. One specific program where EPA cited the importance of using Title 42 in that way was in the development of the NCCT. There are four Title 42 fellows at NCCT, including its director. The fellows assist in the development of NCCT initiatives, such as the Computational Toxicology Research Program, a program that is developing alternatives to traditional animal testing. A 2010 review by the National Academy of Sciences National Research Council reported "the use of Title 42 appointments to develop NCCT is an excellent example of how such appointments can be used to build new capacity and advance the state of science." EPA officials stated it is not the agency's intention to hire a fellow long-term under Title 42, but rather to employ the individual as long as a priority remains high. For the three fellows hired in 2006, EPA renewed the terms for another 5-year appointment.

Annual salaries range from approximately $153,000 to $216,000, with an average salary of about $176,000 and a median salary of about $171,000. As shown in table 6, 15 of the 17 EPA fellows had salaries exceeding Executive Level IV. Of the 12 new hires from outside EPA, 8 earned more in annual pay than they had earned in their previous positions, 3 earned less, and 1 appointee's salary did not change. Salary changes from previous positions ranged from a decrease of $85,000 to an increase of $40,000, not including recruiting incentives. Eight of the 12 new hires received recruitment incentives ranging from $10,000 to $50,000.
EPA documents indicate that the recruitment incentives were offered to compete with private industry and to aid in career transition. All five employees converted from other positions within EPA received a salary increase, ranging from $6,000 to $15,000. None of the converted employees received incentive payments. Converted employees generally assumed additional responsibilities as a Title 42 employee. Our case studies included four of the five EPA employees who converted to Title 42. Of the four appointees who came from within EPA, one was promoted from the lead oil research program scientist to the director of the land remediation and pollution division, one moved from being an associate director to a division director within the same national center, one was promoted from a branch chief to a division director, and one remained a director. In December 2010, EPA began a pilot of using market salary data to estimate salaries of what Title 42 candidates could earn in positions outside of government given their education, experience, professional standing, and other factors. EPA used the market salary data to inform salary negotiations for the five fellows appointed since the implementation of the pilot. According to EPA officials, the market salary pilot concludes in December 2012 and its effect will be analyzed at that time. In appointing Title 42 fellows, EPA generally followed appointment guidance described in its Title 42 Operations Manual. The manual provides guidance for managers, supervisors, and human resources specialists implementing Title 42. In all 10 case files we reviewed, documents provided by EPA show Title 42 practices were generally consistent with its guidance and requirements. Table 7 shows some selected Title 42 appointment requirements and observations from our case reviews. We conducted 10 case file reviews of EPA Title 42 employees and in 2 cases we discovered issues related to EPA’s procedures for mitigating potential financial conflicts of interest. EPA’s Title 42 employees are subject to the same laws and regulations that govern the ethical conduct of other federal employees. For example, covered Title 42 employees are required to submit a public financial disclosure report (SF-278) as part of the appointment process and annually thereafter. Title 42 employees are also covered under the criminal conflict of interest law, 18 U.S.C. § 208. Section 208 prohibits a federal employee from participating personally and substantially in a particular matter in which he or she has a personal financial interest. The statute is intended to prevent an employee from allowing personal interests to affect his or her official actions and to protect governmental processes from actual or apparent conflicts of interest. The application of the statute can be waived so that an employee need not divest his or her financial interest or recuse themselves from the particular matter, where the nature and size of the financial interest and the nature of the matter in which the employee would participate are unlikely to affect an employee’s official actions. EPA’s Title 42 guidance includes pre-employment ethics clearance procedures for identifying and mitigating potential conflicts of interest prior to appointment. 
As part of the procedures, an ethics official in EPA’s Office of General Counsel (OGC/Ethics) works with the candidate to ensure that all required information is reported on the disclosure form and to develop an ethics agreement, as necessary, to mitigate or resolve any identified potential conflicts. A job offer may only be extended after OGC/Ethics signs the public financial disclosure report. Although EPA has preappointment ethics clearance procedures as noted above, it does not have postappointment procedures in place to ensure Title 42 employees meet ethics requirements to which they have previously agreed. In two cases we reviewed, employees had potential conflict of interest situations arise after appointment resulting, in part, from the agency’s failure to ensure Title 42 employees followed agreed upon ethics requirements. In one case, EPA general counsel determined stock owned by the candidate could be a potential conflict of interest and directed the candidate to either recuse himself from certain duties or divest himself of the stock as a condition of employment. The candidate agreed to divest of the stock and was subsequently hired. A year later, during the routine review of the employee’s annual financial disclosure form, EPA discovered that the employee still owned the stock. The employee was ordered to divest of the stock and this time immediately complied. EPA also reviewed the projects for which the employee was involved while still owning the stock and determined that the employee had not participated in any particular matter which would have constituted a conflict of interest. According to EPA, there was confusion concerning who, if anyone, was tasked to ensure the divestiture occurred. In another case, based on the review of the candidate’s public financial disclosure form, EPA and the candidate entered into an ethics agreement, which documented ethical constraints that would apply to the candidate and to caution the candidate about certain assets held. The agreement listed entities in which the individual held stock and advised that, as required by 18 U.S.C. § 208, the individual should not participate in any particular matter that affected any of the listed entities unless the individual first obtained a written waiver from EPA/OGC or the value of the asset was low enough to qualify under a regulatory de minimis exemption. Despite these efforts, a year later, while responding to the employee’s request for additional time to file the annual public financial disclosure form, EPA discovered that the employee was participating in a matter while holding stock in a company (a listed entity in the ethics agreement) that EPA/OGC initially believed could be affected by this matter. Concluding that the employee’s participation was a conflict of interest, EPA/OGC directed the employee, who had been working on the matter for approximately 3 days, to immediately stop working on the matter. The employee immediately complied and sold the stock holding in question in order to resume working on the matter. OGC/Ethics made no inquiry into the specific activities the employee engaged in during those 3 days. Almost 2 years later, OGC/Ethics officials now conclude that this company was not sufficiently affected by the matter to present a violation of 18 U.S.C. § 208 in light of facts that subsequently emerged. 
EPA officials acknowledge that, beyond these two cases, the agency's efforts to identify and mitigate potential conflicts of interest after appointment can be improved, and the agency has taken steps to improve ethics oversight. For example, in order to increase overall awareness of ethics responsibilities, EPA reported it provided additional training to a senior ethics official and now copies Deputy Ethics Officials (officials responsible for assisting employees in complying with ethics requirements) when cautionary memoranda are issued. EPA also told us it has plans to develop mandatory training sessions for ethics officials in its field laboratories and centers and to implement a process where employees hired under Title 42 and other authorities send EPA OGC confirmation of such actions as stock divestitures or signed recusals. As details and implementation timelines for these plans were not available at the time of our review, it is not clear that these plans fully consider and address ethics issues that arise after appointment and ensure previously agreed upon ethics requirements are followed, as was the issue in the two cases above.

HHS and EPA have used Title 42 to recruit and retain highly skilled, in-demand personnel to government service. Although HHS relies on Title 42 to fill some of its most critical scientific and medical research positions, the lack of complete data and guidance may limit the agency's ability to strategically manage the use of the authority. HHS erroneously reported appointments made under sections 209(f) and (g) that would have been prohibited by law, indicating the agency's data management practices may preclude effective oversight of the program and workforce planning. Effective oversight is particularly important in light of HHS's increasing use of Title 42 and the number of employees earning salaries higher than most federal employees. Inconsistencies between HHS's policies and practices related to section 209(f) may result in that authority being used in ways for which it was not intended. Recent changes to 209(f) policy issued by HHS should help the agency more consistently follow requirements. As appointments have been made under 209(g) without documentation showing the basis for compensation, relying on 209(g) guidance issued only at the operating division level may not be sufficient to ensure appointments and compensation under this authority are appropriate and consistent. HHS has acknowledged the need for agencywide 209(g) guidance, but has not determined if it will include requiring documentation showing the basis for compensation.

EPA generally followed its Title 42 policies and has incorporated some modifications to improve its appointment and compensation practices; however, EPA's current ethics guidance does not sufficiently ensure Title 42 employees meet ethics requirements after appointment. EPA acknowledged it could improve its postappointment ethics oversight and reported it has plans to ensure that Title 42 employees send OGC confirmation of stock divestitures and other ethics requirements. However, at the time of our review, EPA had not provided us with implementation plans or timeframes. Although its plans appear to be prudent steps for addressing the specific issues that arose in the cases we reported, it will be important for EPA to implement them as soon as possible to mitigate the risk of future potential conflict of interest issues.
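The kind of systematic postappointment tracking contemplated in the recommendation that follows can be illustrated with a simple record-keeping structure: log each obligation from an ethics agreement (for example, a divestiture or a signed recusal), record when confirmation is due, and flag items with no confirmation on file. The sketch below is purely illustrative; the data structure, names, and dates are assumptions for the example, not EPA's actual process.

```python
# Illustrative tracker for ethics-agreement obligations after appointment.
# All employees, obligations, and dates shown are hypothetical.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class EthicsObligation:
    employee: str
    obligation: str            # e.g., "divest stock listed in ethics agreement"
    confirmation_due: date
    confirmed_on: Optional[date] = None

def overdue(obligations, as_of):
    """Return obligations with no confirmation received by the due date."""
    return [o for o in obligations if o.confirmed_on is None and o.confirmation_due < as_of]

items = [
    EthicsObligation("Fellow A", "divest stock listed in ethics agreement", date(2012, 3, 1)),
    EthicsObligation("Fellow B", "file signed recusal with OGC/Ethics", date(2012, 4, 15),
                     confirmed_on=date(2012, 4, 1)),
]
for item in overdue(items, as_of=date(2012, 6, 1)):
    print(f"Overdue: {item.employee} - {item.obligation} (due {item.confirmation_due})")
```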
To help ensure HHS has the data and guidance necessary to effectively oversee and manage its Title 42 authority, we recommend that the Secretary of HHS take the following three actions:

Ensure section authority, section 209(f) or 209(g), is consistently entered in appropriate automated personnel systems, such as by making section authority a required, drop-down field in the personnel system where this information is initially entered.

As part of its effort to implement new section 209(f) guidance, systematically document how policy requirements were fulfilled when hiring or converting 209(f) employees. This could include such items as:
− the basis for which the position is considered scientific,
− recruitment and retention efforts made under other hiring authorities before using Title 42,
− a conversion's recognition as a national or international expert in the field,
− a conversion's original scientific or scholarly contributions of major significance in the field,
− a conversion's leadership in the field equivalent to a full-tenured professor in academia, and
− a conversion's special knowledge and skills of benefit to the agency.

As part of its ongoing effort to develop agencywide policy for appointing and compensating employees hired under section 209(g), ensure the policy requires and provides guidance for documenting the basis for employee compensation.

To help improve enforcement of ethics requirements, we recommend the Administrator of the EPA direct the Designated Agency Ethics Official to take the following two actions:

As part of its efforts to improve postappointment ethics oversight, develop and document a systematic approach for ensuring Title 42 employees are compliant with ethics requirements after appointment.

Implement, as part of this approach, reported plans to require Title 42 employees to provide proof of compliance with ethics agreements to a designated ethics official within a reasonable timeframe after appointment.

We provided the Secretary of Health and Human Services and the Administrator of the EPA an opportunity to comment on a draft of this report. The HHS Assistant Secretary for Legislation and EPA's Acting Assistant Administrator for Research and Development provided written responses and technical comments, which we incorporated as appropriate. The agencies' comments appear in appendixes II and III.

In a June 7, 2012, letter responding to a draft of this report, HHS agreed with each of the three recommendations. HHS's ongoing and proposed actions noted in the response address our concerns and are likely to improve the agency's management and oversight of its Title 42 authority. HHS agreed with our first recommendation to ensure section authority is consistently entered in appropriate automated personnel systems. Specifically, HHS stated that as it moves forward with the implementation of a new enterprise human resources system, it will explore the possibility of using a drop-down field to enter Title 42 section authority. HHS stated that its Office of Human Resources will continue to work with the Operating Divisions and Staff Divisions to ensure that Title 42 personnel actions are processed in a consistent and accurate manner. HHS also agreed with our two recommendations addressing Title 42 policies. HHS stated that, in part due to our findings, it updated its section 209(f) policy to address our concern that HHS document how policy requirements were fulfilled when hiring or converting section 209(f) employees.
In addition, HHS agreed with our recommendation to develop agencywide policy for appointing and compensating employees hired under section 209(g) authority. HHS stated that the section 209(g) policy will be implemented in the near future.

In a June 6, 2012, letter responding to our draft, EPA disagreed with the recommendation directed to EPA and with our discussion of the second ethics case. Specifically, EPA requested that we update our discussion to note that the individual had not yet visited a site related to work on the matter. EPA stated that since the individual had not yet visited the site, EPA is not aware of any evidence that the employee personally and substantially participated in the matter. We do not believe a change in the discussion of this ethics matter is warranted. GAO made no independent conclusions as to whether the individual's participation during the brief period of time we note constituted personal and substantial participation in the matter and whether this was a conflict of interest in violation of 18 U.S.C. § 208. Rather, our discussion of this case, including whether the individual's participation was a conflict of interest, was based exclusively on and attributed to conclusions made by EPA/OGC, both at the time of the event and in subsequent interviews conducted for this engagement. Specifically, documentary evidence at the time of our review supports the fact that EPA's concern was the individual's participation in the matter in general and that this concern was not influenced by the fact that the individual was not yet on site. As we reported, EPA/OGC directed the individual to stop working on the matter when it found he owned stock in a company that could be affected by the matter he was working on (the individual immediately stopped working on the matter and sold the stock in order to resume working on the matter).

EPA disagreed with our statement that it is not clear that EPA plans to develop an approach to address ethics issues that arise after appointment and ensure previously agreed-upon ethics requirements are followed. In its comments, EPA noted that on February 17, 2012, it sent us a letter documenting the steps it has taken and plans to take to address postappointment ethics issues and ensure previously agreed-upon ethics requirements are followed. Specifically, in its February letter, EPA reported it recently implemented a process in which it now copies Deputy Ethics Officials when cautionary memoranda are issued to OGE 278 filers. EPA also reported it has plans to implement a process for public filers, including employees hired under the Title 42 special hiring authority, to ensure that they send OGC confirmation of stock divestitures, for example, or signed recusals. We agree that providing cautionary memoranda to the officials responsible for assisting the employee in remaining compliant with ethics requirements is a step that could improve EPA's postappointment ethics oversight and added this example to the report accordingly. However, because EPA did not provide a firm date or timelines for implementing its reported plan to ensure employees send OGC confirmation of stock divestitures or signed recusals, we did not revise the finding.
EPA disagreed with the recommendation that it develop and document a systematic approach for ensuring Title 42 employees are compliant with ethics requirements after appointment and consider adding steps to the ethics clearance process that require Title 42 employees to provide proof of compliance with ethics agreements. EPA asked that we remove the recommendation or revise it to acknowledge the plans mentioned above and the fact that EPA continues working toward implementation. We acknowledge EPA is considering a plan to require proof of compliance with ethics agreements and, because we believe this is a prudent and needed step for improved ethics oversight, have revised the recommendation to reflect EPA's plans. Because the two ethics issues we reported occurred over 2 years ago and EPA has acknowledged that improvements in its postappointment ethics oversight are needed, such plans should be implemented as soon as possible. We maintain that the recommendation is still necessary to ensure EPA develops detailed plans and begins moving toward implementation as soon as possible to mitigate the risk of additional potential conflict of interest issues.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees. We are also sending copies to the Secretary of Health and Human Services and the Administrator of the Environmental Protection Agency. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2757 or goldenoffr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

This report examines the extent to which the Department of Health and Human Services (HHS) and the Environmental Protection Agency (EPA) have (1) used authority under 42 U.S.C. §§ 209(f) and (g) to appoint and set pay for employees since January 2006, and (2) followed applicable agency policy, guidance, and internal controls for Title 42 appointments and compensation. To address the first objective, we obtained and analyzed personnel data from HHS and EPA to describe Title 42 appointment and compensation trends at HHS and EPA since 2006, including the number of Title 42 employees; the types of occupations and positions held by Title 42 employees; compensation rates, including the number of Title 42 employees earning more than certain federal salary levels; the number of nonsalary payments (e.g., performance bonuses and retention incentives) provided to Title 42 employees and their purpose; and the number of civil servants that have been converted to Title 42 appointments and the compensation changes associated with those conversions. We determined 2006 was the most appropriate beginning year for our analysis because, according to HHS human resource officials, personnel data prior to 2006 were likely not sufficiently reliable. Also, EPA began using Title 42 in 2006. HHS data presented in this report cover 2006 through the end of 2010, the last year for which complete data were available; EPA data cover 2006 through the end of 2011.
We conducted a variety of data tests and interviews with agency officials to correct and refine HHS Title 42 data and were able to develop a data set that was sufficiently reliable for our purposes. We could not, however, report on the number of HHS Title 42 employees hired under a particular section authority, section 209(f) or 209(g), because section authority is not consistently recorded by HHS. For EPA, we performed data testing and interviewed agency officials to identify any data gaps or inconsistencies with compensation data provided and compared EPA data to information found in official agency documents. We determined that EPA's data were sufficiently reliable for the purposes of our report.

To assess the extent to which HHS and EPA have followed applicable policy, guidance, and internal controls, we reviewed the policies and guidance at HHS and EPA in order to understand the conditions under which Title 42 appointees are to be recruited, appointed, compensated, and managed. We determined case file reviews would be the most appropriate approach to obtain the information needed to (1) compare practices with policy and guidance, and (2) provide illustrations and context for data analysis results. We conducted a total of 63 case file reviews out of 1,502 HHS cases within selected strata in two phases. In the first phase, we conducted 23 case file reviews to address data reliability concerns. The number of case file reviews in this phase was proportional to the frequency with which we identified cases with data characteristics that deviated from our understanding of the purpose and use of sections 209(f) and (g). In the second phase, we conducted 40 case file reviews based on a random selection of cases that had characteristics related to various areas of HHS Title 42 policy and guidance. For the HHS case file selection, cases were grouped into strata based on certain characteristics, such as hired under section 209(f), hired under section 209(g), newly hired in 2010, converted in 2010, or with aspects of data inconsistent with our understanding of Title 42's purpose, and were randomly selected from within those strata. For EPA, we selected 10 of the 17 Title 42 employees for case file reviews based on a cross section of (1) labs and centers within EPA, to understand if Title 42 was implemented uniformly across the agency; (2) Title 42 candidate sources, such as the private sector, academia, and conversions, to determine if differences existed in recruitment and pay setting; (3) length of service as a Title 42 employee, to understand the effect of recent appointment and compensation guidance; and (4) compensation characteristics. We developed a data collection instrument for both HHS and EPA file reviews to capture information that was uniform and comparable. At the conclusion of each phase of our case file reviews, we analyzed the results, recorded our observations, and listed the next steps, such as interviews with agency officials and further data analysis, needed to obtain further context for our observations. The results from the case file reviews and subsequent activities enabled us to understand the results of our data analyses and provided the basis for our findings. We determined the number of case file reviews was sufficient to identify instances where practices were or were not consistent with policies and guidance, but our findings are not generalizable to the entire population of sections 209(f) and (g) employees at HHS or EPA.
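To illustrate the stratified selection approach described above, the following is a minimal sketch in Python. It is not GAO's actual tooling; the field names, stratum definitions, and sample sizes are hypothetical placeholders used only to show how cases can be grouped into strata and randomly selected from within each stratum.

    import random

    def stratified_selection(cases, strata, samples_per_stratum, seed=1):
        """Group case records into strata and randomly select cases within each stratum.
        cases: list of dicts describing case records (hypothetical fields).
        strata: dict mapping a stratum name to a predicate over a case record.
        samples_per_stratum: dict mapping a stratum name to a desired sample size."""
        rng = random.Random(seed)
        selections = {}
        for name, belongs in strata.items():
            members = [case for case in cases if belongs(case)]
            size = min(samples_per_stratum.get(name, 0), len(members))
            selections[name] = rng.sample(members, size)
        return selections

    # Hypothetical strata mirroring the characteristics described above.
    strata = {
        "hired_209f": lambda c: c.get("authority") == "209(f)",
        "hired_209g": lambda c: c.get("authority") == "209(g)",
        "newly_hired_2010": lambda c: c.get("hire_year") == 2010,
        "converted_2010": lambda c: c.get("conversion_year") == 2010,
    }
    # Example call (all_cases would be the full set of HHS case records):
    # selected = stratified_selection(all_cases, strata, {"hired_209f": 10, "hired_209g": 10})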
We conducted this performance audit from May 2011 through July 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the individual named above, Trina Lewis, Assistant Director; Shea Bader; Carl Barden; Laurel Beedon; Andrew Ching; Sara Daleski; Jeffrey DeMarco; Karin Fangman; Ellen Grady; James Lager; Cynthia Saunders; Jeff Schmerling; and Gregory Wilmoth made major contributions to this report.

HHS and EPA have been using special hiring authority provided under 42 U.S.C. §§ 209(f) and (g), referred to in this report generally as Title 42 or specifically as section 209(f) or section 209(g), to appoint individuals to fill mission critical positions in science and medicine and, in many cases, pay them above salary limits usually applicable to federal government employees. GAO was asked to assess the extent to which HHS and EPA have (1) used authority under sections 209(f) and (g) to appoint and compensate employees since 2006, and (2) followed applicable agency policy, guidance, and internal controls for appointments and compensation. GAO analyzed agency Title 42 data, interviewed agency officials, and conducted file reviews.

The Department of Health and Human Services' (HHS) use of special hiring authorities under 42 U.S.C. §§ 209(f) and (g) has increased in recent years. Nearly all HHS Title 42 employees work in one of three HHS operating divisions: the National Institutes of Health (NIH), the Food and Drug Administration (FDA), and the Centers for Disease Control and Prevention (CDC). Title 42 employees at HHS serve in a variety of areas, including scientific and medical research support and in senior, director-level leadership positions. At NIH, one-quarter of all employees, and 44 percent of its researchers and clinical practitioners, were Title 42 appointees. HHS reported that Title 42 enables the agency to quickly fill knowledge gaps so medical research can progress and to respond to medical emergencies. HHS further reported Title 42 provides the compensation flexibility to compete with the private sector. In 2010, 1,461 HHS Title 42 employees earned salaries over Executive Level IV ($155,500 in 2010). HHS does not have reliable data to manage and provide oversight of its use of Title 42 because the section authority used to hire Title 42 employees is not consistently recorded in personnel systems. Moreover, HHS did not consistently adhere to certain sections of its 209(f) policy. For example, the policy states that 209(f) appointments may only be made after non-Title 42 authorities have failed to yield a qualified candidate, but GAO found few instances where such efforts were documented. HHS has recently issued updated 209(f) policy that addresses most of these issues. HHS is developing agencywide policy for appointing and compensating fellows under 209(g), but it is not clear the policy will address important issues such as documenting the basis for compensation.

Since 2006, the Environmental Protection Agency (EPA) has used section 209(g) to appoint 17 employees. Title 42 employees lead scientific research initiatives and some manage or direct a division or office.
According to EPA officials, Title 42 provides the flexibility to be competitive in recruiting top experts who are also sought by private industry, academia, and others. Also, Title 42 provides the appointment flexibility needed to align experts with specific skills to changing scientific priorities. Fifteen of EPA's 17 Title 42 employees earned salaries over Executive Level IV in 2010. EPA appointment and compensation practices were generally consistent with its guidance; however, EPA does not have postappointment procedures in place to ensure Title 42 employees meet ethics requirements to which they have previously agreed.

GAO recommends HHS (1) ensure section authority, 209(f) or 209(g), is consistently entered in appropriate personnel systems, (2) systematically document how policy requirements were fulfilled when hiring or converting 209(f) employees, and (3) ensure agencywide 209(g) policy currently in development provides guidance for documenting the basis for employee compensation. GAO recommends EPA develop and document a systematic approach for ensuring Title 42 employees are compliant with ethics requirements after appointment. HHS agreed with GAO's recommendations, while EPA disagreed, citing certain actions already taken. GAO acknowledges EPA's plans to address these issues, but maintains the recommendation is needed to ensure implementation.
FEBs were established by a Presidential Directive in 1961 to improve coordination among federal activities and programs outside Washington, D.C. The boards' overall mission includes supporting and promoting national initiatives and responding to the local needs of federal agencies and their communities. They provide a point of coordination for the development and operation of federal programs having common characteristics. Approximately 85 percent of all federal employees work outside the greater Washington, D.C., area, and the number of FEBs has grown from 10 to 28 over the past 46 years. When President Kennedy established the FEBs, they were located in the major cities in each of the 10 Civil Service Commission administrative regions. He later added 2 more boards, while President Johnson authorized 3 more, President Nixon added 10, and President Ford added 1. Two more boards were added by OPM in the 1990s, bringing the total number of boards to 28. Figure 1 shows the metropolitan areas where the 28 boards are located.

According to the regulations that guide the FEBs, the Director of OPM is responsible for overseeing and directing the operations of all of the FEBs consistent with the law and with the directives of the President. The boards are composed of the federal field office agency heads and military commanders in their cities, and the regulations state that each FEB should have a chair elected by the FEB members to serve a term not to exceed a year. The regulations also state that the boards should be governed by bylaws or other rules for their internal governance that are developed for each board. Although FEB membership is mandatory by Presidential Directive for the senior agency officials within an FEB's geographic boundaries, the boards have no independent authority, and they rely on the voluntary cooperation of their members to accomplish their goals.

The FEB funding structure is unusual within the federal government. The boards have no legislative charter and receive no congressional appropriation. Rather, each FEB is supported by a host agency, usually the agency with the greatest number of employees in the region. These host agencies provide varying levels of staffing, usually one or two full-time positions: an executive director and an executive assistant. Some agencies also temporarily detail employees to the FEB staff to assist their local boards and to provide developmental opportunities for their employees. Additionally, the FEBs are supported by member agencies through contribution of funds as well as in-kind support, such as office space, personal computers, telephone lines, and Internet access. In 2006, OPM estimated the cost of FEB operations at approximately $6 million.

To assist in standardizing emergency activities across the FEB system, OPM and the FEBs are establishing a set of emergency preparedness, security, and employee safety activities with performance measures that will be common to all of the boards. Although this effort is not complete, all of the selected FEBs were doing some emergency activities, such as hosting emergency preparedness training and exercises. For example, FEMA officials and the FEB representatives reported working together, often with the General Services Administration (GSA), on COOP training and exercises. In the past, some of the selected FEBs also played a role in responding to emergencies, although not all of the FEB representatives felt this was an appropriate activity for the boards.
OPM and the FEBs are developing a multiyear strategic plan that will include a core function for the FEBs called emergency preparedness, security, and employee safety. The plan will include expectations and measures to assess how well each FEB is performing the activities. OPM has reported working with the boards on emergency planning issues since 2001, and in March 2004, a document summarizing the FEB role in emergency situations was finalized. The boards' emergency support responsibilities include elements such as serving as a federal liaison with state and local emergency officials, establishing notification networks and interagency emergency preparedness councils, and hosting emergency preparedness exercises for agencies. A complete list of the FEB emergency support responsibilities detailed in the 2004 document can be found in appendix II. According to an OPM official, designating emergency support as a core function of the FEBs will further enhance the FEB role in emergency situations. OPM officials recognize that the FEBs can add value to regional preparedness efforts as vehicles for communication, coordination, and capacity building but acknowledge that the emergency activities of the FEBs have varied from board to board. The emergency support function is intended to provide consistent delivery of FEB emergency preparedness and response programs and activities for the federal workforce across the system of 28 boards.

Not all of the representatives from the selected FEBs were convinced that the boards should have an expanded emergency service support role. Although all of the selected boards had some type of emergency communication network and emergency preparedness council in place, there was disagreement among the FEB representatives on the role the FEBs should play in emergency service support, particularly during an emergency. Some of the executive directors expressed concern that, without adequate staff and resources, they will not be able to meet expectations. One executive director, for example, noted that because her local board lacked 24/7 communication and coordination abilities, it could not be held accountable for emergency service roles and responsibilities. Another executive director commented that there was a general expectation within the board's metropolitan federal community that the FEB will assume a significant leadership role during a possible future emergency. However, he observed that limited and declining funding does not provide for an effective communication system. As a consequence, he felt this expectation was unrealistic and may contribute to major misunderstandings in the event of a significant emergency. On the other hand, several of the executive directors felt that the FEBs would be able to accomplish much more in this area with additional resources. For example, one executive director, with an emergency operations background, emphasized that if the boards were given dependable funding and increased stature within the federal government by formal recognition of their emergency support role, their return on investment in terms of emergency support functions would be substantial. In general, the consensus among those who viewed the FEBs as having an increased role in emergency operations was that with dependable funding and resources, all the boards in the FEB system could and should provide a similar level of emergency operations support.
Several FEB representatives also stated that OPM leadership and direction in clearly outlining emergency operations expectations and OPM’s oversight of these activities would diminish uncertainty about the boards’ role in emergency support, both among the boards and federal agencies in general. They were encouraged by the designation of emergency services as a core FEB function. The FEBs are charged with providing timely and relevant information to support emergency preparedness and response coordination, and OPM expects the boards to establish notification networks and communications plans to be used in emergency and nonemergency situations. The boards are also expected to disseminate relevant information received from OPM and other agencies regarding emergency preparedness information and to relay local emergency situation information to parties such as OPM, FEB members, media, and state and local government authorities. FEB representatives generally viewed the boards as an important communications link between Washington and the field and among field agencies. For example, the Atlanta FEB’s executive director described the boards as a conduit for both emergency and nonemergency information to member agencies through e-mail, telephone, and Web sites. While many of the items needing dissemination are also passed through normal agency channels, several FEB representatives noted that it usually takes longer for communication to be received through their agency headquarters than through the FEB channel. The Oklahoma FEB chair described the FEBs as central depositories that receive information from headquarters and quickly disseminate that information to the field, reducing the information gap between Washington, D.C., and the rest of the country. Previously, much of the emergency support responsibility of FEBs was in providing communication regarding hazardous and inclement weather conditions. Almost all of the selected FEBs reported this as an emergency activity for which they continue to have responsibility. For example, the Atlanta FEB executive director said that during potential weather emergencies, she and members of the Policy and Steering Committee from GSA and the National Weather Service gather information about the forecast and road conditions. The executive director, FEB chair, and members of the Policy and Steering Committee then conduct a 4:00 a.m. conference call to make a decision about suggested agency closings or delayed reporting. Following the conference call, the FEB executive director posts a message on the board’s emergency hazard line that designated agency employees can check. This message is also posted to the FEB general telephone line and the FEB Web site. Several of the executive directors emphasized that they can only make recommendations to the federal agencies in their areas of service, but they cannot mandate that federal agencies close for weather or other emergencies. Although each of the selected boards we reviewed reported conducting communications activities as a key part of its emergency support service, they used a number of different types of communication systems. The Boston FEB, for example, operates two electronic communications mechanisms to be in contact with senior federal agency officials during local and national emergencies, both during and after hours. 
The first is an Internet portal, developed and maintained by the DHS Federal Protective Service, which is designed to provide senior agency officials access to up-to-date information, such as threat assessments and emergency weather. The second communications system is called EDIAL, housed and maintained by the First U.S. Coast Guard District's 24-hour command system. EDIAL, funded for the FEB by GSA New England, enables the board to communicate with agency officials simultaneously via an electronic telephone message in times of emergency. Several of the executive directors mentioned the importance of having access to the Government Emergency Telecommunications Service (GETS) cards, a White House-directed emergency phone service. GETS provides emergency preparedness personnel a high probability of completion for their phone calls when the probability of completing a call through normal channels is significantly decreased. The majority of the selected boards reported keeping an emergency contact list for officials in their member agencies. Several of the executive directors emphasized the importance of standardizing the communications systems of the boards so that every FEB is communicating in the same way. The communication abilities among the selected FEBs did vary, often dependent on the communication system provided by a supporting agency. For example, the Atlanta FEB reported previously using an emergency call-down system supplied by the Atlanta U.S. District Court, but the system was too slow. The executive director there said she was exploring the possibility of transferring to the Southwestern Emergency Response Network, which would give her greater capacity to notify area agencies in emergency situations. A complaint about many of the FEB communication systems was that they were slow or needed to be manually updated. The Dallas-Fort Worth FEB executive director noted that with the boards becoming more of a national network and serving as backups to one another, the importance of a fully supported national communication network for the FEBs is becoming even more evident.

According to OPM, the FEB role in emergency service support also includes coordination activities. For example, OPM reported that it expects the boards to serve as federal liaisons for state and local emergency officials and to assess local emergency situations in cooperation with federal, state, and local officials. Although all of the boards reported some involvement of state and local officials in their emergency activities, the degree of board connections with state and local officials varied. The Minnesota FEB and the Oklahoma FEB, for example, reported strong relationships with state and local government officials, state and local emergency management leaders, and private sector businesses. The Dallas-Fort Worth FEB executive director reported that the board partners with state and local government representatives, the private sector, law enforcement, and first responders, all of which are key players in assessing local emergency situations. On the other hand, the Chicago FEB executive director said that because Chicago is so large, the board has few established relationships with state and local officials. The chair of the Boston FEB said its board had 24-hour contact numbers for some state officials but not city officials.
In terms of coordination, the FEBs are also charged with identifying a core group of federal leaders in each community to discuss planned courses of action, such as delayed arrival and shelter in place, in the event of an emergency. All of the selected boards had some type of emergency preparedness council. In the case of the Los Angeles FEB, however, the emergency preparedness committee had to disband because of significant transportation challenges in the Los Angeles area. The board’s executive director said they now have an emergency preparedness e-mail group. In addition, OPM expects the boards to provide problem resolution assistance as appropriate, to include identifying federal resources that may be available to assist the community in responding to, or recovering from, an emergency. Examples of some of the selected boards’ past responses during emergencies are detailed in a section below. OPM expects the FEBs in their capacity-building role to facilitate training for member agencies regarding their responsibilities related to occupant emergency plans, COOP planning, and other emergency preparedness topics. All of the selected FEBs reported hosting at least one emergency preparedness briefing, training, or exercise during the past year. The Minnesota FEB, for example, hosted homeland security briefings by the Federal Bureau of Investigation, the Transportation Security Administration, the Minnesota Department of Health, the Secret Service, FEMA, the Federal Protective Service, state and county emergency management directors, and the Department of Defense. The Denver FEB conducts a yearly scenario-based COOP exercise usually in conjunction with FEMA, the National Archives and Records Administration (NARA), and GSA. In addition to other preparedness exercises, the Chicago FEB hosted an exercise dealing with emergency preparedness and people with disabilities. Several FEB representatives made the point that these emergency preparedness exercises and activities are particularly valuable for the smaller federal agencies. While military, law enforcement, and public safety federal agencies may have a solid grasp of emergency preparedness, some of the smaller administrative agencies need help defining what their responsibilities are in this area. In addition, an FEB executive director and a chair said that the interagency exercises help to ensure that federal workers are receiving consistent treatment across the agencies. One of the FEB emergency support responsibilities is facilitating COOP training for federal agencies, and the FEB representatives reported working with FEMA and, in many cases, GSA to accomplish this. As mentioned previously, COOP planning is an effort conducted by agencies to ensure that the capability exists to continue essential agency functions across a wide range of potential emergencies. FEMA, GSA, and OPM are the three agencies that have the most direct impact on individual agency efforts to develop viable COOP capabilities. FEMA, as the lead agency for executive branch COOP planning, has responsibility for formulating guidance, facilitating interagency coordination, and assessing the status of executive branch COOP capabilities. 
GSA is responsible for working with FEMA in providing COOP training for federal agencies and assisting agencies in acquiring alternate facilities in the event of an emergency, while OPM is responsible for maintaining and revising human capital management guidance for emergency situations and assisting the heads of other departments and agencies with personnel management and staffing during national security emergencies. FEB representatives said they work with FEMA and GSA to develop and strengthen agency COOP and other emergency plans. For example, most of the boards have COOP working groups or emergency committees, often led by FEMA and GSA, which help conduct various emergency exercises. The exercises are designed to provide insight and guidance that can be used to develop specific action plans that address interruptions in services provided by their agencies, and FEB representatives said that COOP plans are tested through these exercises. A FEMA official testified in May 2006 that the COOP working groups established with the FEBs in New Orleans, Houston, and Miami prior to the hurricanes of 2005 and the many COOP training and exercise activities conducted by these organizations were instrumental in facilitating federal agency recovery and reconstitution efforts following hurricanes Katrina, Rita, and Wilma. During the past year, FEMA Region III nominated the Philadelphia FEB COOP working group for a 2006 Excellence in Government Award because the group had improved the federal image of preparedness among the Philadelphia community through training, exercises, and interagency coordination projects. The group received a Silver Medal Award as a result of the nomination. As another example of joint activities, through a campaign that is a collaboration among FEMA, the Red Cross, and other emergency response groups, the Boston FEB hosted a series of seminars aimed at educating employees about home preparedness.

Almost all of the FEB executive directors or chairs from the selected boards cited a positive and beneficial working relationship with FEMA. Some of the executive directors also said that a strong relationship exists between their boards and the FEMA regional directors in their areas. In addition, the regional FEMA officials we interviewed all said the FEBs assist FEMA with its mission. Another FEMA official noted that reaching out to the field can be difficult, but the FEBs provide communications and access to the majority of federal agencies, which makes FEMA's job much easier. Although FEMA does not have a formal agreement with the FEBs, FEMA and the FEBs have common interests in making sure the federal workforce is protected, and the relationship proves mutually beneficial. According to a FEMA official, many of the agencies in the field have COOP policies, procedures, and planning in place in part because the FEBs have assisted FEMA in getting this program out to them. He noted that the FEBs carry the COOP activities forward and, although the boards operate under tenuous conditions, their outreach is invaluable. Similar to most of the opinions expressed regarding FEMA's work with the FEBs, the Seattle FEB chair said that FEMA has displayed active leadership and has proven to be a good connection for sharing information.

The Oklahoma FEB response to the bombing of the Oklahoma City Murrah Federal Building on April 19, 1995, illustrates the role of some of the boards in aiding emergency response.
The board staff knew all of the agencies in the Murrah Building; the home telephone numbers of critical staff; the city, county, and state principals in Oklahoma City; and which federal agencies were available to provide immediate relief and support. According to the Oklahoma executive director, with the information the FEB was able to provide and a blueprint of the Murrah Building, the first responders were able to determine where they might find more people after the bombing. The FEB staff also played a role in providing support to the victims and families of those who died in the bombing through activities such as arranging counseling. In addition, shortly after the disaster the Oklahoma FEB hosted a meeting with the Vice President in which local agency leaders discussed what worked well and what needed attention in recovering from the disaster. Hurricanes Katrina and Rita represented huge disasters in the history of our nation, and according to a FEMA official, through these catastrophes the New Orleans FEB’s executive director established and maintained an essential communication link between FEMA’s Office of National Security Coordination (ONSC) and OPM. A FEMA official noted that many federal agencies, specifically smaller agencies or agencies with limited resources, were better prepared because of the coordination, collaboration, training, and resource sharing the New Orleans FEB was able to provide. The New Orleans FEB executive director also became part of the nation’s first federal agency COOP and Reconstitution Team, made up of representatives from the New Orleans and Dallas-Fort Worth FEBs, GSA, NARA, OPM, and FEMA. Additionally, following the interruption of communications and loss of contact with federal leaders, the executive director was able to work through ONSC to locate and reestablish contact with all members of the FEB Policy Committee at their alternate sites, beginning the reconstitution of the New Orleans FEB. The FEB served as a conduit for information between Washington and the representative local agencies, and the Policy Committee was able to provide status updates to identify common needs or problems that agency leaders were facing that required expedited assistance to resolve. According to a FEMA official, the lessons learned during the conference calls with the New Orleans FEB Policy Committee following Hurricane Katrina allowed for better national response and coordination during Hurricane Rita. The New Orleans FEB executive director reported that part of her role during Hurricane Katrina was to raise awareness that many of the essential personnel of the federal workforce in New Orleans had no housing and, therefore, were not able to return to work. Eventually, essential federal and local workers and members of the New Orleans police and fire departments and their families were housed aboard ships. As another example of FEB support following hurricanes Katrina, Rita, and Wilma, FEMA Region V put into place a temporary Chicago call center that was scheduled to open in early September 2005. The call center was created in response to the projected volume of calls from victims of the disasters to enable FEMA to more effectively and rapidly communicate with them. Because of the requirement that call center staff must be fingerprinted and have security clearances, federal employees were the only ones who could immediately meet FEMA’s need to staff the center. 
The Chicago FEB executive director coordinated with agency officials in soliciting nearly 300 federal employees who were detailed to the center while negotiations were being conducted with a contractor who would then backfill these positions. According to FEMA and the Chicago FEB, the effort in sharing federal personnel was highly successful. During nonemergency but disruptive events, such as political conventions or rallies, the FEBs in the affected areas have helped to contain the potential disturbance for federal agencies’ operations. For example, the FEB representatives from Boston and New York City said their boards played a role during the national political conventions held there in the summer of 2004. In preparation for the events, OPM conducted a series of emergency preparedness seminars for local agency representatives through the FEBs in both cities. The sessions provided information on emergency planning and human resource flexibilities available to agencies for use in emergency situations and during major public events and were designed to prepare all federal agencies for emergencies, both natural and man-made. In addition, OPM gave the Boston FEB vice chair and the New York City chair onetime authority during the event to make decisions regarding the nonemergency workforce should that become necessary. As another example, during the immigration rallies in the summer of 2006 in Chicago, the Chicago FEB reported that it was communicating with the Federal Protective Service, which shared security information with the board. The Chicago FEB was able to pass this information on to the local agencies so employees could prepare and make alternative travel arrangements since some streets were closed. The distinctive characteristics of the FEBs within the federal government help to explain the key challenges the boards face in providing emergency support services. Factors including the boards’ lack of a defined role in national emergency support structures, their accountability framework, and the differences in their capacities present challenges in providing a needed level of emergency support across the FEB service areas. According to several FEMA officials we interviewed, the FEBs could carry out their emergency support role more effectively if their role was included in national emergency management plans. FEMA officials from two different regions with responsibility for emergency activities in 11 states said they felt the boards could be used more effectively and that they add value to the nation’s emergency operations. They agreed with several of the FEB executive directors we interviewed who felt the boards lacked recognition within the federal government’s emergency response structure and that their value in emergency support was often overlooked by federal agency officials unfamiliar with their capabilities. A FEMA regional director noted that it is very important that the FEB emergency support role is understood, and he believed including the boards in emergency management plans was an opportunity to communicate the role of the FEBs and how they could contribute in emergencies involving the federal workforce. The FEMA officials provided examples of areas where the FEBs could support the existing emergency response structure and where the boards’ role could be defined in emergency management plans. 
For example, while FEBs are not first responders, the National Response Plan's emphasis on local emergency response suggests using the existing local connections and relationships established by the FEBs. The National Response Plan is also intended to provide a framework for how federal departments and agencies will work together and coordinate with state, local, tribal, private sector, and nongovernmental organizations during incidents through the establishment of several multiagency coordination structures. Among other activities, these coordination structures are responsible for maintaining situational awareness, information sharing, and communications; coordinating internal operations; and coordinating among the different entities. The FEMA officials agreed that the FEBs could provide support to the existing emergency response structure via these multiagency coordination centers, given the FEBs' connections and knowledge of their local communities. The boards could provide real-time information to the centers and have access to status reports that they could share with high-level federal officials within their service areas during an emergency affecting the federal workforce.

FEMA officials had specific suggestions for where formal inclusion of the FEBs should be considered in multiagency coordination centers. One official noted that when a disaster threatens the federal community, it would be advantageous for the FEB to have a seat in the joint field office (JFO). A JFO is a temporary federal facility established locally to coordinate operational federal assistance activities to the affected areas during incidents of national significance. Within the JFO, senior federal representatives form a multiagency coordination entity and direct their staff in the JFO to share information, aid in establishing priorities among incidents and associated resource allocation, and provide strategic coordination of various federal incident management activities. The reasoning behind the suggestion to include the FEBs was that the boards have knowledge of the departments and agencies in their cities, making them able to assess the status of the local federal community affected by the disaster. According to the same official, another place for the FEBs to contribute that merits consideration is the regional response coordination center, which coordinates regional response efforts, establishes federal priorities, and implements local federal program support until a JFO is established. FEMA officials also suggested that the FEBs could maintain the vital records related to COOP, such as alternate COOP sites, phone numbers, and emergency contacts. FEMA officials proposed that FEMA could provide technical assistance to the FEBs to develop a COOP directory format containing the specific information for their member agencies, while the FEBs would be responsible for maintaining, updating, protecting, and distributing the directory. FEMA officials also suggested that it may be helpful for the FEBs and FEMA to draft a memorandum of understanding that formalizes the role and responsibilities of the FEBs in assisting FEMA with COOP and other emergency activities.

The need for formal agreements on emergency roles and responsibilities has been highlighted in our previous work. For example, in assessing the response to Hurricane Katrina, we recommended that FEMA and the Red Cross clarify their respective roles and responsibilities.
In May 2006, the two organizations entered into a memorandum of understanding that outlines their areas of mutual support and cooperation in disaster response and recovery operations and in performance of their respective roles under the National Response Plan. According to OPM, leadership and oversight of the FEBs is conducted from OPM Headquarters in Washington, D.C. Although the FEB regulations state that the chairs of the FEBs should report to OPM through regional representatives, who were charged with overseeing the activities of their FEBs, an OPM official explained that the regional oversight these regulations refer to is now done from headquarters. Within OPM, the Associate Director for Human Capital Leadership and Merit System Accountability (HCLMSA) supervises the Director for FEB Operations. Within the HCLMSA division, the field services group managers are intended to serve in a liaison and support role with the FEBs in their geographic areas. An OPM official said there are five field service managers who interact with the FEBs in their jurisdictions. While the official said the managers are not expected to provide oversight of FEB activities, they are expected to regularly attend FEB executive board meetings and help coordinate OPM-provided training. Some FEB representatives reported that their OPM field service managers were active in their FEBs, while others said their managers were not. In light of the recent emphasis on systemwide expectations and accountability measures for the boards, many of the FEB representatives we interviewed believed OPM needs to provide additional leadership and feedback to them. The relationship between OPM and the FEBs is complicated, in part because the boards need a certain level of autonomy to address regionally identified issues through projects and programs specific to their localities. More recently, however, particularly with the emergency support expectations for the boards that cut across the FEB system, many of the FEB representatives felt more assistance and feedback from OPM on FEB activities are warranted. Many were frustrated with what they perceived as a lack of priority given to the boards by OPM. For example, some noted that the Director of FEB Operations is a one-person office, which they felt was inadequate to meet the needs of and provide oversight for the 28 boards. Several of the FEB representatives also pointed to a recent incident where the FEB system’s host Web site server, contracted out by OPM, was defaced. Service was not restored to some of the FEB Web sites until several weeks later. The accountability structure for the FEB executive directors poses additional challenges. An OPM official reported that the executive directors are rated by their supervisors of record in their host agencies. In 2004, OPM worked with the FEB executive directors to develop critical performance standards to be used by the FEB chairs to provide input to the host agency supervisors on the performance of the FEB executive directors. Executive directors were asked by OPM to use the standards to solicit input from their FEB chairs for their performance evaluations, although there is no provision to ensure the performance standards are consistently applied among the individual director ratings. Of the 14 selected boards, 5 boards had an arrangement where the performance appraisal was done by the host agency supervisor who received performance appraisal input from the FEB chair. 
Four executive directors reported they were rated by their host agencies with no input from the FEB chairs, while for four of the executive directors, the chair provided the executive director's rating to the host agency. One executive director did not receive a performance appraisal because she was still considered an employee of one agency even though her salary was paid by another agency. Some of the executive directors we interviewed said that under their current accountability structure, they answer to OPM, the chair or policy committee of the FEB, and the board's host agency, which generally pays their salaries. When asked about accountability, some of the executive directors said they would follow the host agency's guidance given that the host agency paid their salaries. Others said they would answer primarily to their chairs or policy committees. One of the FEB representatives noted that he believes the current performance system does not reward high-performing FEBs.

As we reported in 2004, the context in which the FEBs operate, including varying capacities among the boards for emergency preparedness efforts, could lead to inconsistent levels of preparedness across the nation. Figure 2 illustrates that the service areas of the FEBs differ substantially in the size of their formal jurisdictions, and table 1 shows how the number of federal employees and agencies served by each board varies. These factors may affect a board's capacity to provide emergency support. For example, FEB representatives from Chicago and Los Angeles said their locations in large cities made providing FEB emergency support services for their service areas more difficult. The Los Angeles executive director, for example, noted that the Los Angeles FEB primarily serves a six-county area in the immediate vicinity of Los Angeles with notable transportation problems. This makes in-person meetings a challenge. The service area includes approximately 120,000 federal employees from 230 different agencies. Yet the executive director noted that the FEB's staffing is similar to that of FEBs covering much smaller areas and numbers of employees and agencies. The Cincinnati FEB, in contrast, covers approximately 15,000 federal employees from 90 different agencies. Appendix III lists the 28 FEBs along with their host agencies.

There is no consistent approach to funding the FEBs nationwide, and the levels of support provided to the boards in terms of operating expenses, personnel, and equipment vary considerably. For example, some of the executive directors reported they received an operating budget allocation for travel and supplies, while others said they received nothing or very little in this regard. Some FEB representatives we interviewed were skeptical as to whether any standardization of emergency activities could be implemented without adequate and consistent levels of funding and resources across the FEB system. The FEBs' dependence on host agencies and other member agencies for their resources also creates uncertainty for the boards in planning and committing to provide emergency support services. The lack of funding in a particular year may curtail the amount of emergency support an individual board could provide. Many of the FEB representatives characterized the board funding structure as dysfunctional, and some expressed concern that their activities will be further affected by reduced agency funding and resource support as agency budgets grow more constrained.
When boards' funding is precarious, the executive directors spend the majority of their time soliciting resources from member agencies, without adequate time or resources to focus on mission-related activities. Federal agencies that have voluntarily funded FEB positions in the past have begun to withdraw their funding support. Of our 14 case study boards, representatives from 3 said their host agencies had recently withdrawn funding for their boards' executive assistant positions. Several FEB representatives felt the uncertainty about the funding of the FEBs raises questions as to the survivability of the system and its ability to fulfill its emergency support function.

Recognizing that the capacities of FEBs vary across the nation, OPM established an internal working group in August 2003 to study the strengths and weaknesses of the boards. According to OPM, the working group reviewed funding and staffing levels for possible recommendations of funding enhancements in challenged areas and developed several products to assist OPM in communicating the value of the FEBs to agencies. In 2006, OPM proposed a three-part plan, including restructuring the network of 28 boards to try to address the resource issues of some of the boards by combining them with other boards. Federal population numbers and geographic proximity of existing FEBs were used to develop the proposed structure, which consolidated the 28 boards into a system of 21 boards. The majority of the FEBs did not support the restructuring component of the plan, asserting that the proposal was not well developed and stressing the importance of maintaining local presence for FEB operations and activities in the current locations. OPM decided not to pursue the approach. However, OPM officials said they will revisit restructuring the FEB network if resource issues remain a problem.

Different options for FEB funding have been considered in the past. For example, in 1988, OPM developed a budget proposal to include base dollars and full-time equivalents in its fiscal year 1990 budget submission to fully fund the FEBs. Ultimately, OPM reported only receiving a fraction of the money requested, and OPM did not request additional funding for the next fiscal year. OPM has not requested funding of this type for the FEBs since that time. The current funding arrangements continue to emphasize local agency responsibility, whereby usually one major department or agency in each city provides funding for an executive director and an assistant, although other federal agencies can contribute. OPM officials said they continue to support local agency commitment to the FEBs. From OPM's vantage point, the boards that have developed strong relationships with their partner agencies have more success securing the necessary resources within existing funding arrangements. Although OPM officials stated they play an integral role in facilitating discussions to resolve FEB funding issues, some of the FEB representatives reported that OPM told them that if any of the FEBs encountered funding difficulties, the boards were on their own to solve the problems since the FEBs were unwilling to accept OPM's restructuring proposal. The problem of unstable resources is one that could affect any networked organization similar to the FEBs that relies largely on voluntary contributions from members.
Agencies may be reluctant to contribute resources to an initiative that is not perceived as central to their responsibilities, especially during periods of budgetary constraints. This reluctance may, however, limit the long-term investment of the federal government in working more collaboratively. For example, we recently reported on the Joint Planning and Development Office (JPDO), a congressionally created entity designed to plan for and coordinate a transformation from the current air traffic control system to the next generation air transportation system by 2025. Housed within the Federal Aviation Administration, JPDO has seven federal partner agencies. One of the greatest challenges that JPDO officials cited was creating mechanisms to leverage partner agency resources. Although leveraging efforts have worked well so far, we noted that JPDO could face difficulties in securing needed agency resources if the priorities of the partner agencies change over time. This has been a long-standing problem for the FEBs as well. In a 1984 report, we concluded that although the FEBs have contributed to improved field management, the future of the boards was uncertain because funding for staff and board participation had declined. Similar to the boards’ current situation, in 1983, five FEBs lost all or part of their staff support as agency budgets grew more constrained. In Canada, the federal government has adopted a mix of both central funding and departmental contributions for its regional coordinating entities. Regional federal councils, the Canadian equivalent of the FEBs, are sustained by a balance between central funding and departmental contributions at the local level. The role of the councils was the subject of in-depth consideration by Canadian government officials in 1996, and at that time, the Treasury Board increased the level of support it provided to the councils, including central funding to support staff positions and some operating expenses. A 2000 report on the councils concluded that a balance between central funding and departmental contributions at the local level may well be the model best suited to financially sustain the councils. Although OPM and the FEBs are now involved in a strategic planning effort, OPM has not to date considered the resource requirements to support an expanded emergency support role for the FEBs. Yet, as we have pointed out in our previous reports, a strategic plan should include a description of the resources—both sources and types—that will be needed for the strategies intended to achieve the plan’s goals and objectives. Despite the challenges the FEBs face in providing emergency support, their potential to add value to the nation’s emergency preparedness and response is particularly evident given an event like pandemic influenza. The distributed nature of a pandemic and the burden of disease across the nation dictate that the response will be largely addressed by each community it affects. Using their established and developing community relationships to facilitate communication and coordination with local federal agency leaders and state and local governments, FEBs are well positioned to assist in pandemic preparedness and response. In the current pandemic planning stages, many of the selected FEBs were already acting as conveners, hosting pandemic influenza preparedness events, such as briefings and training and exercises, and were considering how federal agencies could share resources during a pandemic. 
According to the Homeland Security Council, the distributed nature of a pandemic, as well as the sheer burden of disease across the nation, means that the physical and material support states, localities, and tribal entities can expect from the federal government will be limited in comparison to the aid it mobilizes for geographically and temporally bounded disasters like earthquakes and hurricanes. Unlike those incidents that are discretely bounded in space or time, an influenza pandemic could spread across the globe over the course of months or over a year, possibly in waves, and would affect communities of all sizes and compositions. While a pandemic will not directly damage physical infrastructure, such as power lines or computer systems, it threatens the operation of critical systems by potentially removing the essential personnel needed to operate them from the workplace for weeks or months. The Homeland Security Council issued two documents to help address the unique aspects of pandemic influenza. The November 2005 National Strategy for Pandemic Influenza is intended to guide the overall effort to address the threat and provide a planning framework consistent with the National Security Strategy and the National Strategy for Homeland Security. This planning framework is also intended to be linked with the National Response Plan. In May 2006, the Homeland Security Council also issued the Implementation Plan for the National Strategy for Pandemic Influenza. This plan lays out broad implementation requirements and responsibilities among the appropriate federal agencies and also describes expectations for nonfederal stakeholders, including state and local governments, the private sector, international partners, and individuals. Further, all federal agencies are expected to develop their own pandemic plans that, along with other requirements, describe how each agency will provide for the health and safety of its employees and support the federal government’s efforts to prepare for, respond to, and recover from a pandemic. The Implementation Plan for the National Strategy for Pandemic Influenza states that the greatest burden of the pandemic response will be in the local communities. Local communities will have to address the medical and nonmedical effects of pandemic influenza with available resources. The implementation plan maintains that it is essential for communities, tribes, states, and regions to have plans in place to support the full spectrum of their needs over the course of weeks or months, and for the federal government to provide clear guidance on the manner in which these needs may be met. As pandemic influenza presents unique challenges to the coordination of the federal effort, joint and integrated planning across all levels of government and the private sector is essential to ensure that available national capabilities and authorities produce detailed plans and response actions that are complementary, compatible, and coordinated. Research has shown that systems like the FEBs have proven to be valuable public management tools because they can operate horizontally, across agencies in this case, and integrate the strengths and resources of a variety of organizations in the public, private, and nonprofit sectors to effectively address critical public problems, such as pandemic influenza. Government leaders are increasingly finding that using traditional hierarchical organizations does not allow them to successfully address complex problems.
As a result, they are beginning to explore the use of collaborative networks that reach across agencies and programs. The boards bring together the federal agency leaders in their service areas and have a long history of establishing and maintaining communication links, coordinating intergovernmental activities, identifying common ground, and building cooperative relationships. Documents supporting the establishment of the FEBs noted that it is important that field executives have a broader picture of government and a general understanding of the interrelationships of government activity. The boards also partner with community organizations and participate as a unified federal force in local civic affairs. This connection to the local community could play a role in pandemic influenza preparedness and response as predisaster relationship building and planning are often the cornerstones of incident management. Many of the selected FEBs cultivated relationships within their federal, state, and local governments and their metropolitan area community organizations as a natural outgrowth of their general activities. For example, FEB activities, such as the Combined Federal Campaign and scholarship programs, brought the boards into contact with local charities and school boards. In addition, through activities such as hosting emergency preparedness training or through participation in certain committees, some of the selected FEBs reported a connection with emergency management officials, first responders, and health officials in their communities. Through their facilitation of COOP exercises and training, the FEBs bring together government leaders, health officials, and first responders in a venue where the parties can share ideas, discuss plans, and coordinate approaches. The San Francisco FEB executive director and chair said they attend FEMA’s Regional Interagency Steering Committee meetings, which bring them into contact with federal, state, and local government emergency management partners. The Minnesota FEB plays an active role in both the Association of Minnesota Emergency Managers (AMEM) and the Metropolitan (Twin Cities) Emergency Managers Association. The Minnesota FEB executive director, for example, serves on the AMEM board of directors as federal agency liaison, a newly created partnership with the organization. As another example, the Oklahoma FEB partnered with the fire departments in Oklahoma City and Tulsa to provide site visits to the federal agencies there to help strengthen emergency preparedness plans and update evacuation and shelter-in-place plans. The executive director said the site visits also provided agency leaders with the opportunity to interact with the most likely first responders in the event of an emergency and to obtain valuable information to include in emergency preparedness plans. As with the boards’ emergency support role in general, some of the FEB representatives envisioned their boards taking a more active role in pandemic influenza preparedness and response than others did. While some FEB representatives stressed the unique characteristics of the boards that position them to help prepare for and respond to pandemic influenza, others noted the boards’ limited staffing and resources. One FEB executive director remarked that although the boards have no real authority, they are valuable because of the community relationships they have forged and their unique ability to coordinate resources and communicate.
As previously discussed, several representatives were concerned, however, about the role the FEBs could play in the event of a large-scale emergency, such as an influenza pandemic. In terms of current pandemic planning, many of the selected FEBs were building capacity for pandemic influenza response within their member agencies and community organizations by hosting pandemic influenza training and exercises. The Implementation Plan for the National Strategy for Pandemic Influenza highlights training and exercises as an important element of pandemic planning. For example, 13 of the 14 selected FEBs were involved in pandemic influenza-related activities that ranged from informational briefings to coordinating pandemic exercises, some that included nonprofit organizations, the private sector, and government. The one exception was the New Orleans FEB, where the executive director said the board is still too heavily involved with Hurricane Katrina recovery to focus on helping agencies to collaborate on pandemic influenza preparedness. A number of the selected FEBs have held pandemic influenza tabletop exercises. A pandemic influenza tabletop exercise would be based on a fictitious account of a plausible outbreak of pandemic influenza with scenarios constructed to facilitate problem solving and to provoke thinking about gaps and vulnerabilities. The Boston FEB, together with the Massachusetts Emergency Management Agency and FEMA, held a pandemic influenza tabletop exercise in November 2006. The exercise objectives included goals such as helping to increase the awareness of federal, state, local, and tribal government agencies of the requirement to incorporate pandemic influenza procedures into COOP planning and identifying special considerations for protecting the health and safety of employees and maintaining essential government functions and services during a pandemic outbreak. In addition, the Baltimore FEB hosted a pandemic influenza exercise on November 1, 2006, facilitated by FEMA Region III and the Maryland Emergency Management Agency. The Seattle FEB, with the assistance of FEMA and the City of Seattle, sponsored an all-day conference in October 2006 called Pandemic Flu: Get Smart, Get Ready! Conversation Tools and Tips. The Minnesota FEB has been a leader among the boards in pandemic influenza planning. Using a tabletop exercise it created, the board hosted its first pandemic influenza exercise in February 2006, with a follow-up exercise in October 2006. The October exercise included approximately 180 participants from 100 organizations within federal agencies, state and local government, and the private sector. Figure 3 illustrates the breadth of participation in the exercises, including key infrastructure businesses such as power and telecommunications. The Minnesota FEB executive director noted that Minnesota has excellent state and local government relationships, which help to facilitate planning of this nature. Examples of partnerships the board has with state and local entities include those with the State of Minnesota Division of Homeland Security and Emergency Management, the Minnesota Department of Health, the St. Paul Chamber of Commerce, and the American Red Cross. The Implementation Plan for the National Strategy for Pandemic Influenza emphasizes that government and public health officials must communicate clearly and continuously with the public throughout a pandemic. The plan recognized that timely, accurate, credible, and coordinated messages will be necessary. 
According to many of the FEB representatives we interviewed, the communications function of the boards is a key part of their activities and could be an important asset for pandemic response. For example, when asked about the role they envision the FEBs playing in the response to a pandemic, the Dallas-Fort Worth FEB representatives said that because the board is viewed by its member agencies as a credible source of information, the board’s role should be to coordinate communications among member agencies. They gave the example of the Department of Health and Human Services working through the board to disseminate medical information to their local community. In addition to their communications role, during pandemic influenza the FEBs have the potential to broaden the situational awareness of member agency leaders and emergency coordinators and provide a forum to inform their decisions, similar to what the FEBs provide for other hazards, such as inclement weather conditions. A FEMA official noted that FEBs have vital knowledge of the federal agencies in their jurisdictions, which can provide valuable situational awareness to community emergency responders. Some of the FEBs were also considering the role they can play in assisting member agencies by supporting human capital functions, such as supporting the federal workforce and coordinating the deployment of personnel among member agencies as may be appropriate. Several FEB representatives said, for example, that they were considering how they could provide assistance in coordinating support to federal agencies responding to pandemic influenza, such as addressing personnel shortages by locating available resources among member agencies. Other FEB representatives we interviewed reiterated a theme that even the critical federal employees in the field can be left to fend for themselves when disasters strike their communities. Consequently, they are not able to handle the emergency issues of the federal government. For example, according to the New Orleans executive director, in New Orleans after Hurricane Katrina the oil and gas workers had their companies as powerful advocates in securing housing for them so they could resume working. She reported that in sharp contrast, there was no entity nationally that was an advocate for the local federal workforce to ensure the speedy reconstitution of essential services. In the majority of cases, she said that essential federal employees queued up for temporary housing in long lines. She intervened to bring attention to the need for expedited temporary housing for federal employees, who were responsible for providing essential functions, but who were also victims of the disaster. To avoid a similar situation during pandemic influenza, the Minnesota and Oklahoma FEBs are trying to negotiate with their states to create memorandums of agreement between the states and the federal agencies, represented by the FEBs. Their objectives are to identify how medical supplies and vaccines from the Advanced Pharmaceutical Cache (APC) or the Strategic National Stockpile, which will be distributed by the states, will be dispersed to essential federal government employees in the event of a pandemic or bioterrorist attack. 
To accomplish this, the FEBs are working with their federal members to apply the states’ guidelines for vaccine priorities to the federal workforce in their areas of service so that essential federal employees, such as air traffic controllers, federal law enforcement officers, and correctional facilities staff, are appropriately integrated into the state vaccine distribution plans. They also want to identify federal agencies and their resources that can augment the states’ operation of the mass vaccine dispensing sites. The Minnesota FEB has inventoried all of the federal agencies within its jurisdiction and feels it has a good idea of the resources that will be needed. According to the Minnesota FEB executive director, however, Minnesota currently does not have enough medical supplies, pharmaceuticals, and vaccines in its APC to cover the emergency personnel of the federal government in Minnesota, nor does it have the resources for purchasing these supplies.

Achieving results for the nation increasingly requires that federal agencies work with each other and with the communities they serve. The federal executive boards are uniquely able to bring together federal agency and community leaders in major metropolitan areas outside Washington, D.C., to meet and discuss issues of common interest, such as preparing for and responding to pandemic influenza. As we reported in 2004, such a role is a natural outgrowth of general FEB activities and can add value in coordinating emergency operations efforts. Several interrelated issues limit the capacity of FEBs to provide a consistent and sustained contribution to emergency preparedness and response. These issues may present limitations to other areas of FEB activities, not solely to emergency preparedness. Among them are the following:
- The role of the FEBs in emergency support is not defined in national emergency guidance and plans.
- Performance standards, for which the boards will be held accountable, with accompanying measures, are not fully developed for FEB emergency support activities.
- The availability of continuing resource support for the FEBs is uncertain, and the continued willingness of host and member agencies to commit resources beyond their core missions may decrease, especially in times of increasing budgetary constraints.

While the FEBs and FEMA have established important working relationships in a number of locations, these have, to date, been largely informal. As FEMA officials have noted, including the FEBs in federal emergency guidance and plans provides an opportunity for the FEBs to leverage the network of community relationships they have already established. OPM and FEMA could formalize the FEBs’ contribution to FEMA’s emergency preparedness and response efforts through a memorandum of understanding, or some similar mechanism, between FEMA and the FEBs, and a formal designation of the FEB role in FEMA guidance. Likewise, recognition of the FEB emergency support role in the national emergency structure could help the boards carry out their emergency support role more effectively by underscoring the value they add, which may be overlooked by federal agency officials unfamiliar with their capabilities. The ability of FEBs and organizations like them to fulfill important collaborative national missions is hampered if they are dependent on the willingness of host agencies to provide support.
OPM has determined that the FEBs should have an important and prominent role in emergency support and envisions a set of emergency support activities across the FEB system. The current structure of host agencies and in-kind contributions puts at risk the achievement of that goal. OPM’s work on a strategic plan with the FEBs affords the opportunity to complete the development of clear expectations for the FEBs in emergency operations and to develop appropriate performance measures for these expectations. OPM also has an opportunity, as part of this planning process, to consider alternative funding arrangements that would better match the roles envisioned for the FEBs. As noted earlier, a strategic plan should describe how goals and objectives are to be achieved, including how different levels of resources lead to different levels of achievement and the sources of those resources. Consistent with OPM’s ongoing efforts in this regard, we recommend that the Director of OPM take the following four actions to help improve the ability of the FEBs to contribute to the nation’s emergency preparedness efforts, particularly given the threat of pandemic influenza:
- Once OPM completes defining emergency support expectations for the FEBs, OPM should work with FEMA to develop a memorandum of understanding, or some similar mechanism, that formally defines the FEB role in emergency planning and response.
- OPM should initiate discussion with DHS and other responsible stakeholders to consider the feasibility of integrating the FEB emergency support responsibilities into the established emergency response framework, such as the National Response Plan.
- OPM should continue its efforts to establish performance measures and accountability for the emergency support responsibilities of the FEBs before, during, and after an emergency event that affects the federal workforce outside Washington, D.C.
- As an outgrowth of the above efforts and to help ensure that the FEBs can provide protection of the federal workforce in the field, OPM, as part of its strategic planning process for the FEBs, should develop a proposal for an alternative to the current voluntary contribution mechanism that would address the uncertainty of funding sources for the boards.

We provided the Director of OPM and the Secretary of Homeland Security a draft of this report for review and comment. We received written comments from OPM, which are reprinted in appendix IV. While not commenting specifically on the recommendations, OPM stated that it understands the importance of the issues raised in the report, noting that it is building the boards’ capacity by developing a national FEB strategic and operational plan that will ensure consistent delivery of services across the FEB network. By documenting results and creating a consistent accountability mechanism, OPM said it is building a strong business case through which it can address the resources FEBs need to continue operations. OPM also stated that it believed institutionalized relationships with strategic partners like FEMA can demonstrate FEBs’ business value and help address ongoing funding issues. In comments received from FEMA by e-mail, FEMA concurred with the findings of the report and welcomed the opportunity to work with OPM to develop a memorandum of understanding that more formally defines the FEB role in emergency planning and response.
FEMA also recognized the current personnel and budget limitations of the FEBs in supporting emergency planning and response activities and said that a proposal for an alternative to the current FEB voluntary contribution mechanism should assist with providing an improved capability for the boards. We are sending copies of this report to the Director of OPM and the Secretary of Homeland Security and appropriate congressional committees. We will also provide copies to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-6806 or steinhardtb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V.

The objectives of our review were to identify the actions the federal executive boards (FEB) have taken to fulfill their emergency preparedness and response roles and responsibilities, describe the key challenges facing the FEBs in fulfilling these roles, and evaluate the extent to which the FEBs can contribute to emergency preparedness and response to pandemic influenza. To address these objectives, we reviewed FEB annual reports and academic literature as well as prior GAO reports about leveraging collaborative networks. Additionally, we reviewed the National Response Plan, Implementation Plan for the National Strategy for Pandemic Influenza, and the Joint Field Office Activation and Operations Interagency Integrated Standard Operating Procedure to assess the feasibility of FEB involvement in those plans. We interviewed Office of Personnel Management (OPM) officials, and we consulted with three GAO field office managers who are members of their local FEBs to gain a greater understanding of FEB activities. We selected 14 of the 28 FEBs for more detailed review. Atlanta, Baltimore, Boston, Chicago, Dallas-Fort Worth, Denver, Los Angeles, New York City, Oklahoma, Philadelphia, San Francisco, and Seattle were selected because they are 12 of the 15 largest FEBs in terms of number of federal employees served. Minnesota was selected because it is considered a leader in pandemic influenza planning, and New Orleans was selected because of its recent emergency management experience with Hurricane Katrina. GAO headquarters and field office teams interviewed at least two key FEB representatives, including the chair or vice chair and the executive director from the 14 selected boards. Additionally, we obtained and reviewed FEB documents, such as annual reports, monthly activity reports, minutes, and correspondence, at the selected sites. Because our selection of FEBs was nonprobabilistic, the results of our review of these selected FEBs are not generalizable to all other FEBs. However, the challenges and issues that were identified in our coverage of half of all FEBs, along with our review of materials concerning the FEBs as a group, suggest that these matters are not limited to just the selected FEBs. OPM provided data on the counties of jurisdiction for all of the boards as well as their host agencies and the number of federal and military employees and agencies in each service area. We determined these data were sufficiently reliable for the purposes of this report. We also interviewed Federal Emergency Management Agency (FEMA) officials at their headquarters in Washington, D.C.
FEMA serves as the Department of Homeland Security’s designated lead agent for continuity of operations (COOP) plans for the FEBs’ executive branch members. Because the FEBs and FEMA collaborate on COOP activities in the field, we interviewed the FEMA regional directors in regions V and VI based in Chicago, Illinois, and Denton, Texas, respectively, to obtain an outside perspective of the boards and their role in emergency operations. Our analysis of the capacity of FEBs to support emergency preparedness is drawn from our collective review and assessment of information and documents provided to us by officials from OPM and FEMA and the FEB representatives at the selected FEBs as well as our examination of the relevant literature described above. Our review was conducted from March 2006 through February 2007 in accordance with generally accepted government auditing standards.

Appendix II: Office of Personnel Management Document Describing the FEB Role and Responsibilities in Emergency Situations
ROLE: PROVIDE EMERGENCY LIAISON AND COMMUNICATIONS - FEBs stand ready to provide timely and relevant information to support emergency preparedness and response coordination.
- FEBs will serve as a Federal liaison for State and Local emergency officials.
- FEBs will establish notification networks and develop a protocol (Communications Plan) to be used in nonemergency and emergency situations.
- FEBs will disseminate relevant information received from OPM/DC regarding emergency preparedness information (memorandums from OPM officials, emergency guides, training opportunities, information from other departments/agencies, etc.)
- FEBs will identify a core group of Federal leaders in each community who will meet regularly to discuss planned courses of action (delayed arrival, early dismissal, shelter in place, emergency personnel only, etc.) in the event of an emergency.
- FEBs will survey and/or facilitate training for member agencies regarding their roles and responsibilities related to occupant emergency plans.
- FEBs will facilitate training on Continuity of Operations (COOP), and other emergency preparedness topics, i.e., shelter in place, triage, onsite responder, etc. for Federal agencies.
- FEBs will assess local emergency situations in cooperation with Federal, State and Local officials.
- FEBs will activate established notification system for transmission of local emergency information, as prescribed by the FEB’s protocol (Communications Plan).
- FEBs will provide problem resolution assistance as appropriate, to include identifying Federal resources which may be available to assist the community in responding to, or recovering from, an emergency.
- FEBs relay local emergency situation information, by way of periodic reports to the appropriate authorities, to include, but not limited to: OPM/DC, FEB members, media, State and Local government authorities.
- FEBs will disseminate information received from OPM/DC regarding emergency information at the national level – decision on employee work status, information from other departments/agencies, etc.
- FEBs alert those responsible for implementing the Occupant and Agency Emergency Plans and serve as a redundant (back-up) communication vehicle to ensure notification.
In addition to the contact named above, key contributors to this report were William Doherty, Assistant Director; Dominic Barranca; Scott Behen; Kathleen Boggs; Deirdre Brown; Beverly Burke; Jimmy Champion; Betty Clark; Derrick Collins; Daniel Concepcion; Amber Edwards; Richard Guthrie; Bonnie Hall; Charles Hodge; Aaron Kaminsky; Judith Kordahl; Susan Mak; Signora May; Samuel Scrutchins; Gabriele Tonsil; George Warnock; and Daniel Zeno. In addition, William Bates, Thomas Beall, David Dornisch, and Donna Miller provided key assistance. | The Office of Personnel Management (OPM), which provides direction to the federal executive boards (FEBs), is now emphasizing that in the post-9/11 environment, the boards have a transformed emergency support role. The report discusses the boards' emergency preparedness roles and responsibilities and their potential role in preparing for and responding to pandemic influenza. GAO selected 14 of the 28 FEBs for review because they coordinate the greatest number of federal employees or had recent emergency management experience. Located outside Washington, D.C., in 28 cities with a large federal presence, the federal executive boards (FEB) are interagency coordinating groups designed to strengthen federal management practices, improve intergovernmental relations, and participate as a unified federal force in local civic affairs. Created by a Presidential Directive in 1961, the boards are composed of the federal field office agency heads and military commanders in their cities. Although membership by agency heads on the boards is required, active participation is voluntary in practice. The boards generally have staff of one or two full-time personnel, including an executive director. The FEBs have no congressional charter and receive no congressional appropriation but rather rely on voluntary contributions from their member agencies. Although the boards are not intended to be first responders, the regulations that guide the FEBs state that emergency operations is one of their functions. The Office of Personnel Management (OPM) and the FEBs have designated emergency preparedness, security, and employee safety as a core function of the boards and are continuing to work on a strategic plan that will include a common set of performance standards for their emergency support activities. All of the selected FEBs were performing emergency activities, such as organizing preparedness training, and FEB representatives and Federal Emergency Management Agency (FEMA) officials reported that these activities mutually advanced their missions. The FEBs, however, face key challenges in carrying out their emergency support role. First, their role is not defined in national emergency plans. According to several FEMA officials, FEBs could carry out their emergency support role more effectively if it was included in national emergency management plans. The framework within which the FEBs operate with member agencies and OPM also poses challenges in holding the boards accountable for their emergency support function. In addition, the funding sources for the boards are uncertain, affecting their ability to plan for and commit to providing emergency support services. Despite these challenges, the nature of pandemic influenza, which presents different concerns than localized natural disasters, makes the FEBs a particularly valuable asset in pandemic preparedness and response. 
Many of the selected boards had already hosted pandemic preparedness events, which included their member agencies and local community organizations. With the greatest burden of pandemic response resting on the local communities, the FEBs' outreach and their ability to coordinate across organizations suggest that they may be an important resource in preparing for and responding to a pandemic. |
The Congress and others have been addressing the question of how to strengthen the acquisition workforce since 1974 when the OFPP was created to establish governmentwide procurement policies for executive agencies. One of the primary responsibilities of this office and its Federal Acquisition Institute (FAI) is to strengthen acquisition workforce training. The concern about the quality of the acquisition workforce deepened in the 1990s, as it became clear that the government was experiencing significant contracting failures partly because it lacked skilled personnel to manage and oversee contracts. There was also concern that program managers and other personnel integral to the success of the acquisition process were only marginally involved with the contracts. Two of the most significant steps taken in this regard were the passage of the Defense Acquisition Workforce Improvement Act in 1990 and the Clinger-Cohen Act in 1996. The Defense Acquisition Workforce Improvement Act, among other things, provided specific guidance on DOD’s acquisition workforce definition. The Clinger-Cohen Act required civilian agencies to establish acquisition workforce definitions. Those definitions were to include contract and procurement specialist positions and other positions “in which significant acquisition-related functions are performed.” The Clinger-Cohen Act also required civilian agencies to collect standardized information on their acquisition workforce and establish education, training, and experience requirements that are “comparable to those established for the same or equivalent positions” in DOD and the military services. Table 1 provides more details on this act and other legislation and federal agency initiatives. OFPP Policy Letter 97-01 directs executive agencies to establish core training for entry and advancement in the acquisition workforce. Agencies normally establish specific core training required to meet the standards for certification in each career field in their acquisition workforce (e.g., contracting officers, CORs, and COTRs). For contracting officers, agencies usually establish several warrant levels, with specified contracting authority for each level. Agencies issue permanent warrants only to contracting officers who have completed the core training required for each warrant level and who have the necessary work experience and formal education. Because contracting officers’ warrant levels generally correspond to their grade levels, employees’ career development and advancement is dependent on attending and passing required core training courses. The OFPP policy letter also established continuing education requirements for contract specialists and contracting officers. DOD includes a wide variety of disciplines—ranging from contracting, to technical, to financial, to program staff—in its acquisition workforce definition, but civilian agencies have employed narrower definitions that are largely limited to staff involved in awarding and administering contracts. Having a broader definition is important because it is one method to facilitate agencies’ efforts to ensure that training reaches all staff integral to the success of a contract. While most civilian agencies acknowledge that the acquisition process requires the efforts of multiple functions and disciplines beyond those in traditional contracting offices, few have broadened their definitions of the acquisition workforce to include them. 
Officials at two agencies we reviewed said that they had not broadened their definitions because officials responsible for managing the acquisition workforce did not have management responsibility for or control of the training of individuals in offices other than their own. DOD is required by the Defense Acquisition Workforce Improvement Act to include, at a minimum, all acquisition-related positions in 11 specified functional areas in its definition of its acquisition workforce. It is also required to include acquisition-related positions in “management headquarters activities and in management headquarters support activities.” Therefore, DOD’s acquisition workforce includes contracting, program, technical, budget, financial, logistics, scientific, and engineering personnel. DOD uses a methodology, known as the Refined Packard methodology, to identify its acquisition workforce personnel. Using the Refined Packard methodology, DOD now includes personnel in its acquisition workforce from three categories: (1) specific occupations that are presumed to be performing acquisition-related work no matter what organization the employee is in, (2) a combination of an employee’s occupational series and the organization in which the employee works, and (3) specific additions and deletions to the first two categories. DOD is currently coding the positions and employees identified by the Refined Packard methodology into its official personnel systems. DOD components and the military services estimate that the number of personnel included in the acquisition workforce will expand when the coding is completed in October 2002. All the civilian agencies we reviewed include personnel in the contract specialist and purchasing agent job series as specified by the Clinger-Cohen Act. All agencies also include contracting officers and three include CORs and COTRs as required in OFPP’s policy enumerating acquisition-related positions. Every civilian agency includes additional positions in which contracting functions are performed, such as property disposal or procurement clerks. However, only VA and DOE include positions in which acquisition-related functions are performed (i.e., program managers). Table 2 shows how the agencies defined their acquisition workforces. Agencies are aware of the need to expand their definitions to include all positions in which “significant acquisition-related functions are performed,” as required by the Clinger-Cohen Act. To assist agencies in this effort, OFPP Policy Letter 97-01 identified acquisition workforce positions, in addition to contracting and purchasing specialists, to include contracting officers, CORs, and COTRs. Furthermore, OFPP Policy Letter 97-01 stated that the Administrator would “consult with the agencies in the identification of other acquisition related positions.” All agencies include positions other than those enumerated in the Clinger-Cohen Act and OFPP policy, and GSA plans to do so. Specifically:
- VA includes program managers and procurement clerks in its definition.
- DOE includes program managers and property managers in its definition.
- HHS and NASA include procurement clerks in their definitions.
- GSA is identifying and including other acquisition-related positions in its acquisition workforce and expects to include program managers and other positions in the future, but GSA has not established a firm time frame.
NASA asserted that managing a much wider range of acquisition personnel, including “other equivalent positions,” such as CORs and COTRs, would be much more difficult than current practice because agency managers responsible for acquisition workforce training did not have authority over personnel in offices other than theirs to require they take specific training courses. However, HHS, which has CORs and COTRs (which it refers to as project officers) not under control of the acquisition office, established regulations requiring the head of each contracting activity ensure their CORs and COTRs receive specified training. In addition, DOE, which has similar oversight concerns, has established an “umbrella” directive governing acquisition career development. Two offices, the Acquisition Career Development Program office and the Project Management Career Development Program office, monitor the training of employees in their respective career fields. Every agency we reviewed has established specific training requirements for each position identified in their acquisition workforce. The Defense Acquisition Workforce Improvement Act and the Clinger-Cohen Act established similar career management requirements, including education, experience, and training requirements employees must meet to qualify for each acquisition workforce position. These requirements are further defined, for DOD, by DOD regulations and other guidance, and for the civilian agencies by OFPP and the agencies’ own regulations. Two agencies also established training requirements for acquisition-related positions not formally included in their acquisition workforce definitions. The DAU develops curricula, approved by the Under Secretary of Defense (Acquisition, Technology & Logistics), that include descriptions of the education, experience, and core training required to meet the standards for certification in each acquisition career field. In addition, DAU offers assignment-specific training. Annually, advisors from each DOD career field determine whether certification standards and assignment-specific training requirements should be updated and whether training curricula are current. Any changes must be approved by the Director of Acquisition Education, Training, and Career Development before they are published in the DAU catalog. The DAU curriculum includes courses identified by the Under Secretary of Defense (Acquisition, Technology & Logistics) as integral to the education and training of personnel in identified positions. These courses are intended to provide unique acquisition knowledge for specific assignments, jobs, or positions; maintain proficiency; and remain current with legislation, regulation, and policy. They also cover topics such as program management, systems acquisition, construction, and advanced contract pricing. OFPP’s FAI develops training and career development programs for civilian agency acquisition workforce personnel. Specifically, FAI developed the contracting and procurement curriculum for the acquisition workforce, worked closely with DAU in its course development, and coordinated with colleges and universities to identify and develop education programs for the acquisition workforce. In addition, FAI is developing several Web-based courses for various acquisition personnel. All DOD agencies follow the DAU curriculum. Some civilian agencies, including NASA and DOE, also follow the DAU curriculum for the contracting and purchasing functions. 
Other agencies, including GSA and VA, have developed training programs and courses that follow the curriculum established by FAI. While HHS has awarded contracts to teach courses for its own acquisition workforce, the curriculum and course contents are modeled on those developed by FAI. The civilian agencies we reviewed all had policies describing the education and training requirements for each member of their acquisition workforce. Even when agencies do not include all positions that play a role in their acquisition process in their acquisition workforce, they established education and training requirements for those positions. For instance, NASA and HHS, which do not include COTRs in their acquisition workforce, established training requirements for that position. To ensure training requirements are being met, DOD and the military services use a centralized management information system that is automatically updated with training and personnel data. The civilian agencies use less sophisticated spreadsheet programs to collect and maintain information on the education, training, and continuing education received by their acquisition workforce. At least once a year, each agency collects data from its regional offices and/or contracting components and consolidates the data into its tracking system. Although we obtained data from DOD and the civilian agencies to determine the various elements collected, we did not assess the reliability or adequacy of their systems. Our purpose was to ascertain that DOD and the civilian agencies maintained data on the training received by their acquisition workforce and not to validate the accuracy of that data. While we have reported weaknesses in the data maintained by VA and GSA, those agencies are taking action to improve the reliability and completeness of their tracking systems. Civilian agencies said that they did not have centralized management information systems because they were awaiting development and implementation of OFPP’s proposed Web-based Acquisition Career Management Information System (ACMIS), expected to be available in September 2002. The civilian agencies, with the exception of VA, viewed their systems as being interim. As a result of not having a centralized management information system, these agencies must rely on the data submitted periodically by training coordinators in their various locations throughout their agencies. Also, this data is often maintained on unofficial manual records or on various spreadsheets, making it difficult for the responsible acquisition officer to verify its accuracy. Because of ACMIS development delays, VA developed its own management information system to alleviate these problems, and it is currently entering historical employee training data into the database. ACMIS is to be a federal Web-accessible database of records to track acquisition workforce training and education. It is expected that the data in ACMIS will be used in making budgeting, staffing, and training decisions and monitoring the status of staff warrants. The baseline data for ACMIS will come from the Office of Personnel Management’s Central Personnel Data File and agency workforce databases. Those records will then be supplemented with education, training, warrant, and certification data provided by individuals in the acquisition workforce.
In addition, the system is to provide for computer-to-computer interfaces for bulk and automated data transfers (i.e., updates from agency personnel files or updates of multiple employee records with a common set of data, such as the completion of a course). The development of the new system, however, has experienced considerable delays. Although OFPP tasked FAI to develop the system in September 1997, it has not yet been implemented. In 2000, we reported that delays in developing the system were largely attributable to difficulties in obtaining agreement on the requirements for the system. Since our report, FAI, under OFPP direction, has published functional specifications and data requirements for the system. In December 2001, FAI contracted for development of the system, and FAI officials said the contractor was on track to meet the September 2002 implementation. While DOD and the agencies we reviewed had varying degrees of funding available, all reported that they managed to meet their acquisition workforces’ current required training needs. However, we did not review or validate acquisition workforce training budget and obligation data. Officials explained that knowing what training courses employees will need, determining the courses that will be provided to meet training needs, and knowing the costs of providing each course, including related travel costs, allowed them to establish the funding required for needed training. DOD employs a centralized approach in determining its funding requirements for acquisition workforce training for its services and components. Using its management information system and estimated costs, DOD and the military services and components go through the iterative process of reconciling course needs, class size, instructor availability, and other costs, such as travel. DAU funds (1) the cost of developing and presenting the courses and (2) the travel expenses for DOD employees attending the courses. The civilian agencies we reviewed employ similar procedures relying on the data available to them in their interim systems comprised of spreadsheets and unofficial manual records. DOD, the military services, and civilian agencies stated they had sufficient funds to meet their current minimum core training requirements. NASA and HHS reported making acquisition workforce training a priority and earmarking sufficient funds for it. Other agencies, GSA and VA, said that because they use revolving funds to pay for their training, they also had sufficient funds earmarked for their acquisition workforce training. However, DOE, which reported having limited funds for training, often relied on DOD and NASA courses provided free of charge, on a space available basis, for much of its acquisition training. Although they could fund current core training, DOD, the military services, and DOE, because they rely on DAU for much of their training, expressed concerns with their ability to meet future required training and career development needs of their employees, since DAU faces budget reductions. A DOD official noted that fiscal year 2001 budget reductions combined with 2 years of “straight-line” budgets have precluded DAU from providing all the courses requested by the DOD components. Also, while all employees received core training for their current positions and grades, they were often unable to receive core training needed to obtain warrants at the next higher level to allow them to work on larger contracts and to be competitive for promotion to a higher grade.
Army and Navy officials cited similar concerns regarding DAU’s budget reductions. Air Force officials stated that anticipated increases in the acquisition workforce, because of the implementation of the Refined Packard methodology, the replacement of retirees, and its planned increases in cross training between acquisition specialties to meet strategic objectives, would require additional funding for core training in the future. A DOE official said that DAU’s budget cuts also potentially affect DOE’s ability to meet its future training requirements because of its reliance on DAU-provided courses. The official also noted that DOE’s limited training funds have curtailed funding for college course tuition, intern programs, continuing education, as well as management and leadership development programs, which could have an impact on the acquisition workforce’s career development. Other agencies reviewed did not indicate concerns about future training and career development.

DOD and the military services have a more broadly defined acquisition workforce, including functions beyond the traditional contracting function. Civilian agencies’ definitions are narrower. Regardless of whether or not an agency determines to include a particular position in its acquisition workforce, each agency needs to take active steps to identify all those positions that have a role in the acquisition process important enough to warrant specific training. This knowledge can be fed into the agencies’ strategic planning efforts and increases their ability to provide human capital strategies to meet their current and future programmatic needs. The challenge civilian agencies face in ensuring that their acquisition workforce receives the proper training has been made more difficult by OFPP’s slow progress in implementing ACMIS. Continued delays in implementing this system will increase the time in which agencies have to use less sophisticated tools for tracking acquisition workforce training. In an effort to ensure agencies succeed in defining a multifunctional and multidimensional acquisition workforce, we recommend that the Administrator of OFPP work with all the agencies to determine the appropriateness of further refining the definition of the acquisition workforce and to determine which positions, though not formally included in the acquisition workforce, nonetheless require certain training to ensure their role in the acquisition process is performed efficiently and effectively. We also recommend that the Administrator of OFPP continue to monitor the ACMIS contract milestones to ensure that the contractor and FAI complete and implement the proposed governmentwide system on schedule.

We received written comments on a draft of this report from the Administrator of OFPP. She generally concurred with our recommendations and made observations about OFPP’s efforts regarding the acquisition workforce (see appendix I). However, the Administrator took issue with our conclusion that delays in implementing the ACMIS system caused difficulties in ensuring the civilian agencies’ acquisition workforce is trained. The Administrator noted that, despite the absence of a centralized system, the agencies are responsible for managing the training of their workforce. Our recommendations are intended to help ensure that all staff integral to the success of agencies’ acquisition efforts receive appropriate training.
Also, as we noted in the report, the civilian agencies said they had not developed centralized management information systems because they were awaiting the implementation of OFPP’s proposed governmentwide system that OFPP originally tasked FAI to develop in September 1997. We also received written comments from DOE, NASA, and VA and comments via e-mail from DOD, HHS, and GSA as discussed below. All agencies generally agreed with our findings. DOE concurred with our findings and offered additional technical comments regarding the inclusion of financial assistance specialists in its acquisition workforce and the status of certification and training requirements for personnel in its acquisition workforce. We incorporated these comments where appropriate. DOE’s comments appear in appendix II. NASA noted that it included procurement clerks in its acquisition workforce. We changed the report to reflect this. NASA also provided additional specific information regarding the training required of those acquisition personnel not included in its acquisition workforce definition. NASA’s comments appear in appendix III. VA concurred with our findings and noted the release of its Procurement Reform Task Force Report, which addresses the need for acquisition workforce enhancements. VA’s comments appear in appendix IV. DOD provided several technical comments and suggestions to clarify our draft report. We incorporated these comments and suggestions where appropriate. HHS concurred with our findings and provided technical comments. HHS noted that although certain acquisition personnel are not under the control of its acquisition office, that office has established regulations to ensure they receive required training. We believe our report adequately reflects their concerns. GSA stated it had reviewed our report and had no comments.

To accomplish the objectives, we reviewed policies and procedures, examined records, and interviewed acquisition personnel, training, and budget officials at DOD, the Army, the Navy, and the Air Force; and at VA, DOE, HHS, GSA, and NASA. However, we did not attempt to determine the adequacy or timeliness of the training these agencies provided their employees. These agencies are the largest in terms of their annual expenditures and among the largest in terms of the number of people in their acquisition workforce. In fiscal year 2000, their acquisition workforce included almost 25,000 contract specialists and purchasing agents (the primary career fields in the acquisition workforce), who were responsible for nearly $200 billion in federal obligations for goods and services. To obtain information on the oversight and guidance provided to federal agencies, we reviewed legislation, regulations, directives, and policies and interviewed officials at OFPP and FAI. We conducted our review between October 2001 and June 2002 in accordance with generally accepted government auditing standards. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. At that time, we will send copies to other interested congressional committees; the secretaries of Defense, Army, Air Force, Navy, Energy, Health and Human Services, and Veterans Affairs; and the administrators of the General Services Administration, the National Aeronautics and Space Administration, and the Office of Federal Procurement Policy. We will also make copies available to others upon request.
In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-4125 or Hilary Sullivan at (214) 777-5652 if you have any questions regarding this report. Major contributors to this report were Thom Barger, Cristina Chaplain, Susan Ragland, Sylvia Schatz, and Tanisha Stewart.

GAO's continuing reviews of the acquisition workforce, focusing on the Department of Defense (DOD); the Departments of the Army, Navy, and Air Force; the Departments of Veterans Affairs, Energy, and Health and Human Services; the General Services Administration; and the National Aeronautics and Space Administration, indicate that some of the government's largest procurement operations are not run efficiently. GAO found that requirements are not clearly defined, prices and alternatives are not fully considered, or contracts are not adequately overseen. The ongoing technological revolution requires a workforce with new knowledge, skills, and abilities, and the nature of acquisition is changing from routine, simple buys toward more complex acquisitions and new business practices. DOD has adopted multidisciplinary and multifunctional definitions of its acquisition workforce, but the civilian agencies have not. DOD and the civilian agencies reviewed have developed specific training requirements for their acquisition workforce and mechanisms to track the training of acquisition personnel. All of the agencies reviewed said they had sufficient funding to provide current required core training for their acquisition workforce, but some expressed concerns about funding training for future requirements and career development, particularly because of recent budget cuts at the Defense Acquisition University.
In May 1995, the Commission on Roles and Missions of the Armed Forces proposed the idea of a comprehensive quadrennial review by DOD of the country’s defense strategy and force structure. In August 1995, the Secretary of Defense endorsed the idea, and the following year legislation directed DOD to conduct the 1997 QDR. Congress created a permanent requirement for DOD to conduct a QDR every 4 years in the National Defense Authorization Act for Fiscal Year 2000, passed in 1999. According to this legislation, DOD was to conduct a comprehensive examination of the national defense strategy, force structure, force modernization plans, infrastructure, budget plan, and other elements of the country’s defense program and policies with a view toward determining and expressing the nation’s defense strategy and establishing a defense program for the next 20 years. Originally the legislation identified 14 specific issues for DOD to address, such as a comprehensive discussion of the national defense strategy of the United States and the force structure best suited to implement that strategy at a low-to-moderate level of risk. In addition, it allowed the Secretary of Defense to review any other issues he considers appropriate. The legislation in effect during the 2006 QDR reflected several amendments to the original legislation, for example, requiring DOD to assess the national defense mission of the Coast Guard. (See app. II for the legislation in effect during the 2006 QDR.) Among other requirements, the 1999 QDR legislation required that the Secretary of Defense assess the nature and magnitude of the political, strategic, and military risks associated with executing the missions called for under the national defense strategy. In the 2001 QDR report, DOD introduced a new risk management framework that identified four areas of risk—operational, force management, future challenges, and institutional. According to the 2001 QDR report, the framework would enable DOD to address the tension between preparing for future threats and meeting the demands of the present with finite resources. Further, the framework was intended to ensure that DOD was sized, shaped, postured, committed, and managed with a view toward accomplishing the strategic priorities of the 2001 QDR. Future QDRs will be affected by the new reporting elements added to the QDR legislation by the John Warner National Defense Authorization Act for Fiscal Year 2007. Specifically, the legislation requires DOD to establish an independent review panel to conduct an assessment of the QDR no later than 6 months before the date that DOD’s report on the QDR is submitted to Congress. The panel is required to submit, within 3 months after the date on which the QDR is submitted, an assessment of the review, including its recommendations, the stated and implied assumptions incorporated in the review, and the vulnerabilities of the strategy and force structure underlying the review. The legislation also specifies that the QDR review should not be constrained to comply with the budget submitted to Congress by the President. In addition, the legislation added several specific issues that DOD is required to address such as providing the specific capabilities, including the general number and type of specific military platforms, needed to achieve the strategic and warfighting objectives. 
Lastly, the authorization act directs DOD to submit to the Senate and House Armed Services Committees a report on the implementation of recommendations identified in the 2006 QDR report no later than 30 days after the end of each fiscal year quarter. (See app. III for a summary of additions to the QDR legislation, 10 U.S.C. §118, as a result of the John Warner National Defense Authorization Act for Fiscal Year 2007.) DOD considers the 2006 QDR a refinement of its predecessor, the 2001 QDR, which detailed the department's intent to shift the basis of defense planning from the long-standing "threat-based" model, which focused on specific adversaries and geographic locations, to a "capabilities-based" construct that seeks to prepare for a range of potential military operations against unknown enemies. According to the 2001 QDR report, the capabilities-based model focuses on how an adversary might fight rather than specifically who the adversary might be or where the war might occur. The Under Secretary of Defense (Policy) had the lead role in conducting the 2006 QDR. The Joint Staff played a supporting role in the process and had primary responsibility for leading the analytical work to support the Chairman of the Joint Chiefs of Staff's risk assessment. In March 2005, the Secretary of Defense approved guidance, called the Terms of Reference, for the review. The Terms of Reference identified four focus areas and provided guidance to senior officials to develop capabilities and make investment decisions to shape the future force and reduce risks in these areas. The four focus areas were (1) defeating terrorist networks, (2) defending the homeland in depth, (3) shaping the choices of countries at strategic crossroads, and (4) preventing hostile states and nonstate actors from acquiring or using weapons of mass destruction. During the spring of 2005, DOD senior leaders held meetings on the focus areas with interagency partners from across the federal government and international allies to identify the potential threats and the types of capabilities needed to address the challenges associated with the focus areas. Officials from the intelligence community, such as the Defense Intelligence Agency, provided threat assessments for each of the focus areas. The Terms of Reference also established six study teams to assess capabilities associated with the QDR focus areas and directed the teams to develop options to reduce risk in these areas. Top-level civilian and military leaders from OSD and the Joint Staff led the study teams, which included officials from the services and Combatant Commands. The Deputy Secretary of Defense and the Vice Chairman of the Joint Chiefs of Staff co-chaired a senior-level group, which was eventually referred to as the Deputy's Advisory Working Group, and this group reviewed the work of the study teams during the summer and fall of 2005. Other members of the review group included the Under Secretaries of Defense, the services' Under Secretaries, the services' Vice Chiefs of Staff, and the Deputy Commander, U.S. Special Operations Command. The Deputy Secretary and his working group determined what information each study team would provide to the senior-level review group, which was led by the Secretary of Defense. Figure 1 shows the structure that OSD established to conduct the QDR. According to the 2006 QDR report, the foundation of this QDR is the National Defense Strategy, published in March 2005.
The Secretary of Defense’s National Defense Strategy is implemented through the National Military Strategy, which is developed by the Chairman of the Joint Chiefs of Staff. The National Military Strategy provides focus for military activities by defining a set of interrelated military objectives from which the service chiefs and combatant commanders identify desired capabilities and against which the Chairman of the Joint Chiefs of Staff assesses risk. While DOD’s approach and methodology for the 2006 QDR had several strengths, several weaknesses significantly limited the review’s usefulness in addressing force structure, personnel requirements, and risk associated with executing the national defense strategy. On the positive side, the 2006 QDR benefited from the sustained involvement of key senior DOD officials, interagency and allied participation, and internal collaboration among the QDR’s participants. However, weaknesses in the assessment of three key areas—force structure, personnel requirements, and risk— hampered DOD’s ability to undertake a fundamental reassessment of the national defense strategy and U.S. military forces. As a result of these weaknesses, Congress lacks assurance that DOD has conducted the analysis needed to determine the force best suited to implement the defense strategy. Further, DOD is not well positioned to demonstrate to Congress how it considered risks and made difficult trade-offs among its capabilities to balance investments within future budgets, given the nation’s fiscal challenges. DOD’s approach for the 2006 QDR benefited from several strengths. First, key senior DOD leaders maintained sustained involvement throughout the review. As we have noted in previous reports, best practices clearly indicate that top-level leadership is crucial for engineering major changes in an organization. Top leaders establish the framework for change and provide guidance and direction to others to achieve change. During the 2006 QDR process, the Deputy Secretary of Defense and the Vice Chairman of the Joint Chiefs of Staff co-chaired a senior level review group, now referred to as the Deputy’s Advisory Working Group, to review and approve initiatives of varying complexity presented by the six study team leaders and leaders of specialized issue areas, such as special operations forces. According to an official in the Office of the Secretary of Defense, during most of the QDR process, this senior level group met several times a week to review the study teams’ options and provide guidance to the teams to ensure that the QDR’s strategic priorities were addressed. Since the QDR report was issued in February 2006, the Deputy’s Advisory Working Group continues to meet regularly to oversee implementation of the QDR’s strategic priorities, such as improving DOD’s management structures and business processes to support effective decision making. Second, DOD collaborated with interagency partners, such as the Department of Homeland Security, and U.S. international allies, such as the United Kingdom, to discuss potential strategic challenges and determine capabilities that are required to meet current and future challenges. According to DOD officials, senior officials from the Department of Homeland Security including the U.S. Coast Guard and the Departments of Energy, State, and other federal agencies participated in DOD’s discussions establishing the strategic direction of the QDR during the spring of 2005. U.S. 
agency officials discussed with DOD officials the types of capabilities and investments needed to reduce risk in the QDR’s four focus areas—defeating terrorist networks, defending the homeland in depth, shaping the choices of countries at strategic crossroads, and preventing hostile states and nonstate actors from acquiring or using weapons of mass destruction. For example, DOD officials who coordinated the QDR stated that U.S. Coast Guard officials identified current and planned maritime defense capabilities as part of DOD’s discussion on combating weapons of mass destruction. Further, officials from U.S. allies, such as the United Kingdom, participated in the discussions to share their perspectives about how DOD, its allies, and global partners could address the nontraditional, asymmetric warfighting challenges of the 21st century, such as preventing the acquisition or use of weapons of mass destruction by nonstate actors. As a result of contributions from the interagency partners and allies, DOD was in a better position to identify and develop the four focus areas that eventually shaped the scope of the QDR. Third, leaders of the six study teams collaborated with each other to avoid duplication of work as they developed options to address challenges associated with the focus areas. The study team leaders held weekly meetings to discuss whether their issues could be better addressed by another study team, the progress of their work plans, and whether they could provide each other with mutually supporting analysis. Further, a group of senior officials, led by an official in the Office of the Secretary of Defense for Policy, attended the study teams’ weekly meetings to ensure that the options addressed the capabilities associated with the four focus areas and helped identify overlaps or gaps in the development of options. For example, three study teams, which developed and identified options related to force structure, personnel requirements, and roles and missions respectively, coordinated their work to minimize any overlap and identify any gaps in the development of options to increase the number of military and civilian personnel proficient in key languages such as Arabic, Farsi, and Chinese. Fourth, following the release of the 2006 QDR, the Deputy Secretary of Defense requested that officials in OSD establish procedures to track the implementation of the 2006 QDR initiatives which encompassed a range of military capabilities, from implementing its new personnel management system to developing a new land-based, penetrating long-range strike capability by 2018. Senior officials from the Office of the Director, Administration and Management created a departmentwide database and established criteria to categorize the implementation status of each initiative. Specifically, implementation of an initiative was categorized as “completed” if the initiative was fully implemented or if DOD had taken actions that officials determined as having met the intent of the initiative, even though the initiative may take years to fully implement. OSD officials have provided periodic briefings on the status of QDR initiatives to the Deputy Secretary of Defense and his advisory group since the publication of the 2006 QDR report. DOD reported to Congress in January 2007 that it had completed implementation of about 90, or 70 percent, of the 130 initiatives. 
Further, in January 2006, at the end of the QDR process, the Deputy Secretary of Defense identified eight study areas and established a process to continue developing DOD's approaches for the issues associated with these study areas. According to senior DOD officials, these areas identified for post-QDR study were generally complex and involved multiple organizations, such as developing interoperable strategic communications. The Deputy Secretary provided guidance for the teams that included requirements to (1) define objectives, timelines, and performance metrics and (2) establish an oversight process as part of an implementation plan to ensure the decisions made during the QDR were achieved. According to DOD officials, DOD plans to provide Congress with information about the status of the post-QDR study teams' implementation in its quarterly reports. For example, in DOD's January 2007 report to Congress, DOD reported that one of the Institutional Reform and Governance study team's objectives is to continue developing concepts and overseeing initiatives related to reforming governance and management functions such as capabilities-based planning. Weaknesses in the assessment of three key areas—force structure, personnel requirements, and risk—significantly limited the review's usefulness in reassessing the force structure best suited to implement the defense strategy at a low-to-moderate level of risk, which is a key requirement of the review. Our previous reporting on DOD's prior QDRs and other work has shown that weaknesses in establishing a substantive basis for force structure, personnel requirements, and risk have been long-standing issues for the department. Further, until DOD can demonstrate an analytical basis for its force structure and personnel requirements, it will not be well positioned to balance capability needs within budgets that are likely to be constrained in the future, given the nation's fiscal challenges. Although the 2006 QDR study guidance emphasized that DOD would use capabilities-based planning to focus on how a range of potential enemies might fight, DOD did not conduct a comprehensive, integrated assessment of alternative force structures during the QDR using a capabilities-based approach. Based on our discussions with DOD officials and our review of DOD documents and non-DOD published studies, a capabilities-based approach requires a common understanding of how a capability will be used, who will use it, when it is needed, and why it is needed. Further, each capability should be assessed based on the effects it seeks to generate and the associated operational risk of not having the capability. A capabilities-based approach also seeks to identify capability gaps or redundancies and make trade-offs among the capabilities in order to efficiently use fiscal resources. In table 1, we identify several key elements of a capabilities-based planning approach and provide descriptions of these elements. DOD's primary basis for assessing the overall force structure best suited to implement the national defense strategy, according to several DOD officials, was a Joint Staff-led study known as Operational Availability 06. The study compared the number and types of units in DOD's planned force structure to the operational requirements for potential scenarios to determine whether and to what extent the planned force structure would experience shortages.
However, the Joint Staff’s Operational Availability 06 Study did not assess alternatives to planned force structures and evaluate trade-offs among capabilities. In conducting the Operational Availability 06 Study, the Joint Staff completed two different analyses. The first analysis, referred to as the base case, relied on a set of operational scenarios that created requirements for air, ground, maritime, and special operations forces. During this study, the Joint Staff examined requirements for a broad range of military operations over a 7-year time frame. Two overlapping conventional campaigns served as the primary demand for forces with additional operational demands created by 23 lesser contingency operations, some of which represented the types of operations that military forces would encounter while defending the homeland and executing the war on terrorism. The Joint Staff then compared the number of military units in DOD’s planned air, ground, maritime, and special operations forces to the operational demands of the scenarios. The Joint Staff made two key assumptions during the analysis. First, the Joint Staff assumed that reserve component units could not deploy more than once in 6 years. Second, the Joint Staff assumed that while forces within each service could be reassigned or retrained to meet shortfalls within the force structure, forces could not be substituted across the services. Results of the Joint Staff’s first analysis showed that maritime forces were capable of meeting operational demands and air, ground, and special operations forces experienced some shortages. In response to a tasking from top-level officials the Joint Staff performed a second analysis that developed a different set of operational demands reflecting the high pace of operations in Iraq. In this analysis, the Joint Staff used the same 2012 planned force structure that was examined in the first analysis. When it compared the operational demands that were similar to those experienced in Iraq with DOD’s planned force structure, the Joint Staff found that the air, ground, maritime, and special operations forces experienced shortages and they could only meet operational demands for a security environment similar to Iraq, one conventional campaign, and 11 of the 23 lesser contingency scenarios. While the Operational Availability 06 Study had some benefits, several weaknesses significantly limited the study’s usefulness for integrating a capabilities-based approach that assessed force structure options. On the positive side, top leaders maintained sustained involvement in the Operational Availability Study; for example, based on their guidance, the Joint Staff conducted a second analysis that depicted operational demands, which more accurately represented the current security environment. That study demonstrated that significant shortages in military forces exist when forces are not retrained or reassigned to meet operational demands. However, weaknesses in the study’s methodology to assess different levels of force structure and use a capabilities-based planning approach limited the study’s usefulness in reassessing the fundamental relationship between the national defense strategy and the force structure best suited to implement the strategy. First, the Joint Staff did not vary the number and types of units to demonstrate that it assessed different levels or mixes of air, ground, maritime, and special operations force structure in its second analysis. 
Second, the Joint Staff did not identify the capabilities of the force structure or make recommendations about trade-offs among capabilities. Further, concurrent with the Operational Availability 06 Study, DOD conducted separate assessments of some segments of its force structure to inform decisions about investments for capabilities. For example, DOD conducted a departmentwide study that assessed options for different levels and types of tactical air assets, such as the Joint Strike Fighter. However, in this study DOD did not fully address whether and to what extent future investment plans are affordable within projected funding levels, and in April 2007, we reported that DOD does not have a single, comprehensive, and integrated investment plan for recapitalizing and modernizing fighter and attack aircraft. In another example, DOD also conducted a study to determine whether ground forces in the Army, Marine Corps, and Special Operations Command could meet operational demands for a broad range of scenarios without relying extensively on reserve personnel. However, options to increase ground forces were not part of the study's scope, and the implications of the ongoing operations in Iraq, such as the number of active brigade combat teams that would be needed and their length of time in theater, were not fully considered. A key reason why DOD did not use an integrated capabilities-based approach to assessing force structure options is that DOD did not have a unified management approach for incorporating capabilities-based planning principles into the QDR assessment. At the time of the QDR, no one individual or office had been assigned the overall responsibility and authority necessary for implementing an integrated capabilities-based planning approach. Further, DOD had not provided comprehensive written guidance to implement departmentwide methods for capabilities-based planning that specifies the need to identify capabilities at the appropriate level of detail, identify redundant or excess capabilities that could be eliminated, facilitate trades among capabilities, assess and manage risk, and balance decisions about trade-offs with near- and long-term costs. Currently, DOD is undertaking some initiatives related to capabilities-based planning. However, these select initiatives do not represent the type of comprehensive, unified management approach needed to assess the force structure requirements to address a range of potential military operations against unknown enemies. For example, the Joint Staff initiated the Joint Capabilities Integration and Development System in 2003 to assess gaps in joint capabilities and recommend solutions to resolve those gaps. Under this system, boards composed of high-level DOD civilian and military officials are convened to identify future capabilities needed in key functional areas, such as battle space awareness, and to make recommendations about trade-offs among air, space, land, and sea platforms. While this process may be important for assessing gaps in joint warfighting capabilities, we have reported that its focus is to review and validate the initial need for proposed capabilities. However, we have also reported that the process is not yet functioning as envisioned to define gaps and redundancies in existing and future military capabilities across the department and to identify solutions to improve joint capabilities.
Further, we reported that programs assessed by the Joint Staff's process build momentum and move toward starting product development with little, if any, early department-level assessment of the costs and feasibility. According to senior DOD officials, the Joint Staff's process does not thoroughly link capabilities to the strategic priorities identified in the QDR. The Deputy Secretary of Defense tasked the Institutional Reform and Governance post-QDR study team to develop departmentwide approaches that would allow DOD to integrate and facilitate its capabilities-based planning initiatives. Based on the study team's work, in March 2007 the Deputy Secretary of Defense tasked several DOD organizations to develop plans to facilitate a capabilities-based planning approach. For example, the Joint Requirements Oversight Council is tasked with developing a process for identifying capability priorities and gaps at the appropriate level of detail and ranking all capabilities from high to low priority by October 2007. Further, the Deputy Secretary of Defense has reaffirmed the department's commitment to portfolio management and expanded the scope of responsibility for the four capability portfolio test case managers. Among their new responsibilities, each portfolio manager is required to provide the Deputy's Advisory Working Group with an independent portfolio assessment to inform investment decisions during DOD's fiscal year 2009 program review. DOD may establish more portfolios as the roles and responsibilities of the existing managers evolve and operate within DOD's existing decision processes, such as the Deputy's Advisory Working Group. DOD made some changes to the current force structure to address perceived gaps in capabilities based on the QDR review, although these did not represent major changes to the composition of the existing force structure. For example, among the key force-structure-related decisions highlighted in the QDR were to (1) increase Special Operations forces by 15 percent and the number of Special Forces battalions by one-third; (2) expand Psychological Operations and Civil Affairs units by 3,700 personnel, a 33 percent increase; (3) develop a new land-based, penetrating long-range strike capability to be fielded by 2018 and fully modernize the current bomber force (B-52s, B-1s, and B-2s); and (4) decrease the number of active component brigade combat teams from 43 to 42 and the number of planned Army National Guard brigade combat teams from 34 to 28. In January 2007—about a year after the QDR was completed—DOD approved the Army's plan to increase the number of active component brigade combat teams to 48. Since DOD did not conduct a comprehensive, data-driven assessment of force structure alternatives during the QDR, it is not in the best position to assure itself or Congress that it has identified the force best suited to execute the national defense strategy. Although DOD concluded in the 2006 QDR report that the size of today's forces—both the active and reserve components across all four military services—was appropriate to meet current and projected operational demands, it did not provide a clear analytical basis for its conclusion. In January 2007, the Secretary of Defense announced plans to permanently increase the size of the active components of the Army and the Marine Corps by a total of 92,000 troops over the next 5 years. But again, DOD did not identify the analysis that it used to determine the size of the increase.
In February 2005, we recommended that DOD review active personnel requirements as part of the QDR, and in doing so, discuss its conclusions about the appropriate personnel levels for each of the services and describe the key assumptions guiding the department's analysis, the methodology used to evaluate requirements, and how the risks associated with various alternative personnel force levels were evaluated. While DOD agreed with our recommendation, it did not perform a comprehensive, data-driven analysis of the number of personnel needed to implement the defense strategy as part of its 2006 QDR. Until DOD performs a comprehensive review of personnel requirements, it cannot effectively demonstrate to Congress a sound basis for the level of military and civilian personnel it requests. Our prior work has shown that valid and reliable data about the number of personnel required to meet an agency's needs are critical because human capital shortfalls can threaten an organization's ability to perform missions efficiently and effectively. Data-driven decision making is one of the critical factors in successful strategic workforce management. High-performing organizations routinely use current, valid, and reliable data to inform decisions about current and future workforce needs, stay alert to emerging mission demands, and remain open to reevaluating their human capital practices. Further, federal agencies have a responsibility to provide thorough analytical support for significant decisions affecting requirements for federal dollars so that Congress can effectively evaluate the benefits, costs, and risks. Rather than conducting a comprehensive assessment of its personnel requirements, DOD approached active and reserve military and civilian personnel levels by limiting growth and initiating efforts to use current personnel levels more efficiently. Consequently, the study team that was assigned to review issues related to manning and balancing the force took the existing force size as a given. From that basis, the study team identified alternative courses of action for changing the mix of specific skills, such as civil affairs, in the active and reserve components to meet future operational requirements. The team also considered whether changes in the mix of skills would require more military and civilian personnel at headquarters staffs. While these reviews are important for understanding how to use the force more efficiently, they cannot be used to determine whether U.S. forces have enough personnel to accomplish missions successfully because these reviews did not systematically assess the extent to which different levels of end strength could fill DOD's combat force structure and provide institutional support at an acceptable level of risk. Although DOD's 2006 QDR concluded that the Army and Marine Corps should plan to stabilize their personnel levels at 482,400 and 175,000 active personnel, respectively, by 2012, the President's fiscal year 2008 budget submission in February 2007 documented a plan to permanently increase the size of the active components of the Army by 65,000 to 547,400 and the Marine Corps by 27,000 to 202,000 over the next 5 years; and the Army National Guard by 8,200 to 358,200 and the U.S. Army Reserve by 6,000 to 206,000 by 2013.
Shortly after the increase was announced, we testified before Congress that DOD's record in providing an analytically driven basis for requested military personnel levels needs to be improved and suggested that Congress should carefully weigh the long-term costs and benefits in evaluating DOD's proposal for the increases. Both the Army and Marine Corps are coping with additional demands that were not fully reflected in the QDR. For example, the Marine Corps decided to initiate a new study to assess active military personnel requirements shortly after the 2006 QDR was completed due to its high pace of operations and the QDR-directed changes in force structure, such as establishing a Special Operations Command requiring about 2,600 military personnel. Without performing a comprehensive analysis of the number of personnel it needs, DOD cannot ensure that its military and civilian personnel levels reflect the number of personnel needed to execute the defense strategy. Further, it cannot ensure that it has a sufficient basis for understanding the risks associated with different levels of military and civilian personnel. For example, while too many active military personnel could be inefficient and costly, having too few could result in other negative consequences, such as the inability to provide the capabilities that the military forces need to deter and defeat adversaries. During the 2006 QDR, the risk assessments conducted by the Secretary of Defense and the Chairman of the Joint Chiefs of Staff did not fully apply DOD's risk management framework to demonstrate how risks associated with its proposed force structure were evaluated. DOD introduced its risk management approach in 2001; however, we have reported that it has faced difficulty implementing this approach. For example, we found that DOD faced challenges in integrating its risk management framework and reform initiatives into a unified management approach. We have reported that an emerging challenge for the federal government involves the need for completion of comprehensive national threat and risk assessments in a variety of areas. For example, evolving requirements from the changing security environment, coupled with increasingly limited fiscal resources across the federal government, emphasize the need for agencies to adopt a sound approach to making resource decisions. We have advocated that the federal government, including DOD, adopt a comprehensive risk management approach as a framework for decision making that fully links strategic goals to plans and budgets, assesses values and risks of various courses of action as a tool for setting priorities and allocating resources, and provides for the use of performance measures to assess outcomes. A risk management approach represents a series of analytical and managerial steps that can be used to assess risk, evaluate alternatives for reducing risks, choose among those alternatives, implement the alternatives, monitor their implementation, and incorporate new information to adjust and revise the assessments and actions as needed. Further, such a data-driven risk assessment can provide a guide to help shape, focus, and prioritize investment decisions to develop capabilities. A key reason why DOD did not apply its risk framework during the QDR is that it had difficulty in developing department-level measures that would be necessary to assess risk; as a result, the assessment tools were not available for use during the QDR.
The QDR’s study guidance tasked the QDR coordination group, led by officials in the Office of the Under Secretary of Defense (Policy), to review the QDR risk management guidelines and provide these guidelines to the QDR’s study teams for review. The guidelines were to provide some examples of how to measure performance related to DOD’s key areas identified in its framework— operational, force management, institutional, and future challenges. The QDR coordination group was to incorporate the study teams’ feedback about recommended changes. Lastly, the QDR coordination group was to issue the guidelines and monitor the application of performance measures during the QDR. According to an official in the Office of the Under Secretary of Defense (Policy), the QDR coordination group had difficulty developing the measures and thus did not issue guidelines. As a result, the study teams did not have the assessment tools to assess risk during the QDR. Since department-level measures for assessing risk were not available during the 2006 QDR, several of the study teams relied primarily on professional judgment to assess the risks of not investing in various capabilities. For example, the study team responsible for developing capabilities told us that they examined information about potential future threats and determined that DOD needed medical countermeasures to address the threat of genetically engineered biological agents. Members of the study team discussed the consequences of not developing the medical procedures and treatments that would be needed to increase survival rates if U.S. military personnel were to encounter the highly advanced genetic material. Further, the Chairman of the Joint Chiefs of Staff was not tasked to use the risk management framework in assessing risks and did not choose to use it in his assessment. Rather, the Chairman’s assessment examined the extent to which the 2006 QDR initiatives would address combatant commanders’ operational needs for potential future requirements. Without a sound analytical approach to assess risk during future QDRs, DOD will not have a sufficient basis to demonstrate how the risks associated with the capabilities of its proposed force structure were evaluated. Further, DOD may be unable to demonstrate how it will manage risk within current and expected resource levels. Without an analytically based risk assessment, DOD may not be able to prioritize and focus the nation’s investments to combat 21st century security threats efficiently and wisely. The security environment of the 21st century has been characterized by conflicts that are very different from traditional wars among states. This environment has created the need for DOD to reexamine the fundamental operations of the department and the capabilities needed to continue to execute its missions. In addition, DOD has created new organizations, such as the U.S. Northern Command and the Assistant Secretary of Defense for Homeland Defense, to counter new threats to the homeland and support the federal response to any potential catastrophic event, natural, or man-made. Through our discussions with defense analysts, we have identified options for modifying several QDR legislative requirements that could be considered in light of the changed security environment, to make the QDR process and report more useful to Congress and DOD. 
The QDR legislation contains numerous issues for DOD to address, some requiring reporting on broad issues, such as the national defense strategy and the force structure needed to execute that strategy, and some that are more detailed, such as the requirement that DOD examine the appropriate ratio of combat forces to support forces under the national defense strategy. Many defense analysts we spoke with thought that some of the strategic issues are of great importance and should remain requirements for future QDRs. Further, they believed DOD should focus its efforts on providing more information on the analytic basis for its key assumptions and strategic planning decisions. However, they also asserted that several of the QDR's detailed reporting elements divert attention from strategic issues, are already required and reported under other laws, or are no longer relevant in the new security environment. Options to improve the usefulness of future QDRs include (1) clarifying expectations for how the QDR should address the budget plan, (2) eliminating some reporting elements in the QDR legislation that could be addressed in different reports, (3) eliminating some reporting elements in the QDR legislation for issues that may no longer be as relevant due to changes in the security environment, and (4) establishing an independent advisory group to work with DOD prior to and during the QDR to provide alternative perspectives and analyses. Several defense analysts we spoke with asserted that the permanent requirement for DOD to conduct a comprehensive strategic review of the defense program every 4 years is important and that Congress should continue to require that DOD conduct future QDRs. Moreover, several defense analysts acknowledged that certain key requirements remain critical to the QDR's purpose of fundamentally reassessing the defense strategy and program. Specifically, the requirements that task the Secretary of Defense to (1) delineate a defense strategy and (2) define sufficient force structure, force modernization plans, a budget plan, and other elements of a defense program that could successfully execute the full range of missions called for by the defense strategy at low-to-moderate risk over 20 years were seen as critical elements needed to ensure that Congress understands DOD's strategies and plans. Several defense analysts told us that it is in the national interest to ensure that DOD conducts the kind of long-range strategic planning that can provide meaningful recommendations for meeting future national security challenges and that enables debate on the costs and benefits of requirements for future military forces and capabilities, as well as the risks posed by capability gaps, in light of national fiscal challenges. The QDR legislation also directs DOD to define the nature and magnitude of the political, strategic, and military risks associated with executing the missions called for under the national defense strategy in the QDR and include a comprehensive discussion of the force structure best suited to implement that strategy at a low-to-moderate level of risk. Analysts saw these areas as important for providing Congress assurance that there is a sound analytical basis for DOD's risk assessment, including how DOD identified risks and evaluated alternatives for reducing them. Additionally, analysts viewed this discussion as important in assuring that the department has incorporated a variety of perspectives in its risk assessments.
Some analysts stated that the requirements to discuss the assumed or defined national security interests, the threats to the assumed or defined national security interests, and the scenarios developed in the examination of those threats are several key elements that should remain to enable the department to demonstrate that principles of risk assessment have been addressed. Similarly, analysts suggested that the Chairman of the Joint Chiefs of Staff's requirement to assess the results of the QDR review, including an independent assessment of risk, is helpful to provide another assessment that DOD and Congress can use to understand the risks associated with the force structure and consider the courses of action the department might want to take to reduce risks. Some DOD defense analysts told us that the QDR legislation includes numerous detailed requirements that may impede DOD's focus on high-priority areas. Based on our discussions with analysts, we identified several options that Congress should consider to enhance the focus of future QDRs on high-priority issues and improve the thoroughness of DOD's analysis: Clarify expectations for how the QDR should address the budget plan that supports the national defense strategy. The QDR legislation has several reporting elements that relate to budget planning to support the defense strategy. First, the QDR legislation requires DOD "to delineate a national defense strategy…" and "to identify the budget plan that would be required to provide sufficient resources to execute successfully the full range of missions called for in that national defense strategy at a low-to-moderate level of risk." Second, the legislation requires DOD "to conduct a comprehensive examination…of the national defense strategy…with a view toward establishing a defense program for the next 20 years." Third, based on recent changes to the legislation that will apply to the next QDR in 2010 as well as future QDRs, DOD is required to "make recommendations that are not constrained to comply with the budget submitted to Congress by the President." Some defense analysts raised concerns about whether these reporting requirements provide sufficient and clear guidance for DOD to use in conducting QDRs. For example, they questioned whether the planning time frame of 20 years established by the QDR legislation is most useful in providing Congress with information to perform its oversight of the defense program. Although DOD officials and defense analysts acknowledged the benefits of forecasting threats and capabilities for a 20-year period, they stated that it would be difficult to develop a detailed budget plan for such a period given the uncertain nature of threats in the new security environment. Further, analysts asserted that rather than enabling DOD to set strategic priorities without regard to current budgets, the requirement to "make recommendations that are not constrained to comply with the budget…" could lead the services and the capability portfolio managers to push for inclusion of every program in their plans. This could make it more difficult for DOD to prioritize investments to meet key capability needs and assess the affordability of new capabilities across the department. Moreover, DOD's three QDR reports since 1997 have not fully described DOD's methodology or approach for assessing its budget needs or provided budget plans that explained how DOD intended to fund the full range of missions called for in the national defense strategy.
For the 2006 QDR, DOD included several QDR initiatives in the President's fiscal year 2007 budget, which was submitted to Congress at the same time as the QDR report, but stated that it would continue to define a budget plan for the QDR by identifying the funding details in DOD's future years defense program for fiscal years 2008 through 2013. In addition, the report did not provide information about the extent to which DOD considered the long-term affordability of the overall defense program. We have emphasized in previous reports that the federal government now faces increasing fiscal challenges, and DOD may face increasing competition for federal dollars. Further, in November 2005, we reported that DOD has not demonstrated discipline in its requirements and budgeting processes, and its costly plans for transforming military operations and expensive acquisitions may not be affordable in light of the serious budget pressures facing the nation. For example, we reported that DOD's planned annual investment in acquisition programs it has already begun is expected to rise from $149 billion in fiscal year 2005 to $178 billion in fiscal year 2011. Given these pressures, Congress may want a clearer view of how DOD should budget for the capabilities associated with the proposed force structure, and how it evaluated the trade-offs in capabilities to maximize the effectiveness of future investments. If Congress decides that it needs additional budget-related information to carry out its oversight of future QDRs, then it might consider clarifying the reporting element relating to the required budget plan to specify what information DOD should include in the QDR. Further, Congress may want to consider clarifying its expectations for the information DOD provides in the QDR as to how it has addressed the long-term affordability challenges of transforming military operations. Eliminate some reporting elements in the QDR legislation for issues that could be addressed in different reports. According to some defense analysts, some requirements contained in the QDR legislation are not essential to the strategic purpose of the QDR and may divert DOD's focus from that strategic purpose. While important, some reporting elements are already examined in other DOD reviews, and Congress has access to the results of these periodic reviews. These reporting elements include the following: An evaluation of "the strategic and tactical airlift, sealift, and ground transportation capabilities required to support the national defense strategy." In November 2002, we reported that the QDR may not be the appropriate venue for addressing mobility issues because examination of this issue requires detailed analysis that can best be conducted after DOD decides on a defense strategy, identifies a range of planning scenarios consistent with the new strategy, and completes its detailed analysis of requirements for combat forces. Furthermore, DOD routinely conducts analyses of its mobility requirements outside of the QDR process, according to DOD officials. Since 1992, DOD has issued four major analyses of U.S. military strategic lift requirements: the 1992 Mobility Requirements Study; the 1995 Mobility Requirements Study Bottom-Up Review Update; the Mobility Requirements Study 2005, issued in 2001; and the Mobility Capabilities Study, issued in 2005.
An assessment of the “advisability of revisions to the Unified Command Plan as a result of the national defense strategy.” DOD has a process for assessing the Unified Command Plan and is required to report changes to the plan to Congress under other legislation. Specifically, the Chairman of the Joint Chiefs of Staff is required to review periodically and not less than every 2 years the missions, responsibilities, and forces of each combatant command and recommend any changes to the President, through the Secretary of Defense. This legislation also requires that, except during times of hostilities or the imminent threat of hostilities, the President notify Congress not more than 60 days after either establishing a new combatant command or significantly revising the missions, responsibilities, or force structure of an existing command. As such, a major event or change in the political or security landscape could trigger the need for a change in the plan. For example, in the spring of 2007, the President announced that DOD intends to establish a U.S. Africa Command to oversee military operations on the African continent. According to an OSD official, DOD will revise the 2002 Unified Command Plan and report on the changes in the military command structure after plans for U.S. Africa Command are more fully developed. Eliminate some reporting elements in the QDR legislation for issues that may no longer be as relevant due to changes in the security environment. As we reported in our assessment of DOD’s 2001 QDR, a DOD official and some defense analysts said that two reporting elements should be eliminated because they are related to the allocation of forces under the old two-major-theater-war construct, which is more limited than DOD’s current force planning construct that includes a broader range of threats. These reporting elements include the following: A discussion of the “appropriate ratio of combat forces to support forces (commonly referred to as the ‘tooth-to-tail ratio’) under the national defense strategy.” DOD’s goal has been to reduce the number of personnel and costs associated with the support forces, or “tail.” However, during the 2006 QDR process and report DOD did not identify which units should be considered support and which should be considered combat. Given rapidly changing technologies, differentiating between support and combat troops has become increasingly irrelevant and difficult to measure. For example, as the United States moves toward acquiring greater numbers of unmanned aircraft piloted from remote computer terminals and relies increasingly on space-based assets operated by personnel in the United States, it will be more difficult to distinguish between combat and support personnel. Assessments of “the extent to which resources must be shifted among two or more theaters under the national defense strategy in the event of conflict in such theaters,” and the assumptions used regarding “warning time.” Both these reporting elements relate to the allocation of forces under the old two-major-theater-war planning construct. Under this construct, the amount of time that was assumed available for warning and the separation time between major theater wars were critical factors in planning the size and composition of U.S. forces and assessing operational risk, particularly for assets that might need to be shifted between theaters. 
However, under the new defense strategy, along with DOD’s new force planning construct, DOD assumes that it will continue to be involved in a wide range of military operations around the world. Given the full spectrum of threats that DOD is planning to address, it may be more useful for DOD’s force structure assessments to be tied to requirements for a broad range of potential threats. Establish an independent advisory group to work with DOD prior to or during the QDR to provide alternative perspectives and analyses. As part of our assessment of the 1997 QDR, we suggested that a congressionally mandated panel, such as the 1997 National Defense Panel, could be used to encourage DOD to consider a wider range of strategy, force structure, and modernization options. Specifically, we noted that such a review panel, if it preceded the QDR, could be important because it is extremely challenging for DOD to conduct a fundamental reexamination of defense needs, given that its culture rewards consensus building and often makes it difficult to gain support for alternatives that challenge the status quo. One of the recent additions to the QDR legislation requires the establishment of an independent panel to conduct an assessment of future QDRs after the process is completed; however, most defense analysts we spoke with agreed that an independent analysis of key issues for the Secretary of Defense either prior to or during the next review would complement a post-QDR assessment and strengthen DOD’s ability to develop its strategic priorities and conduct a comprehensive force structure and capabilities analysis. The analysts agreed that an advisory group established before or during the QDR process could function as an independent analytical team to challenge DOD’s thinking, recommend issues for DOD to review and review assumptions, and provide alternative perspectives in activities such as identifying alternative force structures and capabilities, and performing risk assessments. An independent group’s assessments could be useful to DOD in future QDRs to identify the capabilities of the nation’s current and future adversaries because potential enemies will likely be more difficult to target than the adversaries of the Cold War era. The 2006 QDR represented an opportunity for DOD to perform a comprehensive review of the national defense strategy for the first time since military forces have been engaged in the Global War on Terrorism. Sustained DOD leadership facilitated decision making, and extensive collaboration with interagency partners and allies provided a range of perspectives on threats and capabilities. However, weaknesses in DOD’s analysis of force structure, personnel requirements, and risk limited its reassessment of the national defense strategy and U.S. military forces. For example, by not fully incorporating capabilities-based planning into a comprehensive assessment of alternative force structures, DOD could not comprehensively identify capabilities gaps, associated operational risks, and trade-offs that must be made to efficiently use limited fiscal resources. Therefore, DOD was not in a good position to assure Congress that it identified the force best suited to execute the national defense strategy. 
Moreover, the Secretary of Defense’s announcement of plans to increase the sizes of the Army and Marine Corps in January 2007 calls into question the analytical basis of the QDR conclusion that the number of personnel and the size of the force structure for the services were appropriate to meet current and future requirements. Further, without a comprehensive approach to assessing risk, DOD’s 2006 QDR did not provide a sufficient basis to demonstrate how risks associated with its proposed force structure were evaluated. Unless DOD takes steps to provide comprehensive analytical support for significant decisions in future QDRs, the department will not be in the best position to distinguish between the capabilities it needs to execute the defense strategy versus those capabilities it wants but may not be able to afford at a time when the nation’s fiscal challenges are growing. Moreover, Congress will be unable to effectively evaluate the benefits, costs, and risks associated with decisions flowing from future QDRs. Opportunities exist for Congress to consider further changes to the QDR legislation that may encourage DOD to concentrate its efforts on high-priority matters such as developing a defense strategy and identifying the force structure best suited to execute the strategy. Unless Congress clearly identifies its expectations for DOD to develop a budget plan that supports the strategy, DOD may not thoroughly address the challenges it will face as it competes with other federal agencies and programs for taxpayers’ dollars and may spend considerable effort assessing options for capabilities that could be unaffordable given our nation’s fiscal challenges. Moreover, the large number of reporting elements in the QDR legislation presents DOD with a challenge in conducting data-driven comprehensive analyses of many significant complex issues. A reassessment of the QDR’s scope could provide greater assurances that DOD will thoroughly assess and report on the most critical security issues that the nation faces and could help it decide what actions it needs to take to establish the most effective military force to counter 21st century threats. Lastly, although Congress has established a new legislative requirement for an independent panel to conduct a post-QDR review, there is currently no mechanism for Congress and the Secretary of Defense to obtain an independent perspective prior to and during the QDR. Without an independent group of advisors that could provide comprehensive data-driven analyses to DOD prior to and during future QDR reviews, DOD may not consider a wider range of perspectives, such as force structure options, thus limiting the analytic basis of its QDR decisions. To enhance the usefulness of future QDRs and assist congressional oversight, we recommend that the Secretary of Defense take the following two actions: Develop appropriate methods for the department to use in a comprehensive, data-driven capabilities-based assessments of alternative force structures and personnel requirements during future QDRs. Develop appropriate methods for the department to use in conducting a comprehensive, data-driven approach to assess the risks associated with capabilities of its proposed force structure during future QDRs. 
To improve the usefulness of future QDRs, Congress should consider revisions to the QDR legislation, including (1) clarifying expectations on how the QDR should address the budget plan that supports the national defense strategy, (2) eliminating some detailed reporting elements that could be addressed in different reports and may no longer be relevant due to changes in the security environment, and (3) requiring an independent panel to provide advice and alternatives to the Secretary of Defense before and during the QDR process.
The Principal Deputy Under Secretary of Defense for Policy provided written comments on a draft of this report. The department partially agreed with our recommendations and agreed with the matters we raised for congressional consideration regarding possible changes to the QDR legislative language. In addition, the comments provided information about steps the department is taking to update its methodologies for analyzing force structure requirements and assessing risks. DOD's comments are reprinted in their entirety in appendix IV. DOD also provided technical comments, which we incorporated as appropriate.
In its comments, the department partially agreed with our recommendation that the Secretary of Defense develop appropriate methods for conducting comprehensive, data-driven capabilities-based assessments of alternative force structures and personnel requirements. DOD agreed with our conclusion that the 2006 QDR did not comprehensively assess alternatives to planned structure; rather, its analysis was limited to identifying shortfalls in current structure when compared to various illustrative operational scenarios. However, in its comments, the department noted that it has developed or is developing new illustrative security environments that it will use to demonstrate the demands associated with force structures and personnel requirements in each strategic environment. The department also pointed out the difficulty of undertaking an evaluation of the defense strategy and producing a defense program within the QDR process, as required under current QDR legislation. It said that as the department further develops the underlying assumptions for the force planning construct and refreshes the illustrative scenarios available for analysis, it will be in a better position to analyze overall needed capabilities, including personnel requirements. Finally, the department noted that the 2006 QDR was based on information available in 2005, which reflected a different demand than military forces face today. At that time, the department's collective decision, approved by the then Secretary of Defense, was that the size of the force was about right, although the force mix should be adjusted. According to DOD's comments, DOD has responded to this change in demand since the 2006 QDR by increasing Army and Marine Corps end strength.
We believe that the steps DOD outlined in its comments, such as revising the illustrative scenarios and developing force demands for new security environments, will help DOD to improve its force structure analyses. However, we believe that a comprehensive assessment that identifies and documents the basis for trade-off decisions across capability areas is critical to developing the force structure best suited to execute the defense strategy.
Until DOD undertakes a comprehensive assessment of alternative force structure options that clearly documents how the department reached its force structure decisions, it will not be in the best position to determine the force structure best suited to execute the missions called for in the defense strategy at low-to-moderate risk.
DOD also partially concurred with our recommendation to develop appropriate methods for conducting comprehensive, data-driven assessments of the risks associated with the capabilities of its proposed force structure during future QDRs. In its comments, the department agreed that improving the department's risk methodology is necessary to appropriately assess risk. It noted that in addition to risks associated with capabilities, strategic, operational, force management, and institutional risks need to be addressed in a risk assessment methodology. The department cited several post-QDR initiatives it is undertaking to improve how it assesses risk, including new measures to help link strategic goals to plans and budgets and develop performance metrics. Also in its comments, the department described efforts to strengthen and integrate existing assessments to allow decision makers to better set priorities, allocate resources, and assess outcomes and risks, and it stated its intent to improve risk assessment methods to inform risk measurement in future QDRs. We agree that risk associated with capabilities is only one type of risk facing the department and that the initiatives the department is undertaking to link strategic goals with plans and budgets and improve its risk assessment methodology can, when implemented, help it improve its ability to identify and manage risks. Until the department's risk management framework is sufficiently developed that it can support comprehensive assessments of risk across domains, assess progress toward accomplishing strategic goals, and provide senior leaders reliable analysis to inform decisions among alternative actions, DOD will not be in the best position to identify or assess risks to establish investment priorities.
DOD also provided its views on matters we raised for congressional consideration in a draft of this report regarding possible revisions to the QDR legislation. Specifically, DOD agreed with clarifying expectations for addressing the budget plan and eliminating some reporting requirements. In a draft of this report, we originally raised as a matter for congressional consideration broadening the QDR legislation by requiring the legislatively required independent advisory panel, which would provide a post-QDR critique of the results of the process, to also provide DOD with alternative perspectives and analysis prior to or during the QDR. The department stated that having an independent panel that could provide advice and alternatives to the Secretary of Defense before and during the QDR process would be useful. However, it raised the concern that tasking the same independent panel that is required to provide a post-QDR critique to also perform an advisory function before and during the review could create mistrust between the department leadership and the independent advisory panel. To address DOD's concerns, we have modified the matter for consideration to suggest that an independent panel be required to provide advice and alternatives to the Secretary of Defense before and during the QDR.
This change is intended to provide Congress with the flexibility to establish separate independent panels to provide advice prior to and following the next QDR.
We are sending copies of this report to other appropriate congressional committees and the Secretary of Defense. We will also make copies available to other interested parties upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-4402. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix V.
To assess the strengths and weaknesses of the Department of Defense's (DOD) approach and methodology for the 2006 Quadrennial Defense Review (QDR), we examined the relevant documentation, including the John Warner National Defense Authorization Act for Fiscal Year 2007; the National Defense Strategy of the United States of America (March 2005); the 1997, 2001, and 2006 QDRs; the QDR Terms of Reference (March 2005); the Under Secretary of Defense (Policy) issue papers for the QDR's focus areas; and the 2006 QDR's study teams' briefings and other documentation for DOD's senior-level review group, as well as our reports on aspects of previous QDRs. We also examined documents identifying the methodology and results of the QDR's key force structure analyses and risk assessments. We reviewed studies on capabilities-based planning and compared the key elements of capabilities-based planning identified in the studies to the QDR's Terms of Reference and DOD's documented methodology for the Operational Availability 06 Study to assess the extent to which capabilities-based planning concepts were used during the QDR. We also discussed these issues with officials from the Office of the Under Secretary of Defense (Policy); the Office of Program Analysis and Evaluation; the Joint Chiefs of Staff Directorate for Force Assessment; U.S. Special Operations Command; and officials from the Army, Air Force, and Marine Corps who participated in the QDR process.
To understand how DOD established processes to ensure that QDR initiatives are implemented, we examined internal DOD documents, DOD's January 2007 quarterly report to Congress on the status of implementation of the 2006 QDR, and post-QDR study teams' reports to understand the methodology that was developed to oversee implementation. We discussed the implementation status of the QDR initiatives with officials from the Office of the Director, Administration and Management and the Under Secretary of Defense (Policy). We did not undertake an assessment of the effectiveness of implementation of the QDR initiatives because it was outside the scope of our review. We obtained and examined documents from the Deputy Secretary of Defense and the post-QDR study teams and discussed the status of the teams' work with officials from the Under Secretary of Defense (Policy), the Institutional Reform and Governance team, and the Joint Command and Control and Battlespace Awareness capability portfolios. Moreover, we reviewed the internal controls on DOD's tracking system for QDR initiatives and evaluated the reliability of those data for DOD's use. We applied evidence standards from the generally accepted government auditing standards in our evaluation of DOD's database.
As a result, we determined the information we used meets these evidence standards and is sufficiently reliable for our purposes.
To determine whether changes to the QDR legislation could improve the usefulness of future reviews, including any changes needed to better reflect the security conditions of the 21st century, we examined a wide variety of studies that discussed the strengths and weaknesses of DOD's 2006 QDR and prior reviews. Our review included studies from the RAND Corporation, the National Defense University, and the Center for Strategic and Budgetary Assessments. To obtain opinions and develop options to improve the usefulness of future QDRs, we interviewed several DOD officials who participated in the 2006 QDR from the services and the Joint Staff. Further, we met with 11 defense analysts who had detailed knowledge of DOD's QDR process and/or participated in DOD's 1997, 2001, or 2006 QDRs. We used a standard set of questions to interview each of these analysts to ensure we consistently discussed the reporting elements of the QDR legislation and DOD's approach and methods for its three QDRs. To develop the questions, we reviewed the QDR legislation, DOD's QDR reports, and our prior work on DOD's strategic reviews. One of the defense analysts had served in various positions within and outside of DOD, including as Chairman of the Defense Science Board and Chairman of the 1997 National Defense Panel. Other defense analysts were senior officials from the following organizations: the American Enterprise Institute, the Center for American Progress, the Center for Naval Analysis, the Center for a New American Security, the Center for Strategic and Budgetary Assessments, the Center for Strategic and International Studies, the Lexington Institute, the National Defense University's Institute for National Strategic Studies, the RAND Corporation, and the Heritage Foundation. Based on our review of QDR literature and our discussions with these defense analysts, we developed a matrix summarizing these individuals' concerns regarding the QDR legislative requirements and their views on the options to address them. Our work was conducted in the Washington, D.C., metropolitan area and Tampa, Florida. We performed our review from May 2006 through May 2007 in accordance with generally accepted government auditing standards.
TITLE 10 U.S.C. §118. Quadrennial Defense Review
(a) Review required.—The Secretary of Defense shall every four years, during a year following a year evenly divisible by four, conduct a comprehensive examination (to be known as a "quadrennial defense review") of the national defense strategy, force structure, force modernization plans, infrastructure, budget plan, and other elements of the defense program and policies of the United States with a view toward determining and expressing the defense strategy of the United States and establishing a defense program for the next 20 years. Each such quadrennial defense review shall be conducted in consultation with the Chairman of the Joint Chiefs of Staff.
(b) Conduct of review.—Each quadrennial defense review shall be conducted so as—
(1) to delineate a national defense strategy consistent with the most recent National Security Strategy prescribed by the President pursuant to section 108 of the National Security Act of 1947 (50 U.S.C. 404a);
(2) to define sufficient force structure, force modernization plans, infrastructure, budget plan, and other elements of the defense program of the United States associated with that national defense strategy that would be required to execute successfully the full range of missions called for in that national defense strategy;
(3) to identify (A) the budget plan that would be required to provide sufficient resources to execute successfully the full range of missions called for in that national defense strategy at a low-to-moderate level of risk, and (B) any additional resources (beyond those programmed in the current future-years defense program) required to achieve such a level of risk; and
(c) Assessment of risk.—The assessment of risk for the purposes of subsection (b) shall be undertaken by the Secretary of Defense in consultation with the Chairman of the Joint Chiefs of Staff. That assessment shall define the nature and magnitude of the political, strategic, and military risks associated with executing the missions called for under the national defense strategy.
(d) Submission of QDR to Congressional committees.—The Secretary shall submit a report on each quadrennial defense review to the Committees on Armed Services of the Senate and the House of Representatives. The report shall be submitted in the year following the year in which the review is conducted, but not later than the date on which the President submits the budget for the next fiscal year to Congress under section 1105(a) of title 31. The report shall include the following:
(1) The results of the review, including a comprehensive discussion of the national defense strategy of the United States, the strategic planning guidance, and the force structure best suited to implement that strategy at a low-to-moderate level of risk.
(2) The assumed or defined national security interests of the United States that inform the national defense strategy defined in the review.
(3) The threats to the assumed or defined national security interests of the United States that were examined for the purposes of the review and the scenarios developed in the examination of those threats.
(4) The assumptions used in the review, including assumptions relating to— (A) the status of readiness of United States forces; (B) the cooperation of allies, mission-sharing and additional benefits to and burdens on United States forces resulting from coalition operations; (C) warning times; (D) levels of engagement in operations other than war and smaller-scale contingencies and withdrawal from such operations and contingencies; and (E) the intensity, duration, and military and political end-states of conflicts and smaller-scale contingencies.
(5) The effect on the force structure and on readiness for high-intensity combat of preparations for and participation in operations other than war and smaller-scale contingencies.
(6) The manpower and sustainment policies required under the national defense strategy to support engagement in conflicts lasting longer than 120 days.
(7) The anticipated roles and missions of the reserve components in the national defense strategy and the strength, capabilities, and equipment necessary to assure that the reserve components can capably discharge those roles and missions.
(8) The appropriate ratio of combat forces to support forces (commonly referred to as the "tooth-to-tail" ratio) under the national defense strategy, including, in particular, the appropriate number and size of headquarters units and Defense Agencies for that purpose.
(9) The strategic and tactical air-lift, sea-lift, and ground transportation capabilities required to support the national defense strategy.
(10) The forward presence, pre-positioning, and other anticipatory deployments necessary under the national defense strategy for conflict deterrence and adequate military response to anticipated conflicts.
(11) The extent to which resources must be shifted among two or more theaters under the national defense strategy in the event of conflict in such theaters.
(12) The advisability of revisions to the Unified Command Plan as a result of the national defense strategy.
(13) The effect on force structure of the use by the armed forces of technologies anticipated to be available for the ensuing 20 years.
(14) The national defense mission of the Coast Guard.
(15) Any other matter the Secretary considers appropriate.
(e) CJCS review.—(1) Upon the completion of each review under subsection (a), the Chairman of the Joint Chiefs of Staff shall prepare and submit to the Secretary of Defense the Chairman's assessment of the review, including the Chairman's assessment of risk.
(2) The Chairman shall include as part of that assessment the Chairman's assessment of the assignment of functions (or roles and missions) to the armed forces, together with any recommendations for changes in assignment that the Chairman considers necessary to achieve maximum efficiency of the armed forces. In preparing the assessment under this paragraph, the Chairman shall consider (among other matters) the following: (A) unnecessary duplication of efforts among the armed forces. (B) changes in technology that can be applied effectively to warfare.
(3) The Chairman's assessment shall be submitted to the Secretary in time for the inclusion of the assessment in the report. The Secretary shall include the Chairman's assessment, together with the Secretary's comments, in the report in its entirety.
This appendix provides a summary of changes to the Quadrennial Defense Review (QDR) legislation (10 U.S.C. §118) as a result of the John Warner National Defense Authorization Act for Fiscal Year 2007. The new requirements will be in effect when the Department of Defense issues its next quadrennial review in 2010. The QDR should make recommendations that are not constrained to comply with the budget submitted to Congress by the President. The review shall include the following new reporting elements: the specific capabilities, including the general number and type of specific military platforms, needed to achieve the strategic and warfighting objectives identified in the review; and the homeland defense and support to civil authority missions of the active and reserve components, including the organization and capabilities required for the active and reserve components to discharge each such mission. The Chairman shall describe the capabilities needed to address the risk that he identified in his risk assessment.
The Secretary of Defense shall establish an independent panel to conduct an assessment of the QDR not later than 6 months before the date on which the QDR will be submitted. Not later than 3 months after the date on which the QDR is submitted, the panel shall submit an assessment of the review, including the review's recommendations, the stated and implied assumptions incorporated in the review, and the vulnerabilities of the strategy and force structure underlying the review. The panel's assessment shall include analyses of the trends, asymmetries, and concepts of operations that characterize the military balance with potential adversaries, focusing on the strategic approaches of possible opposing forces.
In addition to the contact name above, Margaret Morgan, Assistant Director; Deborah Colantonio; Alissa Czyz; Nicole Harms; Elizabeth Morris; Brian Pegram; Rebecca Shea; and John Townes made major contributions to this report.
Tactical Aircraft: DOD Needs a Joint and Integrated Investment Strategy. GAO-07-415. Washington, D.C.: April 2, 2007.
Best Practices: An Integrated Portfolio Management Approach to Weapon System Investments Could Improve DOD's Acquisition Outcomes. GAO-07-388. Washington, D.C.: March 30, 2007.
United States Government Accountability Office: Supporting the Congress through Oversight, Insight, and Foresight. GAO-07-644T. Washington, D.C.: March 21, 2007.
Fiscal Stewardship and Defense Transformation. GAO-07-600CG. Washington, D.C.: March 8, 2007.
Homeland Security: Applying Risk Management Principles to Guide Federal Investments. GAO-07-386T. Washington, D.C.: February 7, 2007.
Military Personnel: DOD Needs to Provide a Better Link between Its Defense Strategy and Military Personnel Requirements. GAO-07-397T. Washington, D.C.: January 30, 2007.
Force Structure: Joint Seabasing Would Benefit from a Comprehensive Management Approach and Rigorous Experimentation before Services Spend Billions on New Capabilities. GAO-07-211. Washington, D.C.: January 26, 2007.
Force Structure: Army Needs to Provide DOD and Congress More Visibility Regarding Modular Force Capabilities and Implementation Plans. GAO-06-745. Washington, D.C.: September 6, 2006.
Force Structure: DOD Needs to Integrate Data into Its Force Identification Process and Examine Options to Meet Requirements for High-Demand Support Forces. GAO-06-962. Washington, D.C.: September 5, 2006.
DOD Acquisition Outcomes: A Case for Change. GAO-06-257T. Washington, D.C.: November 15, 2005.
Defense Management: Additional Actions Needed to Enhance DOD's Risk-Based Approach for Making Resource Decisions. GAO-06-13. Washington, D.C.: November 15, 2005.
DOD's High-Risk Areas: Successful Business Transformation Requires Sound Strategic Planning and Sustained Leadership. GAO-05-520T. Washington, D.C.: April 13, 2005.
Military Personnel: DOD Needs to Conduct a Data-Driven Analysis of Active Military Personnel Levels Required to Implement the Defense Strategy. GAO-05-200. Washington, D.C.: February 1, 2005.
21st Century Challenges: Reexamining the Base of the Federal Government. GAO-05-325SP. Washington, D.C.: February 1, 2005.
High-Risk Series: An Update. GAO-05-207. Washington, D.C.: January 1, 2005.
Results-Oriented Cultures: Implementation Steps to Assist Mergers and Organizational Transformations. GAO-03-669. Washington, D.C.: July 2, 2003.
Quadrennial Defense Review: Future Reviews Can Benefit from Better Analysis and Changes in Timing and Scope. GAO-03-13. Washington, D.C.: November 4, 2002.
A Model of Strategic Human Capital Management. GAO-02-373SP. Washington, D.C.: March 15, 2002.
Quadrennial Defense Review: Opportunities to Improve the Next Review. GAO/NSIAD-98-155. Washington, D.C.: June 25, 1998.
Quadrennial Defense Review: Some Personnel Cuts and Associated Savings May Not Be Achieved. GAO/NSIAD-98-100. Washington, D.C.: April 30, 1998.
Combating Terrorism: Threat and Risk Assessments Can Help Prioritize and Target Program Investments. GAO/NSIAD-98-74. Washington, D.C.: April 9, 1998.
Bottom-Up Review: Analysis of DOD War Game to Test Key Assumptions. GAO/NSIAD-96-170. Washington, D.C.: June 21, 1996.
Bottom-Up Review: Analysis of Key DOD Assumptions. GAO/NSIAD-95-56. Washington, D.C.: January 31, 1995.
The Department of Defense (DOD) is required by law to conduct a comprehensive examination of the national defense strategy, force structure, modernization plans, infrastructure, and budget every 4 years, including an assessment of the force structure best suited to implement the defense strategy at a low-to-moderate level of risk. The 2006 Quadrennial Defense Review (QDR), completed in February 2006, represents the first comprehensive review that DOD had undertaken since military forces became engaged in operations in Iraq and Afghanistan. GAO was asked to assess (1) the strengths and weaknesses of DOD's approach and methodology for the 2006 QDR and (2) what changes, if any, in the QDR legislation could improve the usefulness of the report, including any changes that would better reflect 21st century security conditions. To conduct its review, GAO analyzed DOD's methodology, QDR study guidance, and results from key analyses and also obtained views of defense analysts within and outside of DOD.
DOD's approach and methodology for the 2006 QDR had several strengths, but several weaknesses significantly limited the review's usefulness in addressing force structure, personnel requirements, and risk associated with executing the national defense strategy. Key strengths of the QDR included sustained involvement of senior DOD officials, extensive collaboration with interagency partners and allied countries, and a database to track implementation of initiatives. However, GAO found weaknesses in three key areas. First, DOD did not conduct a comprehensive, integrated assessment of different options for organizing and sizing its forces to provide needed capabilities. Without such an assessment, DOD is not well positioned to balance capability needs and risks within future budgets, given the nation's fiscal challenges. Second, DOD did not provide a clear analytical basis for its conclusion that it had the appropriate number of personnel to meet current and projected demands. During its review, DOD did not consider changing personnel levels and instead focused on altering the skill mix. However, a year after the QDR report was issued, DOD announced plans to increase Army and Marine Corps personnel by 92,000. Without performing a comprehensive analysis of the number of personnel it needs, DOD cannot provide an analytical basis to show that its military and civilian personnel levels reflect the number of personnel needed to execute the defense strategy. Third, the risk assessments conducted by the Secretary of Defense and the Chairman of the Joint Chiefs of Staff, which are required by the QDR legislation, did not fully apply DOD's risk management framework because DOD had not developed assessment tools to measure risk.
Without a sound analytical approach to assessing risk, DOD may not be able to demonstrate how it will manage risk within current and expected resource levels. As a result, DOD is not in the best position to demonstrate that it has identified the force structure best suited to implement the defense strategy at low-to-moderate risk.
Through discussions with DOD officials and defense analysts, GAO has identified several options for refining the QDR legislative language that Congress could consider to improve the usefulness of future QDRs, including changes to encourage DOD to focus on high-priority strategic issues and better reflect security conditions of the 21st century. Congress could consider options to clarify its expectations regarding what budget information DOD should include in the QDR and eliminate reporting elements for issues that could be addressed in different reports. For example, the assessment of revisions to the unified command plan is also required and reported under other legislation. Further, some reporting elements, such as how resources would be shifted between two conflicts, could be eliminated in light of DOD's new planning approach that focuses on capabilities to meet a range of threats rather than on the allocation of forces for specific adversaries. GAO also presents an option to have an advisory group work with DOD prior to and during the QDR to provide DOD with alternative perspectives and analyses.
GPRA is intended to shift the focus of government decisionmaking, management, and accountability from activities and processes to the results and outcomes achieved by federal programs. New and valuable information on the plans, goals, and strategies of federal agencies has been provided since federal agencies began implementing GPRA. Under GPRA, annual performance plans are to clearly inform the Congress and the public of (1) the annual performance goals for agencies' major programs and activities, (2) the measures that will be used to gauge performance, (3) the strategies and resources required to achieve the performance goals, and (4) the procedures that will be used to verify and validate performance information. These annual plans, which are issued soon after transmittal of the president's budget, provide a direct linkage between an agency's longer-term goals and mission and day-to-day activities. Subsequent annual performance reports show the degree to which performance goals were met. The issuance of the agencies' performance reports, due by March 31, represents a new and potentially more substantive phase in the implementation of GPRA—the opportunity to assess federal agencies' actual performance for the prior fiscal year and to consider what steps are needed to improve performance and to reduce costs in the future.
NRC is responsible for ensuring that those who use radioactive material in the generation of electricity, for experiments in universities, and for such medical uses as treating cancer do so in a manner that protects the public, the environment, and workers. NRC has issued licenses to 103 operating commercial nuclear power plants and 10 facilities that produce fuel for these plants. In addition, NRC or the 32 states that have agreements with NRC regulate almost 21,000 entities. In the medical field alone, licensees annually perform an estimated 10 million to 12 million procedures that involve radioactive material in the diagnosis or treatment of diseases. NRC is confronting a number of challenges to ensure the safe operation of commercial nuclear power plants, safe use of nuclear material, and safe disposal of radioactive waste. NRC has been moving from its traditional regulatory approach, which was largely developed without the benefit of quantitative estimates of risk, to a more risk-informed, performance-based approach. Under this approach, NRC will use risk assessment findings, engineering analysis, and performance history to focus attention on the most important safety-related activities, establish objective criteria to evaluate performance, develop measures to assess licensees' performance, and focus more on results as the primary basis for making regulatory decisions.
This section discusses our analysis of NRC's performance in achieving its selected key outcomes and existing strategies, particularly for strategic human capital management and information technology, for achieving these outcomes. In discussing these outcomes, we have also provided information drawn from our prior work on the extent to which NRC provided assurances that the performance information it is reporting is credible. In its fiscal year 2000 performance report, NRC said that it had met its goal and targets for the safety-related performance outcomes related to civilian nuclear reactor safety.
Although NRC's strategies to achieve its safety-related performance outcomes seem clear and reasonable, we could not assess its performance for the three nonsafety performance goals because NRC only recently reported measures to achieve them in its fiscal year 2002 performance plan. However, since NRC has had limited experience in applying the strategies and measures for the three nonsafety goals, it may need to revise them after it completes various planned program evaluations.
Like other federal agencies, NRC faces strategic human capital management and other challenges that could affect its ability to achieve its future goals. In a highly technical, complex industry, NRC is facing the loss of a significant percentage of its senior managers and technical staff. For example, within the Office of Nuclear Reactor Regulation, about 22 percent of the technical staff and 16 percent of senior executive service staff are eligible to retire now, and by 2005, the percentages eligible for some type of retirement will be about 42 percent and 77 percent, respectively. At the same time, NRC will need to rely on these staff to achieve its strategic and performance goals. To help resolve its strategic human capital management challenge, NRC identified such options as allowing it to rehire retired staff without jeopardizing their pensions. In addition, for the nuclear reactor safety key outcome, NRC is implementing an intern program to attract and retain individuals with scientific, engineering, and other technical competencies.
Another major challenge will be for NRC to demonstrate that it meets one of its four performance goals—increasing public confidence—for three reasons. First, to ensure its independence, NRC cannot promote nuclear power and must walk a fine line when communicating with the public. Second, NRC has not defined the public that it wants to target in achieving this goal. Third, NRC has not established a baseline to measure the "increase" in its performance goal. As we reported last year, the Commission did not approve a staff proposal to conduct a survey to establish a baseline. Instead, in October 2000, NRC began an 18-month pilot effort to use feedback at the conclusion of public meetings. NRC expects to semiannually evaluate the information received to enhance its public outreach efforts. NRC's evaluation of feedback from public meetings will provide information on the extent of public awareness of the meeting and the clarity, completeness, and thoroughness of the information that NRC provided at the meetings. Over time, for a particular plant, NRC may find that the public better understands the issues of concern or interest. It is not clear, however, how this information will show that the public's confidence in NRC as a regulator has increased.
In addition, the Office of Nuclear Reactor Regulation began a 1-year effort in October 2000 to assess the effectiveness of NRC's program that verifies allegations concerning regulated activities and the impact of the program on public confidence. NRC has been asking whether an individual's experience with the program has increased his or her confidence in NRC as a regulator. NRC believes that such information will provide it a baseline to judge the contribution that the allegation program makes to meeting its public confidence goal. Like the feedback from public meetings discussed above, the feedback from those who participate in the allegation program will be limited.
For example, in fiscal year 2000, NRC received 468 reactor-related allegations and estimates that it will receive 370 in fiscal year 2001. Therefore, the baseline data that NRC accumulates will be limited to a very small percentage of the public.
Although program evaluations would help determine the validity and reasonableness of NRC's key outcomes, goals, and strategies and identify the factors that are likely to affect their achievement, NRC did not complete any evaluations in the key outcome of nuclear reactor safety in fiscal year 2000. NRC would benefit from such evaluations because the actions of its licensees and industry organizations have a significant impact on the extent to which NRC will achieve its strategic and performance goals for this key outcome and because NRC cannot show a one-to-one relationship between the performance of its licensees and the impact that the agency's programs have on safety. According to NRC staff, no one program evaluation will test its strategic direction for this and other key outcomes. Rather, NRC expects to conduct a number of evaluations that, over time, should provide insights on whether a need exists to change its strategic direction. For example, by the end of June 2001, NRC expects to complete one program evaluation related to this key outcome—an assessment of its first year of implementing the new safety oversight process for commercial nuclear power plants. The new safety oversight process has been the centerpiece in NRC's efforts to move to a risk-informed, performance-based regulatory approach. NRC believes that the evaluation will help determine whether it will meet its four performance goals, but as discussed earlier, we have doubts that the evaluation will determine whether NRC will meet its increasing public confidence goal because it will not have the baseline data needed for the evaluation. In addition, an NRC advisory panel concluded in May 2001 that the agency did not have the necessary data to evaluate the new safety oversight process against the performance goals.
NRC's strategies to ensure that the commercial nuclear power plants continue to operate safely appear clear and reasonable. For example, NRC expects to improve its inspection activities to better assess the safety performance of the nation's 103 operating nuclear power plants. Other strategies include resolving such safety issues as age-related plant degradation, ensuring that plant operator licenses are issued to and renewed only for qualified individuals, and continuing to develop and incrementally use risk-informed, and where appropriate, less prescriptive performance-based regulatory approaches. For its newly developed strategies for the three nonsafety goals, NRC may need to revise them and/or specify how some strategies will help achieve its desired outcomes. For example, one strategy to make its activities more effective, efficient, and realistic is to anticipate challenges posed by the introduction of new technologies and changing regulatory demands. Without further amplification, it is difficult to see how this strategy will result in more effective, efficient, and realistic NRC activities and decisions.
NRC reported that it had improved its performance in fiscal year 2000 compared with its performance in fiscal year 1999 for the safety-related performance outcomes for this key outcome.
However, NRC has concerns about the quality of its performance data for 10 measures related to this key outcome and noted that the actual data reported for some of the safety performance goal measures are subject to change on the basis of further analysis and the receipt of newly reported information. NRC's strategies to achieve its safety-related performance goal outcomes seem clear and reasonable. But we could not assess its performance for the three nonsafety performance goals because NRC only recently reported the measures to achieve them in its fiscal year 2002 performance plan. As with the nuclear reactor safety key outcome, NRC has had limited experience in applying the strategies and measures for the three nonsafety goals. As a result, it may need to revise them after it completes various planned program evaluations.
Although NRC has set more realistic performance targets for this key outcome, it continues to set others that are easily achievable and do not challenge or stretch its staff to improve their performance. On the basis of more complete historical data, NRC revised some of its performance targets. The same analysis showed that in some areas, actual nuclear material licensees' performance was much better than NRC's targeted performance. Table 1 shows some of NRC's performance goal measures for the nuclear material safety key outcome and compares its actual performance in fiscal years 1999 and 2000 with the targets for fiscal year 2002.
As noted above in the nuclear reactor safety key outcome, NRC faces strategic human capital management and other challenges that could impair its ability to accomplish its goals. During this period of potentially very high attrition, the Office of Nuclear Material Safety and Safeguards will be challenged to implement a risk-informed regulatory approach for a large number of diverse licensees. As part of its strategy to address this challenge, NRC is implementing an intern program to attract and retain individuals with scientific, engineering, and other technical competencies.
As it did with the nuclear reactor safety key outcome, NRC did not complete any program evaluations in fiscal year 2000 for the key outcome of nuclear material safety. NRC expects to complete one program evaluation in June 2001. The evaluation will address redefining NRC's role in an environment where an increasing number of states are entering into agreements with NRC to regulate material licensees within their borders (agreement states). As of September 2000, 32 states had such agreements with NRC, and by 2004, NRC anticipates that 35 states will have such agreements and that the states will oversee more than 80 percent of all material licensees. Such a large shift of responsibility over time from NRC to the agreement states could have significant budgetary and other implications for NRC. The program evaluation will consider such issues as the roles and legal responsibilities of NRC, the agreement states, and others; the need for statutory changes; and the resources needed. This program evaluation should help determine whether NRC will meet one of its four performance goals—maintain safety—but is not likely to provide information to assess the impact on NRC's three nonsafety performance goals.
For example, it is unlikely that a useful assessment can be made of the “improve the efficiency and effectiveness of NRC’s activities” performance goal when the evaluation will not address such questions as the following: Would NRC continue to need staff in all four of its regional offices as the number of agreement states increases? And what are the appropriate number, type, and skills needed for its headquarters staff? In commenting on a draft of this report, NRC said that program evaluations are to assess the manner and extent to which programs achieve their intended objectives and to assess program implementation policies, practices, and processes. NRC’s strategies to ensure that licensees use nuclear material safely appear clear and reasonable. For example, NRC will continue to focus on the relative risk of licensees' activities to determine the appropriate level of oversight, determine that licensees’ activities are consistent with regulatory requirements, and respond to operational events that have potential safety or safeguards consequences. For its newly developed strategies for the three nonsafety goals, NRC may need to revise them and/or specify how some strategies will help achieve its desired outcomes. For example, one strategy is to improve the regulatory framework to increase NRC’s effectiveness and efficiency. Without further amplification on how NRC expects to improve the regulatory framework, it is difficult to determine how this strategy will result in more effective and efficient NRC activities and decisions. NRC reported that it had met the safety-related performance outcomes for this key outcome in fiscal year 2000. Although NRC’s performance and strategies for achieving the safety-related goal for this key outcome appear reasonable, as with the other two key outcomes, we could not assess NRC’s performance relative to the three nonsafety goals for which NRC did not have performance measures. In addition, to ensure that NRC can meet the strategies, goals, and measures, it will have to follow through on its plans to attract and retain individuals with the competencies and skills needed to carry out its mission. On the basis of our prior work, we believe that NRC’s achieving some of its strategies and performance goals in this key outcome may be affected by such external factors as the standards that the Environmental Protection Agency (EPA) eventually issues on the level of residual radiation that can safely remain at a nuclear power plant site after licensees complete their decommissioning activities as well as the recently issued standards for the Department of Energy’s potential high-level waste repository at Yucca Mountain, Nevada. EPA started to develop residual radiation standards in 1984 but has not yet finalized them. Currently, licensees are using standards that NRC issued in 1997. If NRC’s licensees are ultimately required to comply with EPA standards, which are more restrictive than NRC’s, the licensees may have to perform additional cleanup activities and incur additional costs. Likewise, NRC's success may be affected by EPA's final rule on the environmental radiation protection standards for Yucca Mountain. The rule, published in the Federal Register on June 13, 2001, includes a separate limit for groundwater. NRC, along with such others as the National Academy of Sciences, does not believe that a scientific basis exists for establishing the separate limit. 
Nevertheless, in commenting on a draft of this report, NRC said that it will implement EPA's standards for Yucca Mountain.
Although program evaluations are helpful and important, NRC did not complete any such evaluations related to the nuclear waste safety key outcome in fiscal year 2000. However, NRC expects to evaluate ongoing and planned changes related to its decommissioning program for nuclear power plants and other radioactively contaminated sites in fiscal year 2003. In doing so, NRC expects to assess its various decommissioning initiatives, determine whether it has achieved all four performance goals, identify deviations from its performance goals, and determine whether a need exists for NRC to change its goals, strategies, or measures related to this key outcome. If NRC meets these objectives, the information should help determine the validity and reasonableness of the agency's goals and strategies for this key outcome.
NRC's strategies appear reasonable and clearly discuss how the agency plans to meet its fiscal year 2002 safety-related goals. For example, NRC expects to evaluate new research and safety information as well as international programs and licensees' operational experience to improve its regulation of nuclear waste activities. NRC says that it will also keep pace with the nation's high-level waste program to ensure that it can meet the time frame established by legislation when deciding to license a geological repository. For its newly developed strategies for the three nonsafety goals, NRC may need to revise them and/or specify how some strategies will help achieve its desired outcomes. As with the nuclear material safety outcome, one strategy is to improve the regulatory framework to increase NRC's effectiveness and efficiency. Again, however, without further amplification on how NRC expects to improve the regulatory framework, it is difficult to determine how this strategy will result in more effective and efficient NRC activities and decisions.
For the selected key outcomes, this section describes major improvements or remaining weaknesses in NRC's (1) fiscal year 2000 performance report in comparison with its fiscal year 1999 report and (2) fiscal year 2002 performance plan in comparison with its fiscal year 2001 plan. It also discusses the degree to which NRC's fiscal year 2000 report and fiscal year 2002 plan address concerns and recommendations by the Congress, GAO, the Inspectors General, and others.
NRC made a number of improvements to its fiscal year 2000 performance report. For example, NRC used final and finite data for its performance measures rather than preliminary data as it had for some measures last year. In its fiscal year 1999 performance report, NRC used preliminary data for three nuclear reactor safety measures: no more than one event that could lead to a severe accident, no significant radiation exposures resulting from nuclear power plants, and no deaths resulting from radiation or radiation releases from nuclear plant operations. NRC designated the data as preliminary because the Commission had not approved their release to the public. In its fiscal year 2000 report, NRC used final data. According to NRC staff, they would be aware of an event, release, or death by the end of the fiscal year and before the Commission approved releasing the data. Therefore, NRC concluded that it did not need to show this information as "preliminary" in the fiscal year 2000 performance report.
In addition, NRC previously used a combined 5-year average as its target for some performance measures. NRC now uses an annual value, which will better allow the Congress and others to assess its performance in a particular fiscal year. In addition, NRC included information to address the requirements of the Reports Consolidation Act of 2000. The act requires agency heads to assess the completeness and reliability of the data used in their fiscal year 2000 performance reports. The Office of Management and Budget (OMB) issued draft guidance describing how agencies should assess the completeness and reliability of data. NRC’s performance report discusses these two data related issues. In its fiscal year 2000 performance report, NRC says that its performance data are complete, noting that it has reported actual or preliminary data for every strategic and performance measure, and reliable because its managers and decision makers use the data in the normal course of their duties. NRC discusses data quality in its fiscal year 2000 performance report and refers to its fiscal year 2002 performance plan for details on its efforts to ensure that its performance data are credible. NRC’s performance plan for fiscal year 2002 differs in several significant ways from its predecessor. First, NRC followed through on its commitment to establish measures for three of its performance goals. In its fiscal year 2001 performance plan, NRC established measures for the “maintain safety” performance goal only, saying that it would develop measures for the three nonsafety performance goals—increase public confidence; reduce unnecessary regulatory burden; and enhance the effectiveness, efficiency, and realism of its activities and decisions—for the fiscal year 2002 plan. NRC has done so and now shows measures for all four performance goals. NRC also links each performance measure to a specific performance goal. Second, NRC provided greater details on how it ensures the credibility of the data used to assess its performance in achieving its strategic and performance goal measures. As noted in prior reports on NRC’s performance plans, the credibility of its performance data is an issue that has concerned us for several years. Now, NRC links each strategic and performance goal measure to the data source and the automated system in which the data are collected and stored. NRC also described its process to ensure that the data were valid and reliable. For example, to verify the data used to determine whether it has achieved the “no more than one event per year identified as a significant precursor of a nuclear accident” performance measure, NRC evaluates nuclear power plants' operating experience and identifies those events that were the most safety significant. NRC describes each step taken in its evaluation process. In those cases where NRC identified data limitations, it described the actions it had taken to address the limitations. For example, NRC highlighted its concerns with the credibility of the data used to assess its achievements in the key outcome of nuclear material safety. In commenting on a draft of this report, NRC noted that this key outcome includes over 15,000 licensees administered by the agreement states and that NRC relies on the agreement states to collect performance data related to them. NRC also said that it has provided training for the states and its own staff on the database used to collect the information and data collection procedures. 
It is also developing an internal policy to ensure continued improvements in the performance data reported to the Congress.
Third, NRC described the actions it has taken to address the management challenges that we and its Office of the Inspector General identified. NRC's fiscal year 2002 performance plan includes an appendix that describes its ongoing and planned actions to address these management challenges. NRC also relates each challenge to its strategic and performance goals and strategies. NRC did not include comparable information in its fiscal year 2001 performance plan.
Finally, NRC addressed three governmentwide performance goals as directed by OMB in March 2001: (1) using performance-based contracts for at least 20 percent of all service contracts over $25,000; (2) expanding the use of on-line procurement methods by posting acquisitions of over $25,000 to www.FedBizOpps.gov; and (3) completing studies to determine whether it is more cost-effective to have commercial activities performed in-house by its staff or outsourced. In September 2000, we reported that NRC identified 783 full-time equivalent employees performing activities that are exempt from OMB's cost comparison requirements. NRC discusses its efforts to meet the three governmentwide reforms and believes that it has satisfied OMB's requirements for various reasons. For example, its management strategy to "employ innovative and sound business practices" includes efforts to make greater use of performance-based contracts. NRC participated in a task group that developed the Best Practices Guide on Performance-Based Service Contracting, which the Office of Federal Procurement Policy published for use by other federal agencies. In addition, NRC believes that the same management strategy will help it increase the use of competition and ensure more accurate Federal Activities Inventory Reform Act inventories.
Despite these enhancements over its fiscal year 2001 performance plan, we identified an area warranting improvement and additional attention. NRC says it will provide proactive information management and information technology services by working with its program and support offices and by providing reliable and easy-to-use systems for internal and external stakeholders. Although NRC's fiscal year 2002 performance plan sets targets to meet its information technology objectives, it does not address how it expects to verify and validate the data. As a result, we have no assurance that the measures can be used reliably to gauge the effectiveness of NRC's information technology performance or as a basis for making program decisions and revisions. According to its staff, NRC only describes how it verifies and validates performance goal data that are reported to the Congress. Since it only has output measures (that are not reported to the Congress) for information technology, NRC does not describe how it verifies and validates the data related to them.
For the three major management challenges that GAO identified, NRC's fiscal year 2000 performance report discussed its progress in resolving two challenges, but it did not discuss the agency's progress in resolving the challenge related to managing the agency—strategic human capital management, financial management, and information technology. However, in its fiscal year 2002 performance plan, NRC identified management strategies to address this challenge. GAO identified two governmentwide high-risk areas: strategic human capital management and information security.
Regarding strategic human capital management, NRC's performance report for fiscal year 2000 did not explain its progress in resolving this challenge, and its performance plan for fiscal year 2002 did not have goals and measures related to it. However, in its fiscal year 2002 performance plan, NRC included a management strategy to sustain a high-performing, diverse workforce. To achieve this strategy, NRC says that it will base human resource decisions on sound workforce planning and analysis. In this regard, in January 2001, the staff provided the Commission with a suggested action plan—a 5-year, $2.4 million effort to maintain the core competencies, knowledge, and skills needed by NRC. NRC has also taken the initiative and identified options to attract new employees with critical skills, developed training programs to meet its changing needs, and identified legislative options to help resolve its aging staff issue. As we recently testified, continued oversight of NRC's multiyear effort is needed to ensure that it is being properly implemented and is effective in achieving its goals.
With respect to information security, NRC has no goal, strategy, or measure to resolve this challenge agencywide, and its fiscal year 2000 performance report did not explain its progress in resolving it. NRC staff acknowledged the lack of an agencywide goal, strategy, or measure but noted that the support office responsible for information security has developed a management strategy and output measure for its own use in addressing this issue. Since the output measure is not applicable to the entire agency and NRC did not include one that is in its fiscal year 2002 performance plan, the Congress will have no assurance that NRC is effectively addressing this challenge. In addition, NRC's plan did not address contingency planning to respond to the loss or degradation of essential services because of a problem in an automated system. In general, a contingency plan describes the steps that NRC would take, including the activation of manual processes, to ensure the continuity of its core business processes in the event of a system failure. According to NRC staff, the agency has processes to ensure continuity in the event of a system failure and did not believe that it needed to disclose this information in the fiscal year 2002 performance plan.
For two other major management challenges that GAO identified—resolving numerous issues to implement a risk-informed approach for commercial nuclear power plants and overcoming inherent difficulties to apply a risk-informed approach to nuclear material licensees—NRC established strategies or performance measures that specifically address them. For example, one strategy is to develop and incrementally use risk-informed and, where appropriate, less prescriptive performance-based regulatory approaches to maintain safety.
As agreed, our evaluation was generally based on the requirements of GPRA, the Reports Consolidation Act of 2000, guidance to agencies from OMB for developing performance plans and reports (OMB Circular A-11, Part 2), previous reports and evaluations by us and others, our knowledge of NRC's operations and programs, GAO's identification of best practices concerning performance planning and reporting, and our observations on NRC's other GPRA-related efforts. We also discussed our review with NRC staff in the Office of the Chief Financial Officer, Office of the Executive Director for Operations, and Office of the Inspector General.
The agency outcomes that were used as the basis for our review were identified by the Ranking Minority Member, Senate Governmental Affairs Committee, as important mission areas for NRC and generally reflect the outcomes for almost all of NRC’s programs or activities. The major management challenges confronting NRC, including the governmentwide high-risk areas of strategic human capital management and information security, were identified by GAO in our January 2001 performance and accountability series and high-risk update, and were identified by NRC’s Office of the Inspector General in December 2000. We did not independently verify the information contained in the performance report and plan, although we did draw from other GAO work in assessing the validity, reliability, and timeliness of NRC’s performance data. We conducted our review from April 2001 through June 2001 in accordance with generally accepted government auditing standards. We provided copies of a draft of this report to NRC for its review and comment. NRC provided a number of specific comments, which are presented in appendix II. Although NRC generally agreed with the information presented in the report, it does not agree that its fiscal year 2000 performance report showed mixed progress in achieving the three key outcomes. We revised the report to make it clear that we concluded that NRC's performance was mixed because it did not have measures for three performance goals until it issued its fiscal year 2002 performance plan. NRC also provided technical clarifications, which we incorporated as appropriate. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies to appropriate congressional committees; the Chairman, Nuclear Regulatory Commission; the Commissioners, Nuclear Regulatory Commission; and the Director, Office of Management and Budget. Copies will also be made available to others on request. The following table identifies the major management challenges that confront the Nuclear Regulatory Commission (NRC), including the governmentwide high-risk areas of strategic human capital management and information security. The first column lists the management challenges that we and/or NRC’s Office of the Inspector General (OIG) have identified. The second column discusses the progress, as discussed in its fiscal year 2000 performance report, that NRC has made in resolving its challenges. The third column discusses the extent to which NRC’s fiscal year 2002 performance plan includes performance goals and measures to address the challenges that we and its OIG identified. Overall, we found that NRC’s performance report discussed the agency’s progress in resolving some of its challenges. However, it did not discuss its progress in resolving the following challenges: coping with strategic human capital management, improving its financial management activities, and ensuring that its information technology acquisitions perform as intended; information security; intra-agency communication (up, down, and across agency organizational lines); and regulatory processes that are integrated and continue to meet NRC’s safety mission in a changing external environment. In its fiscal year 2000 performance report, NRC says that the Reports Consolidation Act of 2000 required an assessment by the Inspector General of the agency’s management challenges. 
As a result, NRC staff said they did not discuss each of the management challenges in its performance report but that specific actions and milestones related to the challenges are included in NRC’s fiscal year 2002 performance plan. Of its nine major management challenges, NRC has strategic and performance goals and measures directly related to four; management strategies for four others; but no goal, strategy, or output for one— information security. One GAO management challenge includes three issues—strategic human capital management, financial management, and information technology. For ease of presentation, we discuss each of these issues separately. Table 2 provides information on how NRC addresses the two governmentwide high-risk areas and its major management challenges. The following are GAO's comments on the Nuclear Regulatory Commission's letter dated June 19, 2001. 1. We agree that NRC's fiscal year 2000 performance report shows that it achieved its goals and targets for the safety-related performance goal for the three key outcomes. We revised the report to make it clear that we concluded that NRC's performance was mixed because it did not have measures for three performance goals until it issued its fiscal year 2002 performance plan. Therefore, we could not fully assess NRC's progress in meeting the three key outcomes. 2. We did not make the change suggested by NRC since we wanted to distinguish between safety- and nonsafety performance goals. 3. We revised the report to show the Office of Nuclear Reactor Regulation staff that are eligible to retire now. 4. We did not include the additional information that NRC suggested because the final standards that the Environmental Protection Agency issued on June 13, 2001, relate to the Department of Energy's proposed high-level waste repository at Yucca Mountain, Nevada—not to the residual radiation that can safely remain at a commercial nuclear power plant site after decommissioning. 5. The information presented on pages 3 and 14 are not inconsistent and do not need to be changed. In our comparison of performance plans for fiscal year 2001 and 2002 (p. 14), we note that NRC's fiscal year 2002 performance plan describes the actions that NRC has taken to address the management challenges that we and its Office of the Inspector General identified. This is not inconsistent with our discussion related to the information that was not included in NRC's fiscal year 2000 performance report (p. 3). The information that we discuss relates to two different NRC documents. 6. We did not make the change that NRC recommended because in fiscal year 2001, about 16 percent of NRC staff are eligible to retire, and by the end of fiscal year 2005, about 33 percent will be eligible. In our opinion, NRC's replacing such a large number of staff qualifies as a crisis. In addition, last year, one NRC Commissioner said, "There is a crisis looming in government" because an entire generation of employees is going to retire or will be eligible to retire in the near future. Finally, in January 2001, we identified strategic human capital management as a high-risk area governmentwide. 7. See comment 3. 8. NRC has correctly portrayed one of the overall conclusions of its advisory panel concerning the new safety oversight process; that is, that the process has made progress toward achieving NRC's four performance goals. The advisory panel also concluded that NRC has the necessary elements to evaluate the new oversight process against the performance goals. 
But the panel concluded, as we noted in this report, that NRC did not have the necessary data to evaluate the new safety oversight process against the performance goals. As a result, we did not change the report as NRC suggested. 9. NRC says that the strategy—anticipate the introduction of new technologies and changing regulatory demands—focuses on making NRC's activities more realistic. We will add "realistic" to the information presented. However, NRC provided no further amplification about how the strategy will make NRC's activities more realistic. 10. As NRC noted, its fiscal year 2000 performance report discussed data limitations related to the nuclear material safety key outcome. Since NRC's efforts to provide greater details on how it ensures the credibility of the data used to assess its performance are discussed later in the report, we made no change here as NRC suggested. However, we included some of the information that NRC suggested in the section of the report that compares the fiscal years 2001 and 2002 performance plans. 11. We revised the report to include a broader description of how program evaluations can be used. It should be noted that NRC's programs contribute to achieving its performance measures and ultimately its performance and strategic goals. Therefore, we do not believe that NRC's views and our views are inconsistent. 12. We revised the report to show that the Environmental Protection Agency issued the final rule for the Department of Energy's high-level waste repository at Yucca Mountain on June 13, 2001. We also included the information that NRC recommended. 13. We did not revise the report as NRC suggested because we included the information earlier in the report. (See comment 11.) 14. We corrected the typographical error that NRC identified. 15. We did not delete the information as NRC suggested because we believe it clarifies the conditions under which NRC verifies and validates performance data. 16. We corrected the typographical error that NRC identified. 17. We revised the report as NRC recommended. 18. We revised the report as NRC recommended and included some of the additional information it provided. 19. We revised the report to show that NRC staff have offered recommendations for improving internal communications to the Executive Director for Operations, who will determine the actions that NRC staff should pursue.

This report reviews the Nuclear Regulatory Commission's (NRC) fiscal year 2000 performance report and fiscal year 2002 performance plan required by the Government Performance and Results Act of 1993 to assess its progress in achieving selected key outcomes that are important mission areas for the agency. NRC reports mixed progress in achieving the three outcomes GAO reviewed. To measure performance for the three outcomes, NRC established the same four goals: one relates to safety and three relate to such nonsafety issues as public confidence, regulatory burden, and organizational enhancements. Although NRC's strategies for the safety-related performance goal outcomes seem clear and reasonable, GAO could not assess NRC's performance for the three nonsafety performance goals because NRC only recently developed and reported strategies for them in its fiscal year 2002 performance plan. Because NRC has had little experience in applying the strategies and measures for the three nonsafety goals, it may need to revise them after it completes various planned evaluations during the next three years.
DHS and FEMA have streamlined application and award processes, enhanced the use of risk management principles in their grant programs, and proposed consolidating various grant programs to address grant management concerns. In February 2012, we reported that better coordination and improved data collection could help FEMA identify and mitigate potential unnecessary duplication among four overlapping grant programs—the Homeland Security Grant Program, the Urban Areas Security Initiative, the Port Security Grant Program, and the Transit Security Grant Program. FEMA has proposed changes to enhance preparedness grant management, but these changes may create new challenges. Since its creation in April 2007, FEMA's Grant Programs Directorate (GPD) has been responsible for the program management of DHS's preparedness grants. GPD consolidated the grant business operations, systems, training, policy, and oversight of all FEMA grants and the program management of preparedness grants into a single entity. GPD works closely with other DHS entities to manage grants, as needed, through the grant life cycle, shown in figure 1. For example, GPD works with the U.S. Coast Guard for the Port Security Grant Program and the Transportation Security Administration for the Transit Security Grant Program. Since 2006, DHS has taken a number of actions to improve its risk-based grant allocation methodology. Specifically, in March 2008, we reported that DHS had adopted a more sophisticated risk-based grant allocation approach for the Urban Areas Security Initiative—one that included empirical analytical methods and policy judgments—to (1) determine both states' and urban areas' potential risk relative to other areas, and (2) assess and score the effectiveness of the proposed investments submitted by the eligible applicants and determine the final amount of funds awarded. We also reported that DHS's risk model for the Urban Areas Security Initiative could be strengthened by measuring variations in vulnerability. Specifically, we reported that DHS had held vulnerability constant, which limited the model's overall ability to assess risk and more precisely allocate funds. Accordingly, we recommended that DHS and FEMA formulate a method to measure vulnerability in a way that captures variations in vulnerability, and apply this vulnerability measure in future iterations of this risk-based grant allocation model. DHS concurred with our recommendations and FEMA took actions to enhance its approaches for assessing and incorporating vulnerability into risk assessment methodologies for this program. Specifically, FEMA created a risk assessment that places greater weight on threat and calculates the contribution of vulnerability and consequence separately. In June 2009, we reported that DHS used a risk analysis model to allocate Transit Security Grant Program funding and awarded grants to higher-risk transit agencies using all three elements of risk—threat, vulnerability, and consequence. Accordingly, we recommended that DHS formulate a method to measure vulnerability in a way that captures variations in vulnerability, and apply this vulnerability measure in future iterations of this risk-based grant allocation model. DHS concurred with our recommendations and FEMA took actions to enhance its approach for assessing and incorporating vulnerability into risk assessment methodologies for this program.
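To make the weighting idea above concrete, the sketch below shows one way a risk score that weights threat more heavily and treats vulnerability and consequence as separate, additive contributions could be computed and then used to apportion a notional grant pool. This is only an illustrative sketch: the weights, field names, and proportional allocation rule are assumptions chosen for the example and do not represent DHS's or FEMA's actual risk model.

```python
# Hypothetical illustration of a weighted, risk-based grant allocation.
# Weights, data, and the allocation rule are assumptions, not DHS's or FEMA's model.
from dataclasses import dataclass

@dataclass
class Area:
    name: str
    threat: float         # normalized 0-1
    vulnerability: float  # normalized 0-1
    consequence: float    # normalized 0-1

# Assumed weights: threat counts more than vulnerability or consequence,
# and vulnerability and consequence contribute separately (not multiplied).
W_THREAT, W_VULNERABILITY, W_CONSEQUENCE = 0.5, 0.2, 0.3

def risk_score(a: Area) -> float:
    """Weighted sum of the three risk elements for one area."""
    return (W_THREAT * a.threat
            + W_VULNERABILITY * a.vulnerability
            + W_CONSEQUENCE * a.consequence)

def allocate(areas: list[Area], pool: float) -> dict[str, float]:
    """Split a notional grant pool in proportion to each area's risk score."""
    scores = {a.name: risk_score(a) for a in areas}
    total = sum(scores.values())
    return {name: pool * score / total for name, score in scores.items()}

if __name__ == "__main__":
    areas = [Area("Urban Area A", threat=0.9, vulnerability=0.6, consequence=0.8),
             Area("Urban Area B", threat=0.4, vulnerability=0.7, consequence=0.5)]
    print(allocate(areas, 100_000_000))  # notional $100 million pool
```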
In November 2011, we reported that DHS had made modifications to enhance the Port Security Grant Program's risk assessment model's vulnerability element for fiscal year 2011. Specifically, DHS modified the vulnerability equation to recognize that different ports have different vulnerability levels. We also reported that FEMA had taken actions to streamline the Port Security Grant Program's management efforts. For example, FEMA shortened application time frames by requiring port areas to submit specific project proposals at the time of grant application. According to FEMA officials, this change was intended to expedite the grant distribution process. Further, we reported that to speed the process, DHS took actions to reduce delays in environmental reviews, increased the number of GPD staff working on the Port Security Grants, revised and streamlined grant application forms, and developed time frames for review of project documentation. Despite these continuing efforts to enhance preparedness grant management, we identified multiple factors in our February 2012 report that contributed to the risk of FEMA potentially funding unnecessarily duplicative projects across the four grant programs we reviewed—the Homeland Security Grant Program, the Urban Areas Security Initiative, the Port Security Grant Program, and the Transit Security Grant Program. These factors include overlap among grant recipients, goals, and geographic locations, combined with differing levels of information that FEMA had available regarding grant projects and recipients. We also reported that FEMA lacked a process to coordinate application reviews across the four grant programs. Overlap among grant recipients, goals, and geographic locations exists. The four grant programs we reviewed have similar goals and fund similar activities, such as equipment and training in overlapping jurisdictions, which increases the risk of unnecessary duplication among the programs. For instance, each state and eligible territory receives a legislatively mandated minimum amount of State Homeland Security Program funding to help ensure that geographic areas develop a basic level of preparedness, while the Urban Areas Security Initiative grants explicitly target urban areas most at risk of terrorist attack. However, many jurisdictions within designated Urban Areas Security Initiative regions also apply for and receive State Homeland Security Program funding. Similarly, port stakeholders in urban areas could receive funding for equipment such as patrol boats through both the Port Security Grant Program and the Urban Areas Security Initiative, and a transit agency could purchase surveillance equipment with Transit Security Grant Program or Urban Areas Security Initiative funding. While we understand that some overlap may be desirable to provide multiple sources of funding, a lack of visibility over grant award details across these programs increases the risk of unintended and unnecessary duplication. FEMA made award decisions for all four grant programs with differing levels of information. In February 2012, we reported that FEMA's ability to track which projects receive funding among the four grant programs varied because the project information FEMA had available to make award decisions—including grant funding amounts, grant recipients, and grant funding purposes—also varied by program due to differences in the grant programs' administrative processes.
For example, FEMA delegated some administrative duties to stakeholders for the State Homeland Security Program and the Urban Areas Security Initiative, thereby reducing its administrative burden. However, this delegation also contributed to FEMA having less visibility over some grant applications. FEMA recognized this trade-off: decreased visibility over grant funding in exchange for a reduced administrative burden. Differences in information requirements also affected the level of information that FEMA had available for making grant award decisions. For example, for the State Homeland Security Program and Urban Areas Security Initiative, states and eligible urban areas submit investment justifications for each program with up to 15 distinct investment descriptions that describe general proposals in wide-ranging areas such as "critical infrastructure protection." Each investment justification encompasses multiple specific projects for different jurisdictions or entities, but project-level information, such as a detailed listing of subrecipients or equipment costs, is not required by FEMA. In contrast, Port Security and Transit Security Grant Program applications require specific information on individual projects, such as detailed budget summaries. As a result, FEMA has a much clearer understanding of what is being requested and what is being funded by these programs. FEMA has studied the potential utilization of more specific project-level data for making grant award decisions, especially for the State Homeland Security Program and Urban Areas Security Initiative. However, while our analysis of selected grant projects determined that this additional information was sufficient for identifying potentially unnecessary duplication for nearly all of the projects we reviewed, the information did not always provide FEMA with sufficient detail to identify and prevent the risk of unnecessary duplication. While utilizing more specific project-level data would be a step in the right direction, at the time of our February 2012 report, FEMA had not determined the specifics of future data requirements. FEMA lacked a process to coordinate application reviews across the four grant programs. In February 2012, we reported that grant applications were reviewed separately by program and were not compared against one another to determine where possible unnecessary duplication may occur. Specifically, FEMA's Homeland Security Grant Program branch administered the Urban Areas Security Initiative and State Homeland Security Program, while the Transportation Infrastructure Security branch administered the Port Security Grant Program and Transit Security Grant Program. We and the DHS Inspector General concluded that coordinating the review of grant projects internally would give FEMA more complete information about applications across the four grant programs, which could help FEMA identify and mitigate the risk of unnecessary duplication across grant applications. In our February 2012 report, we noted that one of FEMA's section chiefs said that the primary reasons for the current lack of coordination across programs are the sheer volume of grant applications that need to be reviewed and FEMA's lack of resources to coordinate the grant review process. She added that FEMA reminds grantees not to duplicate grant projects; however, due to the volume and number of activities associated with grant application reviews, FEMA lacks the capabilities to cross-check for unnecessary duplication.
We recognize the challenges associated with reviewing a large volume of grant applications, but to help reduce the risk of funding duplicative projects, FEMA could benefit from exploring opportunities to enhance its coordination of project reviews while also taking into account the large volume of grant applications it must process. Thus, we recommended that FEMA take actions to identify and mitigate any unnecessary duplication in these programs, such as collecting more complete project information as well as exploring opportunities to enhance FEMA’s internal coordination and administration of the programs. In commenting on the report, DHS agreed and identified planned actions to improve visibility and coordination across programs and projects. We also suggested that Congress consider requiring DHS to report on the results of its efforts to identify and prevent duplication within and across the four grant programs, and consider these results when making future funding decisions for these programs. In the President’s Fiscal Year 2013 budget request to Congress, FEMA has proposed consolidating its various preparedness grant programs— with the exception of the Emergency Management Performance Grants and Assistance to Fire Fighters Grants—into a single, comprehensive preparedness grant program called the National Preparedness Grant Program (NPGP) in fiscal year 2013. FEMA also plans to enhance its preparedness grants management through a variety of proposed initiatives to implement the new consolidated program. According to FEMA, the new NPGP will require grantees to develop and sustain core capabilities outlined in the National Preparedness Goal rather than work to meet mandates within individual, and often disconnected, grant programs. NPGP is intended to focus on creating a robust national response capacity based on cross-jurisdictional and readily deployable state and local assets. According to FEMA’s policy announcement, consolidating the preparedness grant programs will support the recommendations of the Redundancy Elimination and Enhanced Performance for Preparedness Grants Act, and will streamline the grant application process. This will, in turn, enable grantees to focus on how federal funds can add value to their jurisdiction’s unique preparedness needs while contributing to national response capabilities. To further increase the efficiency of the new grant program, FEMA plans to issue multi-year guidelines, enabling the agency to focus its efforts on measuring progress towards building and sustaining national capabilities. The intent of this consolidation is to eliminate administration redundancies and ensure that all preparedness grants are contributing to the National Preparedness Goal. For fiscal year 2013, FEMA believes that the reorganization of preparedness grants will allow for a more targeted grants approach where states build upon the capabilities established with previous grant money and has requested $1.54 billion for the National Preparedness Grant Program. FEMA’s Fiscal Year 2013 Grants Drawdown Budget in Brief also proposes additional measures to enhance preparedness grant management efforts and expedite prior years’ grant expenditures. 
For example, to support reprioritization of unobligated prior year funds and focus on building core capabilities, FEMA plans to (1) allow grantees to apply prior years' grant balances toward more urgent priorities, promising an expedited project approval by FEMA's Grant Programs Directorate, and (2) expand allowable expenses under the Port Security Grant Program and Transit Security Grant Program, for example, by allowing maintenance and sustainment expenses for equipment, training, and critical resources that have previously been purchased with either federal grants or any other source of funding to support existing core capabilities tied to the five mission areas contained within the National Preparedness Goal. The changes FEMA has proposed for its fiscal year 2013 National Preparedness Grants program may create new management challenges. As noted by Chairman Bilirakis in last month's hearing held by the House Homeland Security Committee's Subcommittee on Emergency Preparedness, Response, and Communications, allocations under the new grant program would rely heavily on a state's Threat and Hazard Identification and Risk Assessment (THIRA). However, nearly a year after the THIRA concept was first introduced as part of the fiscal year 2011 grant guidance, grantees have yet to receive guidance on how to conduct the THIRA process. As we reported in February 2012, questions also remain as to how local stakeholders would be involved in the THIRA process at the state level. In March 2012, FEMA's GPD announced that FEMA had established a website to solicit input from stakeholders on how best to implement the new program. According to Chairman Bilirakis, it is essential that local law enforcement, first responders, and emergency managers who are first on the scene of a terrorist attack, natural disaster, or other emergency be involved in the THIRA process. They know the threats to their local areas and the capabilities needed to address them. Finally, according to FEMA's plans, the new National Preparedness Grant Program will require grantees to develop and sustain core capabilities; however, the framework for assessing capabilities and prioritizing national preparedness grant investments is still not complete. As we noted in our February 2012 report, FEMA's efforts to measure the collective effectiveness of its grant programs are recent and ongoing, and thus it is too soon to evaluate the extent to which these initiatives will provide FEMA with the information it needs to determine whether these grant programs are effectively improving the nation's security. DHS and FEMA have had difficulty in implementing longstanding plans to develop and implement a system for assessing national preparedness capabilities. For example, DHS first developed plans in 2004 to measure preparedness by assessing capabilities, but these efforts have been repeatedly delayed and are not yet complete. FEMA's proposed revisions to the new NPGP may help the agency overcome these continuing challenges to developing and implementing a national preparedness assessment. Since 2004, DHS and FEMA have initiated a variety of efforts to develop a system of measuring preparedness. From 2005 until September 2011, much of FEMA's effort focused on developing and operationalizing a list of target capabilities that would define desired capabilities and could be used in a tiered framework to measure their attainment.
In July 2005, we reported that DHS had established a draft Target Capabilities List that provides guidance on the specific capabilities and levels of capability at various levels of government that FEMA would expect federal, state, local, and tribal first responders to develop and maintain. DHS planned to organize classes of jurisdictions that share similar characteristics—such as total population, population density, and critical infrastructure—into tiers to account for reasonable differences in capability levels among groups of jurisdictions and to appropriately apportion responsibility for development and maintenance of capabilities among levels of government and across these jurisdictional tiers. According to DHS's Assessment and Reporting Implementation Plan, DHS intended to implement a capability assessment and reporting system based on target capabilities that would allow first responders to assess their preparedness by identifying gaps, excesses, or deficiencies in their existing capabilities or capabilities they will be expected to access through mutual aid. In addition, this information could be used to (1) measure the readiness of federal civil response assets, (2) measure the use of federal assistance at the state and local levels, and (3) assess how federal assistance programs are supporting national preparedness. DHS's efforts to implement these plans were interrupted by the 2005 hurricane season. In August 2005, Hurricane Katrina—the worst natural disaster in our nation's history—made final landfall in coastal Louisiana and Mississippi, and its destructive force extended to the western Alabama coast. Hurricane Katrina and the following Hurricanes Rita and Wilma—also among the most powerful hurricanes in the nation's history—graphically illustrated the limitations at that time of the nation's readiness and ability to respond effectively to a catastrophic disaster; that is, a disaster whose effects almost immediately overwhelm the response capabilities of affected state and local first responders and require outside action and support from the federal government and other entities. In June 2006, DHS concluded that target capabilities and associated performance measures should serve as the common reference system for preparedness planning. In September 2006, we reported that numerous reports and our work suggested that the substantial resources and capabilities marshaled by federal, state, and local governments and nongovernmental organizations were insufficient to meet the immediate challenges posed by the unprecedented degree of damage and the resulting number of hurricane victims caused by Hurricanes Katrina and Rita. We also reported that developing the capabilities needed for catastrophic disasters should be part of an overall national preparedness effort that is designed to integrate and define what needs to be done, where it needs to be done, how it should be done, how well it should be done, and based on what standards. FEMA's National Preparedness Directorate within its Protection and National Preparedness organization was established in April 2007 and is responsible for developing and implementing a system for measuring and assessing national preparedness capabilities. Figure 2 provides an illustration of how federal, state, and local resources provide capabilities for different levels of "incident effect" (i.e., the extent of damage caused by a natural or manmade disaster).
In October 2006, Congress passed the Post-Katrina Act that required FEMA, in developing guidelines to define target capabilities, to ensure that such guidelines are specific, flexible, and measurable. In addition, the Post-Katrina Act calls for FEMA to ensure that each component of the national preparedness system, which includes the target capabilities, is developed, revised, and updated with clear and quantifiable performance metrics, measures, and outcomes. We recommended in September 2006, among other things, that DHS apply an all-hazards, risk management approach in deciding whether and how to invest in specific capabilities for a catastrophic disaster. DHS concurred with this recommendation and FEMA said it planned to use the Target Capabilities List to assess capabilities to address all hazards. In September 2007, FEMA issued an updated version of the Target Capabilities List to provide a common perspective in conducting assessments that determine levels of readiness to perform critical tasks and identify and address any gaps or deficiencies. According to FEMA, policymakers need regular reports on the status of capabilities for which they have responsibility to help them make better resource and investment decisions and to establish priorities. In April 2009, we reported that establishing quantifiable metrics for target capabilities was a prerequisite to developing assessment data that can be compared across all levels of government. At the time of our review, FEMA was in the process of refining the target capabilities to make them more measurable and to provide state and local jurisdictions with additional guidance on the levels of capability they need. Specifically, FEMA planned to develop quantifiable metrics—or performance objectives—for each of the 37 target capabilities that are to outline specific capability targets that jurisdictions (such as cities) of varying size should strive to meet, recognizing that there is not a “one size fits all” approach to preparedness. In October 2009, in responding to congressional questions regarding FEMA’s plan and timeline for reviewing and revising the 37 target capabilities, FEMA officials said they planned to conduct extensive coordination through stakeholder workshops in all 10 FEMA regions and with all federal agencies with lead and supporting responsibility for emergency support-function activities associated with each of the 37 target capabilities. The workshops were intended to define the risk factors, critical target outcomes, and resource elements for each capability. The response stated that FEMA planned to create a Task Force comprised of federal, state, local, and tribal stakeholders to examine all aspects of preparedness grants, including benchmarking efforts such as the Target Capabilities List. FEMA officials have described their goals for updating the list to include establishing measurable target outcomes, providing an objective means to justify investments and priorities, and promoting mutual aid and resource sharing. In November 2009, FEMA issued a Target Capabilities List Implementation Guide that described the function of the list as a planning tool and not a set of standards or requirements. Finally, in 2011, FEMA announced that the Target Capabilities List would be replaced by a new set of national Core Capabilities. However, it is not clear how the new approach will help FEMA overcome ongoing challenges to assessing national preparedness capabilities discussed below. 
FEMA has not yet fully addressed ongoing challenges in developing and implementing a system for assessing national preparedness capabilities. For example, we reported in July 2005 that DHS had identified potential challenges in gathering the information needed to assess capabilities, including determining how to aggregate data from federal, state, local, and tribal governments and others and integrating self-assessment and external assessment approaches. In analyzing FEMA’s efforts to assess capabilities, we further reported in April 2009 that FEMA faced methodological challenges with regard to (1) differences in data available, (2) variations in reporting structures across states, and (3) variations in the level of detail within data sources requiring subjective interpretation. As noted above, FEMA was in the process of refining the target capabilities at the time of our review to make them more measurable and to provide state and local jurisdictions with additional guidance on the levels of capability they need. We recommended that FEMA enhance its project management plan to include milestone dates, among other things, a recommendation to which DHS concurred. In October 2010, we reported that FEMA had enhanced its project management plan by providing milestone dates and identifying key assessment points throughout the project to determine whether project changes are necessary. Nonetheless, DHS and FEMA have had difficulty overcoming the challenges we reported in July 2005 and April 2009 in establishing a system of metrics to assess national preparedness capabilities. As we reported in October 2010, FEMA officials said that, generally, evaluation efforts they used to collect data on national preparedness capabilities were useful for their respective purposes but that the data collected were limited by data reliability and measurement issues related to the lack of standardization in the collection of data. FEMA officials reported that one of its evaluation efforts, the State Preparedness Report, has enabled FEMA to gather data on the progress, capabilities, and accomplishments of the preparedness program of a state, the District of Columbia, or a territory. However, they also said that these reports included self-reported data that may be subject to interpretation by the reporting organizations in each state and not be readily comparable to other states’ data. The officials also stated that they have taken actions to address these limitations by, for example, creating a Web-based survey tool to provide a more standardized way of collecting state preparedness information that will help FEMA officials validate the information by comparing it across states. We reported in October 2010 that FEMA had an ongoing effort to develop measures for target capabilities that would serve as planning guidance, not requirements, to assist in state and local capability assessments. FEMA officials had not yet determined how they planned to revise the Target Capabilities List and said they were awaiting the completed revision of Homeland Security Presidential Directive 8, which was to address national preparedness. That directive, called Presidential Policy Directive 8 on National Preparedness (PPD-8), was issued on March 30, 2011. 
In March 2011, we reported that FEMA’s efforts to develop and implement a comprehensive, measurable, national preparedness assessment of capability and gaps were not yet complete and suggested that Congress consider limiting preparedness grant funding until FEMA completes a national preparedness assessment of capability gaps at each level based on tiered, capability-specific performance objectives to enable prioritization of grant funding. In April 2011, Congress passed the fiscal year 2011 appropriations act for DHS, which reduced funding for FEMA preparedness grants by $875 million from the amount requested in the President’s fiscal year 2011 budget. The consolidated appropriations act for fiscal year 2012 appropriated $1.7 billion for FEMA Preparedness grants, $1.28 billion less than requested. The House committee report accompanying the DHS appropriations bill for fiscal year 2012 stated that FEMA could not demonstrate how the use of the grants had enhanced disaster preparedness. According to FEMA’s testimony in a hearing on the President’s Fiscal Year 2013 budget request before the House Committee on Homeland Security’s Subcommittee on Emergency Preparedness, Response, and Communications, FEMA became the federal lead for the implementation of PPD-8 in 2011. The new presidential policy directive calls for the development of both a National Preparedness Goal and a National Preparedness System (both of which were required by the Post-Katrina Act in 2006). FEMA issued the National Preparedness Goal in September 2011, which establishes core capabilities for prevention, protection, response, recovery, and mitigation that are to serve as the basis for preparedness activities within FEMA, throughout the federal government, and at the state and local levels. These new core capabilities are the latest evolution of the Target Capabilities List. According to FEMA officials, they plan to continue to organize the implementation of the National Preparedness System and will be working with partners across the emergency management community to integrate activities into a comprehensive campaign to build and sustain preparedness. According to FEMA, many of the programs and processes that support the components of the National Preparedness System exist and are currently in use, while others will need to be updated or developed. For example, FEMA has not yet developed national preparedness capability requirements based on established metrics for the core capabilities to provide a framework for national preparedness assessments. As I testified last year, until such a framework is in place, FEMA will not have a basis to operationalize and implement its conceptual approach for assessing federal, state, and local preparedness capabilities against capability requirements to identify capability gaps for prioritizing investments in national preparedness. Chairman Bilirakis, Ranking Member Richardson, and Members of the Committee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. For further information about this statement, please contact William O. Jenkins Jr., Director, Homeland Security and Justice Issues, at (202) 512- 8777 or jenkinswo@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. 
In addition to the contact named above, the following individuals from GAO's Homeland Security and Justice Team also made major contributions to this testimony: Chris Keisling, Assistant Director; Allyson Goldstein, Dan Klabunde, Tracey King, and Lara Miklozek. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

From fiscal years 2002 through 2011, the federal government appropriated over $37 billion to the Department of Homeland Security's (DHS) preparedness grant programs to enhance the capabilities of state and local governments to prevent, protect against, respond to, and recover from terrorist attacks. DHS allocated $20.3 billion of this funding to grant recipients through four of the largest preparedness grant programs—the State Homeland Security Program, the Urban Areas Security Initiative, the Port Security Grant Program, and the Transit Security Grant Program. The Post-Katrina Emergency Management Reform Act of 2006 requires the Federal Emergency Management Agency (FEMA) to develop a national preparedness system and assess preparedness capabilities—capabilities needed to respond effectively to disasters. FEMA could then use such a system to help it prioritize grant funding. This testimony addresses the extent to which DHS and FEMA have made progress in managing preparedness grants and measuring preparedness by assessing capabilities and addressing related challenges. GAO's comments are based on products issued from April 2002 through February 2012 and selected updates conducted in March 2012. DHS and FEMA have taken actions with the goal of enhancing management of preparedness grants, but better project information and coordination could help FEMA identify and mitigate the risk of unnecessary duplication among grant applications. Specifically, DHS and FEMA have taken actions to streamline the application and award processes and have enhanced their use of risk management for allocating grants. For example, in November 2011, GAO reported that DHS modified its risk assessment model for the Port Security Grant Program by recognizing that different ports have different vulnerability levels. However, in February 2012, GAO reported that FEMA made award decisions for four of its grant programs—the State Homeland Security Grant Program, the Urban Areas Security Initiative, the Port Security Grant Program, and the Transit Security Grant Program—with differing levels of information, which contributed to the risk of funding unnecessarily duplicative projects. GAO also reported that FEMA did not have a process to coordinate application reviews across the four grant programs. Rather, grant applications were reviewed separately by program and were not compared with one another to determine where possible unnecessary duplication may occur.
Thus, GAO recommended that (1) FEMA collect project information with the level of detail needed to better position the agency to identify any potential unnecessary duplication within and across the four grant programs, weighing any additional costs of collecting these data, and (2) explore opportunities to enhance FEMA's internal coordination and administration of the programs to identify and mitigate the potential for any unnecessary duplication. DHS agreed and identified planned actions to improve visibility and coordination across programs and projects. FEMA has proposed consolidating the majority of its various preparedness grant programs into a single, comprehensive preparedness grant program called the National Preparedness Grant Program (NPGP) in fiscal year 2013; however, this may create new challenges. For example, allocations under the NPGP would rely heavily on a state's risk assessment, but grantees have not yet received guidance on how to conduct the risk assessment process. FEMA has established a website to solicit input from stakeholders on how best to implement the program. DHS and FEMA have had difficulty implementing longstanding plans and overcoming challenges in assessing capabilities, such as determining how to validate and aggregate data from federal, state, local, and tribal governments. For example, DHS first developed plans in 2004 to measure preparedness by assessing capabilities, but these efforts have been repeatedly delayed. In March 2011, GAO reported that FEMA's efforts to develop and implement a comprehensive, measurable, national preparedness assessment of capability and gaps were not yet complete and suggested that Congress consider limiting preparedness grant funding until FEMA completes a national preparedness assessment of capability gaps based on tiered, capability-specific performance objectives to enable prioritization of grant funding. In April 2011, Congress passed the fiscal year 2011 appropriations act for DHS that reduced funding for FEMA preparedness grants by $875 million from the amount requested in the President's fiscal year 2011 budget. For fiscal year 2012, Congress appropriated $1.28 billion less than requested in the President's budget. GAO has made recommendations to DHS and FEMA in prior reports to strengthen their management of preparedness grants and enhance their assessment of national preparedness capabilities. DHS and FEMA concurred and have actions underway to address them.
When first conceived, the LCS program represented an innovative approach for conducting naval operations, matched with a unique acquisition strategy that included two nontraditional shipbuilders and two different ships based on commercial designs—Lockheed Martin’s Freedom variant and Austal USA’s Independence variant, respectively. The Navy planned to experiment with these ships to determine its preferred design variant. However, in relatively short order, this experimentation strategy was abandoned in favor of a more traditional acquisition of over 50 ships. More recently, the Secretary of Defense has questioned the appropriate capability and quantity of the LCS. The purpose of the program has evolved from concept experimentation, to LCS, and more recently, to an LCS that will be upgraded to a frigate. The strategy for contracting and competing for ship construction has also changed. This evolution is captured in figure 1. While one could argue that a new concept should be expected to evolve over time, the LCS evolution has been complicated by the fact that major commitments have been made to build large numbers of ships before proving their capabilities. Whereas acquisition best practices embrace a “fly before you buy” approach, the Navy has subscribed to a buy before you fly approach for LCS. Consequently, the business imperatives of budgeting, contracting, and ship construction have outweighed the need to demonstrate knowledge, such as technology maturation, design, and testing, resulting in a program that has delivered 8 ships and has 14 more in some stage of the construction process (includes LCS 21, with a planned December 2016 construction start) despite an unclear understanding of the capability the ships will ultimately be able to provide and with notable performance issues discovered among the few ships that have already been delivered. The Navy’s vision for the LCS has evolved significantly over time, with questions remaining today about the program’s underlying business case. In its simplest form, a business case requires a balance between the concept selected to satisfy warfighter needs and the resources— technologies, design knowledge, funding, and time—needed to transform the concept into a product, in this case a ship. In a number of reports and assessments since 2005, we have raised concerns about the Navy’s business case for LCS, noting risks related to cost, schedule, and technical problems, as well as the overall capability of the ships. Business case aside, the LCS program deviated from initial expectations, while continuing to commit to ship and mission package purchases. The LCS acquisition was challenging from the outset. The Navy hoped to deliver large numbers of ships to the fleet quickly at a low cost. In an effort to achieve its goals, the Navy deviated from sound business practices by concurrently designing and constructing the two lead ship variants while still determining the ship’s requirements. The Navy believed it could manage this approach because it considered LCS to be an adaptation of existing commercial ship designs. However, transforming a commercial ship into a capable, survivable warship was an inherently complex undertaking. Elements of the business case further eroded— including initial cost and schedule expectations. Table 1 compares the Navy’s initial expectations of the LCS business case with the present version of the program. Our recent work has shown that the LCS business case continues to weaken. 
LCS ships under construction have exceeded contract cost targets, with the government responsible for paying for a portion of the cost growth. This growth has prompted the Navy to request $246 million in additional funding for fiscal years 2015-2017 largely to address cost overruns on 12 LCS seaframes. Similarly, deliveries of almost all LCS under contract (LCS 5-26) have been delayed by several months, and, in some cases, closer to a year or longer. Navy officials recently reported that, despite having had 5 years of LCS construction to help stabilize ship delivery expectations, the program would not deliver four LCS in fiscal year 2016 as planned. Whereas the program expected to deliver all 55 ships in the class by fiscal year 2018, today that expectation has been reduced to 17 ships. LCS mission packages, in particular, lag behind expectations. The Navy has fallen short of demonstrating that the LCS with its mission packages can meet the minimum level of capability defined at the beginning of the program. As figure 2 shows, 24 LCS seaframes will be delivered by the time all three mission packages achieve a minimum capability. Since 2007, delivery of the total initial mission package operational capability has been delayed by about 9 years (from 2011 to 2020) and the Navy has lowered the level of performance needed to achieve the initial capability for two packages—surface warfare and mine countermeasures. In addition to mission package failures, the Navy has not met several seaframe objectives, including speed and range. For example, Navy testers estimate that the range of one LCS variant is about half of the minimum level identified at the beginning of the program. As the Navy continues to concurrently deliver seaframes and develop mission packages, it has become clear that the seaframes and mission package technologies were not mature and remain largely unproven. In response, the Navy recently designated the first four LCS as test ships to support an aggressive testing schedule between fiscal years 2017 and 2022. Additional deficiencies discovered during these tests could further delay capability and require expensive changes to the seaframes and mission packages that have already been delivered. As the cost and schedule side of the business case for LCS has grown, performance and capabilities have declined. Changes in the LCS concept of operations are largely the consequence of less than expected lethality and survivability, which remain mostly unproven 7 years after delivery of the lead ships. LCS was designed with reduced requirements as compared to other surface combatants, and over time the Navy has lowered several survivability and lethality requirements further and removed some design features—making the ships less survivable in their expected threat environments and less lethal than initially planned. This has forced the Navy to redefine how it plans to operate the ships. Our previous work highlighted the changes in the LCS’s expected capability, as shown in table 2. Further capability changes may be necessary as the Navy continues to test the seaframes and mission packages, as well as gain greater operational experience. For example, the Navy has not yet demonstrated that LCS will achieve its survivability requirements and does not plan to complete survivability assessments until 2018—after more than 24 ships are either in the fleet or under construction. 
The Navy has identified unknowns related to the Independence variant's aluminum hull and conducted underwater explosion testing in 2016, but it has yet to compile and report the results. Both variants also sustained some damage in trials in rough sea conditions, but the Navy has not completed its analytical report of these events. Results from air defense and cybersecurity testing also indicate capability concerns. The Navy elected to pursue a frigate concept based on a minor modified LCS. The frigate, as planned, will provide multi-mission capability that is an improvement over LCS and offers modest improvements to some other capabilities, such as the air search radar. Still, many questions remain to be settled about the frigate's design, cost, schedule, and capabilities—all while the Navy continues to purchase additional LCS. Despite these uncertainties, the Navy's acquisition strategy effectively commits the department to buying all of the planned frigates—12 in total—before realistic cost, schedule, and technical parameters are established, because the Navy will ask Congress to authorize the contracting approach for the 12 frigates (what the Navy calls a block buy contract) in 2017. Further, the frigate will inherit many of the shortcomings or uncertainties of the LCS, and it does not address all the priorities that the Navy had identified for its future frigate. The costs for the frigate are still uncertain. Navy officials have stated that the frigate is expected to cost no more than 20 percent—approximately $100 million—more per ship than the average LCS seaframe. However, the Navy will not establish its cost estimate until May 2017—presumably after the Navy requests authorization from Congress in its fiscal year 2018 budget request for the block buy contracting approach for 12 frigates—raising the likelihood that the budget request will not reflect the most current costs for the program moving forward. In addition to the continued cost uncertainty, the schedule and approach for the frigate acquisition have undergone substantial changes in the last year, as shown in table 3. According to frigate program officials, under the current acquisition approach the Navy will award contracts in fiscal year 2017 to each of the current LCS contractors to construct one LCS, with a block buy option for 12 additional LCS—not frigates. Then, the Navy plans to obtain proposals from both LCS contractors in late 2017 that would upgrade the optioned ships from LCS to frigates using frigate-specific design changes and modifications. The Navy will evaluate the frigate upgrade packages and then exercise the option—now for frigates—on the contract that provides the best value based on tradeoffs between price and technical factors. This downselect will occur in summer 2018. Figure 3 illustrates how the Navy plans to modify the fiscal year 2017 LCS contract to convert the ships in the block buy options to frigates. The Navy's current plan, which moves the frigate award forward from fiscal year 2019 to fiscal year 2018, is an acceleration that continues a pattern of committing to buy ships in advance of adequate knowledge. Specifically, the Navy has planned for its downselect award of the frigate to occur before detail design of the ship begins. As we previously reported, awarding a contract before detail design is completed—though common in Navy ship acquisitions—has resulted in increased ship prices.
Further, in the absence of a year of frigate detail design, the Navy plans to rely on a contractor-driven design process that is less prescriptive. This approach is similar to that espoused by the original LCS program, whereby the shipyards were given performance specifications and requirements and selected the systems they determined were best suited to their designs and could be built in a producible manner. Program officials told us that this new approach should yield efficiencies; however, history from LCS raises concern that this approach for the frigate could similarly lead to ships having non-standard equipment, with less commonality with the other variant's design and with the rest of the Navy. As LCS costs grew and capabilities were diluted, the Secretary of Defense directed the Navy to explore alternatives to the LCS to address key deficiencies. In response, the Navy created the Small Surface Combatant Task Force and directed it to consider new and existing frigate design options, including different types of modified LCS designs. The task force concluded that the Navy's desired capability requirements could not be met without major modifications to an LCS design or the use of other, non-LCS designs. When presented with this conclusion, senior Navy leadership directed the task force to explore what capabilities might be more feasible on a minor modified LCS. This led the task force to develop options with diminished capabilities, such as reduced speed or range, resulting in some capabilities becoming equal to or below the expected capabilities of the current LCS. Ultimately, the department chose a frigate concept based on a minor modified LCS in lieu of more capable small surface combatant options because of LCS's relatively lower cost and the ability to field it more quickly, as well as the ability to upgrade remaining LCS. Table 4 presents an analysis from our past work, which found that the Navy's proposed frigate will offer some improvements over LCS. For example, the Navy plans to equip the frigates with the mission systems from both the surface and anti-submarine mission packages simultaneously instead of just one at a time like LCS. However, the Navy's planned frigate upgrades will not result in significant improvements in survivability areas related to vulnerability—the ability to withstand initial damage effects from threat weapons—or recoverability—the ability of the crew to take emergency action to contain and control damage. Further, the Navy sacrificed capabilities that were prioritized by fleet operators. For example, fleet operators consistently prioritized a range of 4,000 nautical miles, but the selected frigate concept is as much as 30 percent short of achieving such a range. The Director, Operational Test and Evaluation, has noted that the Navy's proposed frigate design is not substantially different from LCS and does not add much more redundancy, greater separation of critical equipment, or additional compartmentation, making the frigate likely to be less survivable than the Navy's previous frigate class. Further, the Navy plans to make some similar capability improvements to existing and future LCS, narrowing the difference between LCS and the frigate. We found that the proposed frigate does not add any new offensive anti-submarine or surface warfare capabilities that are not already part of one of the LCS mission packages, so while the frigate will be able to carry what equates to two mission packages at once, the capabilities in each mission area will be the same as LCS. 
While specific details are classified, there are only a few areas in which frigate warfighting capability differs from that of the LCS. Since it will be based on the LCS designs, the frigate will likely carry forward some of their limitations. For example, LCS was designed to carry a minimally sized crew of approximately 50. The Navy has found in various studies that the crew is undersized and has made some modest increases in crew size. A frigate design based on LCS may not be able to support a significant increase in crew size due to limited space for berthing and other facilities. Additionally, barring Navy-directed changes to key mechanical systems, the frigate will carry some of the more failure-prone LCS equipment, such as some propulsion equipment, and will likely carry some of the non-fleet-standard, LCS-unique equipment that has challenged the Navy's support and logistics chain. Remaining uncertainties with the surface and anti-submarine warfare mission packages, such as the need to demonstrate operational performance of the surface-to-surface missile and of the anti-submarine warfare package itself, also pose risk for the frigate. The Navy's plans for fiscal years 2017 and 2018 involve significant decisions for the LCS and the frigate programs, including potential future commitments of approximately $14 billion for seaframes and mission packages. First, the Navy plans to buy the last two LCS in fiscal year 2017, even though DOD and the Navy recognize that the LCS does not meet the Navy's needs. Second, the Navy is planning to seek congressional authorization for a block buy of all planned frigates and funding for the lead frigate as soon as next year—2017—despite significant unknowns about the cost, schedule, and capability of the vessel. The Navy's acquisition approach for the frigate raises concerns about overcommitting to the future acquisition of ships for which significant cost, schedule, and technical uncertainty remains. Similar to what we have previously advised about LCS block buy contracting, a frigate block buy approach could reduce funding flexibility. For example, the LCS contracts provide that a failure to fully fund the purchase of a ship in a given year would make the contract subject to renegotiation. Following this reasoning, such a failure to fund a ship in a given year could result in the government paying more for remaining ships under the contract, which provides a notable disincentive to take any action that might delay procurement, even when a program is underperforming. The Navy requested funding for two LCS in its fiscal year 2017 budget request. We previously suggested that Congress consider not funding any requested LCS in fiscal year 2017 because of unresolved concerns with the lethality and survivability of the LCS design, the Navy's ability to make needed improvements, and the lagging construction schedule of the shipyards. As figure 4 depicts, even if no ships were funded in fiscal year 2017, delays that have occurred for previously funded ships have resulted in a construction workload that extends into fiscal year 2020. In all, 8 ships have been delivered (LCS 1-8) and 14 are in various phases of construction (LCS 9-22), with 3 more (LCS 23, 24, and 26) set to begin construction later in fiscal year 2017. 
Although the Navy has argued that pausing LCS production would result in loss of production work and start-up delays to the frigate program, the schedule suggests that the shipyards in Marinette, Wisconsin, and Mobile, Alabama, will have sufficient workload remaining from prior LCS contract awards to offset the need to award additional LCS in fiscal year 2017. The Navy's concern also does not account for any other work that the shipyards may have from other Navy or commercial contracts, or for the possibility of continued delays in the delivery of LCS. On the heels of the decision on funding LCS in fiscal year 2017 will come the decision on whether to authorize the frigate contracting approach and fund the lead frigate. As I noted above, the current acquisition plans for the frigate have been accelerated during the past year. If these plans hold, Congress will be asked in a few months to consider authorizing a block buy of 12 frigates and funding the lead frigate when the fiscal year 2018 budget is proposed—before detail design has begun and before the scope and cost of the design changes needed to turn an LCS into a frigate are well understood. The frigate acquisition strategy also reflects a proclivity by the Navy to use contracting approaches such as block buys and multiyear procurement for acquisition programs, which may have the cumulative effect of insulating the programs against changes, such as changes in the quantities bought. If you or your staff have any questions about this statement, please contact Paul L. Francis at (202) 512-4841 or francisp@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Michele Mackin (Director), Diana Moldafsky (Assistant Director), Pete Anderson, Jacob Leon Beier, Laurier Fish, Kristine Hassinger, C. James Madar, Sean Merrill, LeAnna Parkey, and Robin Wilson.
Littoral Combat Ship: Need to Address Fundamental Weaknesses in LCS and Frigate Acquisition Strategies. GAO-16-356. Washington, D.C.: June 9, 2016.
Littoral Combat Ship: Knowledge of Survivability and Lethality Capabilities Needed Prior to Making Major Funding Decisions. GAO-16-201. Washington, D.C.: December 18, 2015.
Littoral Combat Ship: Additional Testing and Improved Weight Management Needed Prior to Further Investments. GAO-14-827. Washington, D.C.: September 25, 2014.
Littoral Combat Ship: Navy Complied with Regulations in Accepting Two Lead Ships, but Quality Problems Persisted after Delivery. GAO-14-749. Washington, D.C.: July 30, 2014.
Littoral Combat Ship: Deployment of USS Freedom Revealed Risks in Implementing Operational Concepts and Uncertain Costs. GAO-14-447. Washington, D.C.: July 8, 2014.
Navy Shipbuilding: Opportunities Exist to Improve Practices Affecting Quality. GAO-14-122. Washington, D.C.: November 19, 2013.
Navy Shipbuilding: Significant Investments in the Littoral Combat Ship Continue Amid Substantial Unknowns about Capabilities, Use, and Cost. GAO-13-738T. Washington, D.C.: July 25, 2013.
Navy Shipbuilding: Significant Investments in the Littoral Combat Ship Continue Amid Substantial Unknowns about Capabilities, Use, and Cost. GAO-13-530. Washington, D.C.: July 22, 2013.
Defense Acquisitions: Realizing Savings under Different Littoral Combat Ship Acquisition Strategies Depends on Successful Management of Risks. GAO-11-277T. Washington, D.C.: December 14, 2010. 
National Defense: Navy's Proposed Dual Award Acquisition Strategy for the Littoral Combat Ship Program. GAO-11-249R. Washington, D.C.: December 8, 2010.
Defense Acquisitions: Navy's Ability to Overcome Challenges Facing the Littoral Combat Ship Will Determine Eventual Capabilities. GAO-10-523. Washington, D.C.: August 31, 2010.
Littoral Combat Ship: Actions Needed to Improve Operating Cost Estimates and Mitigate Risks in Implementing New Concepts. GAO-10-257. Washington, D.C.: February 2, 2010.
Best Practices: High Levels of Knowledge at Key Points Differentiate Commercial Shipbuilding from Navy Shipbuilding. GAO-09-322. Washington, D.C.: May 13, 2009.
Defense Acquisitions: Overcoming Challenges Key to Capitalizing on Mine Countermeasures Capabilities. GAO-08-13. Washington, D.C.: October 12, 2007.
Defense Acquisitions: Plans Need to Allow Enough Time to Demonstrate Capability of First Littoral Combat Ships. GAO-05-255. Washington, D.C.: March 1, 2005.
This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Navy envisioned a revolutionary approach for the LCS program: dual ship designs with interchangeable mission packages intended to provide mission flexibility at a lower cost. This approach has fallen short, with significant cost increases and reduced expectations about mission flexibility and performance. The Navy has changed acquisition approaches several times. The latest change involves minor upgrades to an LCS design—referred to now as a frigate. Yet, questions persist about both the LCS and the frigate. GAO has reported on the acquisition struggles facing LCS and now the frigate, particularly in GAO-13-530 and GAO-16-356. This statement discusses: (1) the evolution of the LCS acquisition strategy and business case; (2) key risks in the Navy's plans for the frigate based on the LCS program; and (3) remaining oversight opportunities for the LCS and small surface combatant programs. This statement is largely based on GAO's prior reports and larger body of work on shipbuilding and acquisition best practices. It incorporates limited updated audit work where appropriate. The Navy's vision for the Littoral Combat Ship (LCS) program has evolved significantly over the last 15 years, reflecting degradations of the underlying business case. Initial plans to experiment with two different prototype ships adapted from commercial designs were abandoned early in favor of an acquisition approach that committed to numerous ships before proving their capabilities. Ships were not delivered quickly to the fleet at low cost. Rather, cost, schedule, and capability expectations degraded over time. In contrast, a sound business case would have balanced needed resources—time, money, and technical knowledge—to transform a concept into the desired product. Concerned about the LCS's survivability and lethality, in 2014 the Secretary of Defense directed the Navy to evaluate alternatives. After rejecting more capable ships based partly on cost, schedule, and industrial base considerations, the Navy chose the existing LCS designs with minor modifications and re-designated the ship as a frigate. 
Many of the LCS's capabilities have yet to be demonstrated, and the frigate's design, cost, and capabilities are not well defined. The Navy proposes to commit quickly to the frigate in what it calls a block buy of 12 ships. Congress has key decisions for fiscal years 2017 and 2018 that have significant funding and oversight implications. First, the Navy has already requested funding to buy two more baseline LCS ships in fiscal year 2017. Second, early next year, the Navy plans to request authorization for a block buy of all 12 frigates and funding in the fiscal year 2018 budget request for the lead frigate. Making these commitments now could make it more difficult to make decisions in the future to reduce or delay the program should that be warranted. A more basic oversight question today is whether a ship that costs twice as much yet delivers less capability than planned warrants an additional investment of nearly $14 billion. GAO has advised Congress to consider not funding the two LCS requested in 2017, given the ships' now obsolete design and existing construction backlogs. Authorizing the block buy strategy for the frigate appears premature. The decisions Congress makes could have implications for what aspiring programs view as acceptable strategies. GAO is not making any new recommendations in this statement but has made numerous recommendations to the Department of Defense (DOD) in the past on LCS and frigate acquisition, including strengthening the program's business case before proceeding with acquisition decisions. While DOD has, at times, agreed with GAO's recommendations, it has taken limited action to implement them. |
The military regime in Burma has routinely restricted freedom of speech, religion, and movement, and committed other serious human rights violations against the Burmese people. Prior to 1990, Burma was ruled by a military regime known as the State Law and Order Restoration Council. In May 1990, national parliamentary elections were held that resulted in an overwhelming victory for the National League for Democracy party, led by Aung San Suu Kyi. However, the State Law and Order Restoration Council failed to yield power and honor the results of the election and maintained its policies of autocratic rule and repression of democratic opposition. Since 1990, Burma has continued to suffer under this repressive rule; the military regime has since changed its name to the State Peace and Development Council. The military regime in Burma has detained or held Aung San Suu Kyi under house arrest for the majority of the past 20 years, continues to imprison political activists who favor democracy, and commits other serious human rights violations. In response to the behavior of the military regime in Burma, the United States has taken a number of actions aimed at pressuring the regime and promoting democratic reform in the country. Since 1990, the U.S. Mission in Burma has been headed by a chargé d'affaires. In 1997, the United States prohibited new investment in Burma and later imposed countermeasures on Burma due to its inadequate measures to eliminate money laundering. In 2003, Congress passed the Burmese Freedom and Democracy Act of 2003, which required the President to ban the importation of any Burmese product into the United States in order to strengthen Burma's democratic opposition and support the National League for Democracy as the legitimate representative of the Burmese people. However, despite the 2003 law, some Burmese gemstones were being cut, polished, and treated in third countries, such as Thailand, and exported as products of those third countries to the United States and other countries. To increase pressure on the military regime to end its human rights violations and restore democracy, Congress passed the JADE Act in 2008. Among its provisions are measures (1) banning current and former leaders of the Burmese regime, their immediate family members, and their supporters from traveling to the United States; (2) imposing targeted financial sanctions against those same persons; and (3) granting authority to the Secretary of the Treasury to impose additional banking sanctions against those persons. In addition, the JADE Act amends the 2003 law to prohibit the importation into the United States of jadeite and rubies mined or extracted from Burma and jewelry containing jadeite and rubies mined or extracted from Burma (fig. 1). It also calls on the Administration to pursue international actions to prevent the global trade in Burmese-origin gemstones. Specifically, the JADE Act calls on the Administration to develop an international arrangement—similar to the Kimberley Process Certification Scheme (KPCS) for conflict diamonds—to prevent global trade in Burmese-origin jadeite and rubies and jewelry containing Burmese-origin jadeite and rubies. In November 2002, diamond-producing and diamond-trading countries launched the KPCS, a voluntary global system to control the trade of rough diamonds and to assure consumers that the diamonds they purchase have not helped to finance violent conflicts. 
The United States and other KPCS participants are responsible for ensuring that the integrity of the certification scheme is upheld and that the Kimberley Process works toward preventing conflict diamonds from entering the legitimate trade of rough diamonds. In instances of noncompliance, KPCS can expel or suspend a participant. According to foreign jewelry industry representatives, jadeite generally is mined in the northern Kachin state of Burma (fig. 2), where rough jadeite rocks can range in size from that of a small egg up to a 1-ton boulder. According to agency officials and jewelry industry representatives, virtually all of the jadeite exported out of Burma is sold at government auctions in Rangoon. Jewelry industry representatives said that the vast majority of this jadeite is purchased by buyers from China (both the mainland and Hong Kong); a very small amount is bought by buyers from the rest of the world. According to agency officials, the flow of jadeite generally goes from Burma to South China for processing, cutting, carving, and manufacture into finished jadeite products that are consumed domestically in China. According to foreign jewelry industry representatives, the price of jadeite continues to rise because the consumer demand for articles made of jadeite in China continues to grow. According to foreign jewelry industry and government officials, the Chinese have a long history of buying and using jade for jewelry, ornaments, and decorations, including statues. Jadeite, a relatively rare and high-quality form of jade that is believed to come mostly from Burma, has positive cultural significance to the Chinese. Figure 3 shows examples of statues carved out of white jadeite. Foreign jewelry industry representatives said that a widely held belief among the Chinese is that jadeite, including wearing jewelry made from jadeite, such as the pieces shown in figure 4, brings good fortune and health. According to foreign jewelry industry representatives, there are millions of consumers of jadeite in China; Burmese jadeite products processed and finished in China are consumed directly by China’s internal market. According to estimates by these representatives, approximately 100,000 people carve and polish jadeite in China, and as many as 3 million people work in the jade industry overall. In contrast to jadeite, rough Burmese ruby stones are relatively small and thus easy to transport and smuggle. They are typically found through small-scale mining techniques, such as panning for stones in riverbeds, digging small pits or trenches, and locally quarrying ruby-bearing marble. The major areas for ruby mining are in Mogok and Mong Hsu in northeast Burma. According to U.S. agency officials, small-scale Burmese miners generally collect rubies and hope to avoid paying taxes to the Burmese government. They sell their stones to small and midlevel mom-and-pop operations that sell the stones at the Thailand-Burma border. Burmese migrants carry the stones over the border to sell to Thai traders. Although some of the highest-quality rough rubies might be sold at the government auctions, according to foreign government and industry officials, most rough Burmese rubies processed in Thailand have been smuggled over the border and probably result in little or no revenue for the regime in Burma. Figure 5 shows some Burmese-origin rubies that have been processed and are for sale in Thailand. The very nature of smuggling prevents the Burmese regime from extracting much revenue from these stones. U.S. 
agency officials said that the Burmese regime probably derives much more revenue from gemstones (such as jadeite) that are almost exclusively sold at government auctions in Rangoon. Jewelry industry representatives told us that most rough stones found in Burma are of uncertain value until they are heat treated and then cut and polished in Thailand. Once Burmese rubies in rough form enter Thailand, the rubies typically find their way to Chanthaburi or Bangkok for cutting, heat treatment, polishing, and setting in jewelry. Thai government officials claim that as rough Burmese rubies go through these processes, they undergo substantial transformation (fig. 6) and become products of Thailand. Thai officials claim that the value of a rough ruby is only 10 percent of the value of an average piece of finished Thai ruby jewelry exported to the United States. The rest is Thai value-added processing. Generally, a high-quality finished Burmese ruby is known for its special character, such as its translucent and brilliant color scheme, known as "pigeon-blood" red. According to agency officials, rubies of Burmese origin have historically commanded a price premium that is recognized in industry price guides. This premium has created an incentive for traders to try to pass high-quality non-Burmese stones as being of Burmese origin. Thai gemstone industry representatives said they are seeking to use rubies from non-Burmese sources such as Madagascar, Tanzania, and Kenya, but said that there are challenges to using stones from these sources. According to these representatives, Madagascar has some high-quality rubies, but requires that the value-added processes on these rubies be performed within its borders. Working with Burmese rubies is a competitive advantage for Thailand because it has easy access to the stones, the stones are generally cheaper because they are smuggled, and Thai traders have developed social and cultural networks with the Burmese ruby traders. According to Thai government and jewelry industry officials, the United States is one of Thailand's top five overall trading partners, and jewelry remains one of the top three Thai exports to the United States. The United States and Europe are Thailand's main export markets for finished ruby jewelry. Based on information from Thai jewelry industry representatives, in 2008 Thailand's jewelry exports to the United States were valued at $8 billion. Also in 2008, according to these representatives, Thailand exported approximately $12.6 million (429 million Thai baht) worth of rubies to the United States. Thai jewelry industry representatives stated that from October to December 2008, Thai jewelry exports to the United States, on average, declined by 30 percent. Moreover, they stated that roughly 1.2 million Thais worked in the jewelry industry in October 2008 and estimated that 100,000 to 120,000 jewelry industry jobs had been lost by March 2009. Thai jewelry industry representatives claim that these declines are due, in large part, to the import restrictions under the JADE Act. According to agency officials, the National Security Council convened an interagency working group and tasked the Department of Commerce (Commerce) with holding meetings with group members and U.S. jewelry industry and laboratory representatives to obtain information about the trade in jadeite and rubies. In response, Commerce arranged four meetings, one with U.S. agency gemstone experts in October 2008, and three with U.S. 
jewelry industry representatives and agency officials in October and November 2008. In addition to Commerce, agency officials from Treasury, DHS, the United States Geological Survey, the United States Trade Representative, State, and the International Trade Commission attended. According to agency officials, some industry representatives who attended these meetings have spoken broadly in favor of U.S. import restrictions on Burmese-origin rubies, jadeite, and related jewelry because they believe the restrictions will negatively impact the Burmese regime. Other representatives have expressed doubts about the government’s ability to impose an import restriction regime that will not compromise legitimate non-Burmese ruby imports, noting that the nature of the ruby trade is decentralized and informal, which complicates definitive certification of country-of-origin determination. Some representatives of the U.S. and foreign jewelry industries we interviewed also expressed concern about U.S. import restrictions under the JADE Act. They said that U.S. import restrictions have little impact on the military regime in Burma and negatively impact small-scale miners and traders in Burma and jewelry workers in Thailand. According to jewelry industry representatives, as a result of the act, some U.S. dealers have become reluctant to deal in rubies, jadeite, and related jewelry whether or not they were from Burma. Although the import restrictions in the JADE Act allow trade in non-Burmese-origin rubies, jadeite, and related jewelry, some industry representatives said that the restrictions imposed by the act have reduced trade in rubies overall, not just trade in Burmese rubies. For example, one gemological lab director noted that since the inception of the law, few dealers have submitted rubies for testing, suggesting that dealers are less inclined to trade in rubies overall. Some representatives of colored gemstone dealers expressed concern that CBP agents may not have the ability to differentiate between Burmese and non-Burmese rubies, jadeite, or related jewelry and this could lead to wrongful seizures. For example, a ruby dealer we met with said he wanted to purchase a 5-carat, reportedly non-Burmese, ruby during an overseas trip. The dealer paid a gemological laboratory to have the stone tested for a country-of-origin determination. However, he decided not to purchase the ruby because (1) as a result of testing, the stone was judged as originating from one of six possible countries, one of which was Burma, and (2) with no way to definitively prove the stone was not from Burma, the dealer was concerned CBP officials might mistakenly or arbitrarily seize the stone. The President issued Presidential Proclamation 8294 on September 26, 2008, implementing prohibitions and conditions in the JADE Act and authorizing U.S. agencies to take actions called for in the act. Specifically, the proclamation modified the Harmonized Tariff Schedule (HTS) of the United States to prohibit the importation of certain goods of Burma. In addition, according to agency officials, in October 2008, the HTS was amended to include new HTS subheadings that identify and can be used to track the import of non-Burmese-origin rubies, jadeite, and related jewelry. Acting to implement the JADE Act and Presidential Proclamation 8294, in January 2009, Treasury and CBP published an interim final rule detailing requirements and responsibilities for U.S. 
importers and foreign exporters on imports of non-Burmese-origin rubies, jadeite, and related jewelry into the United States. The interim final rule detailed conditions on the importation of rubies, jadeite, and related jewelry and offered guidance and details about importer and exporter responsibilities, including the exporter's written certification process, the importer certification scheme with its verifiable evidence standard, and importer record-keeping requirements. According to the amended regulations, if an importer brings rubies, jadeite, or related jewelry into the United States, using the non-Burmese-origin HTS codes serves as a certification by the importer that these goods were not mined or extracted from Burma. CBP officers do not verify the authenticity of the certification. Agency officials said that to determine that rubies, jadeite, and related jewelry are not from Burma, CBP officers rely on (1) the non-Burmese-origin HTS codes under which the items are shipped, and (2) statements on accompanying commercial invoices that the items did not originate in Burma. Agency officials stated that CBP officers will likely have difficulty authenticating country-of-origin statements on commercial invoices. U.S. jewelry industry officials also expressed concern over the reliability of statements on foreign invoices from exporters, because of the potential for fraud or abuse by exporters trying to access the U.S. market, and U.S. agency officials acknowledged this possibility. Further, agency officials said that there is little CBP officers can do to challenge such exporter statements unless there is obvious and clear conflicting evidence on the commercial invoice or in the shipment itself. Agency officials and U.S. jewelry representatives said that a CBP officer has no way to discern where a stone is from, or even whether a stone is authentic. CBP officials could have problems distinguishing one colored gemstone, such as garnet, from another, such as ruby. Jadeite jade and nephrite jade also share similar characteristics, including colors and texture. The interim final rule also mandates that at the time of importation, the importer must have in his possession written certification from the exporter certifying that the jadeite or rubies were not mined or extracted from Burma, with verifiable evidence that tracks the stones from the mine to exportation or the place of final finishing. The importer is required to maintain these records of certification for at least 5 years. However, U.S. jewelry industry representatives have expressed concern that the rule does not offer practical instruction on what constitutes verifiable evidence. Agency officials stated that there are no set standards to evaluate verifiable evidence provided by importers. Agency officials acknowledged that CBP officers currently rely on experience to determine what is valid, verifiable evidence. U.S. jewelry representatives said that without clear guidance as to what constitutes verifiable evidence, the importer would have to rely on whatever documentation the exporter can provide as evidence of the articles' origin. Though the interagency working group discussed the option of developing further guidance on what constitutes verifiable evidence, this has not yet occurred. As a result, there is no further guidance from U.S. agencies to importers as to what constitutes acceptable verifiable evidence to maintain records on articles of non-Burmese origin. 
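To make the record-keeping logic described above concrete, the sketch below shows the kind of completeness check an importer or reviewer could apply to a single entry: non-Burmese-origin HTS classification, a written exporter certification on file, and a country-of-origin statement on the commercial invoice. The field names, data structure, and logic are illustrative assumptions only; they do not represent any actual CBP or Treasury system, and they say nothing about what would qualify as verifiable evidence.

```python
# Hypothetical completeness check for an import entry under the interim final rule.
# Field names and structure are illustrative assumptions, not an actual CBP system.

RESTRICTED_HEADINGS = {"7103", "7113", "7116"}   # HTS headings covered by the prohibition

def review_entry(entry):
    """Return a list of record-keeping gaps for a single hypothetical import entry."""
    issues = []
    if entry["hts_heading"] in RESTRICTED_HEADINGS:
        if not entry.get("non_burmese_hts_subheading"):
            issues.append("not entered under a non-Burmese-origin HTS subheading")
        if not entry.get("exporter_certification_on_file"):
            issues.append("no written exporter certification on file")
        if not entry.get("invoice_origin_statement"):
            issues.append("commercial invoice lacks a country-of-origin statement")
    return issues

# Example: a hypothetical ruby jewelry entry missing the exporter certification.
entry = {
    "hts_heading": "7113",
    "non_burmese_hts_subheading": True,
    "exporter_certification_on_file": False,
    "invoice_origin_statement": "Articles not mined or extracted from Burma",
}
print(review_entry(entry))   # ['no written exporter certification on file']
```

A check of this kind can only confirm that the required documents are claimed to exist; as discussed above, it cannot verify the authenticity of an exporter's statement or the true origin of the stones themselves.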
Since the implementation of the interim final rule, DHS has not developed specific guidance to conduct postentry reviews of importers' records of verifiable evidence, nor has it conducted any such postentry reviews. As a result, U.S. agencies have no process for validating that Burmese-origin rubies, jadeite, or related jewelry have been effectively restricted from entering the United States. According to CBP officials, CBP is in the process of developing a postentry review operation to verify that the record-keeping requirements laid out in the interim final rule are being met. Agency officials acknowledge that it will be difficult to track a ruby's movement from the mine to the final marketplace to properly authenticate a gemstone's country of origin because of the decentralized, informal, and fractured nature of the ruby-trading business. CBP officials said that they plan to conduct postentry reviews of importer information on rubies, jadeite, and related jewelry entering the country in order to verify that a certification from the exporter is on file as required by the interim final rule. However, to date, CBP has not executed this plan. The JADE Act does not require, and the interim final rule does not provide, any mechanism to test gemstones for country-of-origin determination, and there are impediments to accurate and definitive determination of whether a ruby is of Burmese origin. Specialized gemological laboratories in the United States and abroad offer a variety of testing services for gemstones such as rubies. According to jewelry industry representatives, country-of-origin testing involves the use of various tests to identify distinguishing characteristics of a gemstone and compare those characteristics with those of reference stones from other sources around the world. In some cases, the tests may identify characteristics that are unique to a particular source and provide a high degree of certainty that a stone is from a certain location. In other cases, the tests may indicate that the gemstone could have originated from multiple sources around the world. In particular, rubies from some African countries often have characteristics similar to those of rubies from Burma, which reduces the certainty of the results and complicates the use of country-of-origin testing for the purposes of the act. As a result, making a definitive determination as to the country of origin of a particular gemstone or group of gemstones will be challenging for a U.S. importer or agencies attempting to enforce U.S. import restrictions. In addition, according to agency and jewelry industry officials, for relatively low-value jewelry that may include multiple, low-value rubies, the cost of testing the rubies could be significantly higher than the value of the jewelry. During our visit to the gemstone trading marketplace in Chanthaburi, Thailand, we saw fully finished and processed Madagascar-origin rubies being sold for 10 cents apiece. However, a gemological lab we visited in New York charged $50 per ruby to produce a country-of-origin determination. Industry representatives and agency officials confirmed that testing may be cost-prohibitive. In addition, industry representatives said that it can be extremely difficult to test large numbers of rubies or jadeite set in jewelry because some stones are too small to test or are obscured by their settings. 
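The cost-to-value mismatch described above can be quantified using the two figures cited in this section: a $50 laboratory fee for a single country-of-origin determination and finished rubies observed selling for about 10 cents apiece. The short calculation below is illustrative only; both figures are single observations from this review, not industry averages.

```python
# Illustrative cost-versus-value comparison using the example figures cited above.
# Both figures are single observations from this review, not industry averages.

test_cost_per_stone = 50.00   # New York lab fee for one country-of-origin determination
low_value_stone = 0.10        # finished rubies observed in Chanthaburi at ~10 cents apiece

ratio = test_cost_per_stone / low_value_stone
print(f"Testing one such stone would cost roughly {ratio:.0f} times its market value.")
# -> Testing one such stone would cost roughly 500 times its market value.
```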
DHS has tracked the imports of reportedly non-Burmese rubies, jadeite, and related jewelry into the United States since October 2008, but there are insufficient data to fully assess the impact of import restrictions under the JADE Act. Our analysis of the limited data from DHS on imports from October 2008 to May 2009 shows that rubies, jadeite, and related jewelry, reportedly of non-Burmese origin, continue to be imported into the United States. (See table 1.) According to DHS data, Thailand, China, Pakistan, and India reportedly exported more than $70 million worth of non-Burmese-origin rubies, jadeite, and related jewelry into the United States from October 2008 to May 2009. The data show that the United States imported more rubies, jadeite, and related jewelry from Thailand, in terms of value, than from any other country. There were also a number of reportedly non-Burmese-origin jadeite imports (excluding jewelry), worth approximately $2.6 million, although U.S. and foreign jewelry industry officials told us almost all jadeite is of Burmese origin. During this time period, no shipments were seized by CBP on the grounds that they were suspected to be of Burmese origin. However, even with the data generated from the recent tracking of reportedly non-Burmese-origin imports, it is still difficult to determine whether the import restrictions imposed under the JADE Act are effectively targeting Burmese-origin items. First, DHS did not track data on the import of non-Burmese-origin rubies, jadeite, and related jewelry prior to September 2008. As a result, it is not possible to compare the amount of rubies, jadeite, or related jewelry that entered the United States before the law was implemented with the amount entering afterward, in order to analyze the effect of the law. Second, using current HTS codes, CBP does not separately track imports of ruby jewelry and jadeite jewelry because both are currently categorized under the same HTS codes. Without this information, CBP cannot determine the amount of (1) rubies, plus ruby jewelry, and (2) jadeite, plus jadeite jewelry, imported over time and cannot analyze the effect of the law on U.S. importation of these distinct gemstone products. Third, less than 2 percent of shipments (41 out of more than 2,500) had quantities of rubies, jadeite, or related jewelry listed. Without accurate data on the quantity of products imported, and the corresponding per unit value of items in such shipments, CBP cannot assess the total quantity of reportedly non-Burmese-origin rubies, jadeite, and related jewelry imported into the United States. Without this information, CBP also cannot analyze patterns of trade, such as whether higher- or lower-value items are coming from particular countries or exporters. Such information could be useful in the event that CBP considers testing on some subset of the stones, such as those with a high per unit value. In addition, agencies lack accurate and reliable data on gemstone exports from Burma to other countries, which hinders efforts to analyze the impact U.S. import restrictions are having on Burmese production and supply of gemstones. According to agency officials, the Burmese regime's records on ruby and jadeite production and exports cannot be verified. Agency officials stated that data on Burmese ruby and jadeite production are limited to official Burmese government sources and may be unreliable. 
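To illustrate why the missing quantity data matter, the sketch below applies the kind of per-unit value analysis described above to a handful of hypothetical shipment records. The records, field names, and values are invented for illustration; only the underlying point, that shipments without reported quantities cannot support per-unit value or trade-pattern analysis, reflects the DHS data discussed above.

```python
# Hypothetical shipment records illustrating how missing quantity data block
# per-unit value analysis. Field names and values are invented for illustration.

shipments = [
    {"country": "Thailand", "declared_value": 120_000.0, "quantity": 400},
    {"country": "Thailand", "declared_value": 85_000.0, "quantity": None},   # quantity not reported
    {"country": "India", "declared_value": 9_500.0, "quantity": None},
    {"country": "China", "declared_value": 42_000.0, "quantity": 2_100},
]

usable = [s for s in shipments if s["quantity"]]
coverage = len(usable) / len(shipments)
print(f"Shipments with usable quantity data: {coverage:.0%}")

# Per-unit values can be computed only for the usable subset, so any analysis of
# trade patterns (by country, exporter, or value band) is limited to that subset.
for s in usable:
    per_unit = s["declared_value"] / s["quantity"]
    print(f"{s['country']}: roughly ${per_unit:,.2f} per article")
```

In the actual DHS data, only 41 of more than 2,500 shipments listed quantities, so an analysis of this kind would cover less than 2 percent of shipments.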
A leading economic scholar on Burma noted that Burma's official economic statistics are incomplete, internally contradictory, and frequently subject to dramatic revision. China's support and cooperation are critical if the United States government hopes to prohibit international trade in Burmese jadeite and jadeite jewelry. However, according to U.S. agency officials and foreign jewelry industry representatives, it is nearly impossible to secure Chinese cooperation to stop the trade in Burmese jadeite because of the strong demand for jadeite in China and because finishing and processing it is a source of employment. Because almost all Burmese jadeite is imported by China, which has both strong internal demand for it and economic interests in processing it, as discussed earlier, the United States has very little leverage in attempting to gain China's active support and cooperation to stem the international trade in Burmese jadeite. In addition, agency officials said China would not support implementing a ban on Burmese jadeite because it fundamentally opposes the concept of sanctions against Burma as a policy tool. Thailand's support and cooperation are also critical if the United States hopes to effectively prohibit international trade in Burmese rubies and ruby jewelry. However, it is highly unlikely that Thailand will support measures under the JADE Act given the act's impact on Thailand's economy. In addition, U.S. agency officials, foreign government officials, and Thai jewelry industry representatives told us that import controls under the JADE Act are hurting small-scale miners and traders in Burma and dealers and laborers in Thailand, not the military regime in Burma. Thai government officials also said they are concerned that European governments could follow the United States and adopt JADE Act-style restrictions on Burmese-origin rubies and ruby jewelry. According to U.S. agency officials, the fact that Thai government officials are willing to publicly voice their concerns about the import restrictions in the JADE Act can be taken as evidence that the act is having a negative impact on the Thai jewelry industry. Agency officials said, however, that it is difficult to determine whether this impact is caused by the global economic downturn, the JADE Act, or a combination of the two. U.S. agency officials stated that Thailand already felt challenged by the United States on other trade issues, such as intellectual property rights and accusations of questionable labor practices regarding shrimp production. The JADE Act required the President to transmit a report to Congress describing actions the United States has taken during the first 60 days after the enactment of the act to seek (1) the issuance of a draft WTO waiver, (2) the adoption of a UN resolution, and (3) the negotiation of an international arrangement—similar to the Kimberley Process Certification Scheme for conflict diamonds. Presidential Proclamation 8294 authorized the Secretary of State, in consultation with the United States Trade Representative, to transmit the report to Congress. According to State, this report was sent to Congress on February 23, 2009. The report states that during the 60-day period following enactment of the act, the Administration was engaged in developing regulations to implement the act's provisions. 
However, the report submitted is just a page and a half long and only provides basic statements on actions taken, such as a Presidential Proclamation being issued and the HTS being amended. According to the report, State informed two countries engaged in processing and trading rubies and jadeite of the new import restrictions under the JADE Act. The report also states that State engaged European Union counterparts to discuss the act and how best to harmonize respective sanctions against Burma. Beyond these basic statements covering actions taken during the first 60 days, the report has little information on the overall progress agencies have made or the challenges they face in gaining international support. According to the President's Proclamation, USTR is responsible for taking all appropriate actions to seek the issuance of a draft waiver decision by the WTO Council for Trade in Goods waiving applicable WTO obligations with respect to the import restrictions of the JADE Act. However, USTR officials said that they have taken no formal steps to initiate a waiver request at the WTO. The Administration has not indicated that it considers U.S. import restrictions under the JADE Act to be inconsistent with U.S. WTO obligations. According to USTR officials, a WTO member requests a waiver when it implements a measure that it acknowledges is inconsistent with its WTO obligations but believes that other WTO members would not be opposed to the measure's continued application. Thus, requesting a waiver for the JADE Act would represent an acknowledgment that the import restrictions on Burmese articles are inconsistent with U.S. WTO obligations. According to agency officials, if the Administration decides that a WTO waiver is necessary, USTR would submit a waiver request to the WTO Council for Trade in Goods, which would then consider the request. Agency officials also said that if the council approves the waiver request, it would submit a report to the WTO General Council, which would then formally endorse the Council for Trade in Goods' report, and the waiver would be approved. The Council for Trade in Goods and the General Council would make these decisions based on a consensus of the entire WTO membership. An approved WTO waiver would allow the United States to apply import restrictions on Burmese jadeite, rubies, and related jewelry entering the United States without running afoul of its WTO obligations. If consensus is not reached in the Council for Trade in Goods, the waiver request does not move forward, and the continued use of the measure by the United States or another country may be challenged by a WTO member under the WTO dispute settlement mechanism. Because of the WTO's consensus decision-making process, any WTO member—including Thailand, Burma, or any country that feels directly or indirectly negatively affected by the law—could effectively veto a WTO waiver request by the United States. Agency officials said WTO members rarely initiate a waiver request unless they believe that a consensus exists for its approval. In addition, agency officials said the process of seeking and obtaining a WTO waiver may take a long time because approval is predicated on a consensus decision. USTR officials noted that past WTO waiver requests submitted by the United States were held up for a number of years; even very small countries have held up waiver requests. 
State has not introduced a UN resolution, noting that a number of countries at the UN are generally opposed to resolutions against Burma. Officials said that the Administration is currently undertaking a review of the U.S. government's Burma policy that, when completed, would enable it to determine how best to integrate the objectives of the JADE Act into its overall diplomatic strategy. State has also taken some basic steps to gather information about ruby markets in certain countries. For example, according to agency officials, as the JADE Act was being developed, the embassy in Bangkok produced several reporting cables on the ruby industry in Thailand. Agency officials said these cables provided background information on topics such as where the rubies were from, the process of finishing rubies, the sales of rubies and ruby jewelry to the United States and Europe, and the extent of Burmese ruby smuggling into Thailand. In December 2008, State headquarters sent a cable to posts in response to the JADE Act. According to State officials, State headquarters sought information on the import and processing of Burmese rubies in countries thought to be involved in the ruby trade, including Thailand and China. In response, State staff at overseas posts met with jewelry industry representatives or government officials in their respective countries to discuss the ruby trade. However, State headquarters did not ask posts about the importation or processing of Burmese jadeite because, according to agency officials, the United States is not an export market for jadeite. In addition, the cable did not explicitly ask staff to conduct diplomatic outreach to secure their host governments' support for the JADE Act. According to agency officials, the cable was intended to solicit information on the trade in rubies rather than to seek to secure host government support for the JADE Act. Officials at some posts we visited reported that, on their own initiative, they conducted outreach, particularly with those governments that wished to discuss their positions on U.S. import restrictions under the JADE Act with State officials. According to State officials, a number of countries at the UN are generally opposed to resolutions targeted at Burma. State has approached a couple of other countries about the feasibility of a UN resolution, but has not yet introduced one. Agency officials said a proposal calling for the creation of a workable certification scheme for Burmese rubies and jadeite will likely be challenged by some countries at the UN on the basis of its technical feasibility. According to these officials, even with a reliable technical process to establish which rubies, jadeite, and related jewelry are of Burmese origin, it is doubtful such a resolution would pass. State officials said they have had working-level discussions with like-minded countries about the possibility of negotiating a Kimberley-like framework to prevent the global trade in Burmese-origin rubies, jadeite, and related jewelry. State officials said they would, to the extent possible, continue discussions with like-minded states to develop a framework that could win sufficient international support to trace and document the origin of these products. However, to date, there have been no international meetings convened among relevant countries, private industry, and nongovernmental organizations to negotiate a Kimberley-like framework, as occurred for conflict diamonds. 
Agency officials said there are serious impediments to achieving this objective, such as lack of international support and the inherent difficulty in identifying the country of origin for rubies and ruby jewelry, as discussed earlier. In addition, agency officials said there are key differences between the global diamond and Burmese ruby industries that could complicate establishing a Kimberley-like framework. According to agency officials, the Burmese ruby industry mostly consists of decentralized small-scale operations, is subject to significant smuggling, lacks documentation, and is often carried out through cash exchanges. In contrast, while some diamonds are extracted using small-scale mining techniques, the deep diamond mining industry tends to be more centralized, with large, highly capitalized resource extractors like DeBeers that can more effectively control and track mining through all stages of production. Further, according to agency officials, low-quality rubies are much more common than low-quality diamonds. There are very large quantities of small, low-value rubies; much smaller quantities of large, high-value rubies; and a tremendous variety of rubies in between. For example, based on our analysis of DHS’s data on ruby shipments (that had a declared quantity) coming into the United States between October 2008 and May 2009, the average value of a ruby article was less than 1 cent. Moreover, according to these same data, a single shipment of rough rubies arrived in Charleston, South Carolina, in October 2008 that included 89 million rubies worth a total of only $35,600 (on average, each ruby was worth less than 1/10 cent). Rubies and jadeite can be used in products unimaginable in the diamond context; while conducting our audit in Rangoon, we saw numerous examples of “paintings” composed entirely of various colors of ruby, jadeite, and other colored gemstones glued to canvases (fig. 7). Such products are composed of literally thousands or tens of thousands of stones; each stone could have come from one of any number of sources. In addition, according to agency officials, the effort to restrict the trade of Burmese-origin rubies and jadeite has, thus far, been a unilateral effort by the U.S. government led by Congress. In contrast, the effort to develop the Kimberley Process for conflict diamonds was a multilateral effort. For conflict diamonds, the diamond-producing countries and the international diamond industry supported efforts to restrict trade in conflict diamonds. Because of this broad international support, other countries, the international diamond industry, and nongovernmental organizations have worked to establish and maintain the Kimberley Process and its mechanisms to restrict the flow of conflict diamonds. One of the purposes of the JADE Act is to promote a coordinated international effort to restore civilian democratic rule to Burma, and recent events further demonstrate the human rights violations of the regime. U.S. measures to exert pressure on the regime include two types of trade measures: one designed to restrict U.S. imports of jadeite and ruby originating from Burma and the other to utilize mechanisms such as the UN to restrict worldwide trade in Burmese jadeite and rubies. However, as the evidence in this report indicates, U.S. agencies have not shown that they are effectively targeting imports of Burmese-origin rubies, jadeite, and related jewelry while allowing imports of non-Burmese-origin goods. 
While agencies have published an interim final rule, they have not developed specific audit guidance or initiated any postentry reviews of importers' records. In addition, there is little guidance to importers on what constitutes verifiable evidence. Additional steps to implement this rule, along with further improvements in the data collected on imported rubies and jadeite, could contribute to an understanding of whether import restrictions are effectively targeting Burmese-origin goods. With regard to the goal of restricting worldwide trade, the Department of State submitted a required 60-day report to Congress, but the report had little information on progress or the challenges involved in gaining international support. Since that report, USTR has not requested a WTO waiver and State has made no discernible progress in introducing a UN resolution or negotiating a Kimberley-like process. Although agencies cite a number of factors that could impede progress, the current status has not been communicated to Congress. In order for Congress to provide oversight and assist in the design of an effective set of policies to exert pressure on the regime in Burma, it needs accurate and complete information about the effectiveness of existing policies and the challenges in implementing those policies. Our review demonstrates that this information has not been provided. In order to effectively implement the sections of the JADE Act prohibiting the importation of Burmese-origin rubies, jadeite, and related jewelry while allowing imports of non-Burmese-origin goods, we recommend that DHS, in consultation with relevant agencies, develop and implement guidance to conduct postentry reviews of importers' records and provide improved guidance to importers on the standards of verifiable evidence needed to certify that articles are of non-Burmese origin. To enhance the effectiveness of U.S. policy against the military regime in Burma, we recommend that State, in consultation with DHS and Treasury, analyze the efficacy, challenges, and difficulties faced in implementing measures to restrict trade in Burmese-origin rubies, jadeite, and related jewelry in the context of the broader U.S. sanctions provisions in the JADE Act, and report to Congress how these measures will contribute to its efforts to influence the military regime in Burma. We provided a draft of this report to USTR and the Secretaries of State, Homeland Security, the Treasury, and Commerce for their review and comment. We received written comments from DHS, Commerce, and State that are reprinted in appendixes V, VI, and VII; we also received technical comments from USTR, DHS, and the Treasury, which we incorporated as appropriate. DHS concurred with our first recommendation and State concurred with our second recommendation. In its comments, DHS noted that its work to implement the recommendation will rely on consultation with other relevant agencies of the U.S. government. DHS also stated that it believes a U.S. governmentwide effort to establish international agreement and standards to restrict trade in Burmese rubies, jadeite, and related jewelry will offer a more comprehensive and realistic solution to achieving the goals of the JADE Act, beyond DHS's enforcement of U.S. import restrictions. 
We acknowledge that the Kimberley Process for conflict diamonds could offer a model for establishing such an international agreement but, as we have noted in this report, there are serious challenges that could make it difficult to establish such a process to prevent international trade in Burmese rubies, jadeite, and related jewelry. In its comments, Commerce expressed concern about language in our report characterizing its involvement in the interagency working group to develop further guidance on what constitutes verifiable evidence. In response, we modified this sentence in our report to more accurately portray the entities involved. State concurred with our second recommendation, to analyze the efficacy, challenges, and difficulties in implementing measures under the JADE Act. State noted that it would include the findings from this analysis in its Semi-Annual Report on Conditions in Burma, due to Congress on November 15, 2009. State also noted it would include our second recommendation in the Administration's overall policy review of Burma. We are sending copies of this report to interested congressional committees. In addition, this report will be available on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4347 or yagerl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VIII. To examine U.S. agencies' efforts in response to the Tom Lantos Block Burmese JADE (Junta's Anti-Democratic Efforts) Act of 2008 (JADE Act) (Pub. L. No. 110-286), we assessed (1) the key characteristics of the trade of Burmese-origin jadeite and Burmese-origin rubies; (2) the progress U.S. agencies have made to restrict imports of Burmese-origin jadeite, rubies, and related jewelry into the U.S. market; and (3) the progress U.S. agencies have made in pursuing international actions, including (a) seeking a World Trade Organization (WTO) waiver, (b) securing a United Nations (UN) resolution, and (c) working to negotiate an international arrangement—similar to the Kimberley Process—to prevent global trade in Burmese-origin jadeite, rubies, and related jewelry. To address these objectives, we reviewed and analyzed documents, memos, reports, guidance, papers, and cables from the Department of State in Washington, D.C.; embassies in Rangoon and Bangkok; consulates in Chiang Mai and Hong Kong; the United States Geological Survey (USGS); the Office of the United States Trade Representative; the Departments of Homeland Security, the Treasury, and Commerce; U.S. and foreign jewelry industries; nongovernmental organizations; and academics. We reviewed the JADE Act; Presidential Proclamation 8294—To Implement Amendments to the Burmese Freedom and Democracy Act of 2003; U.S. Customs and Border Protection (CBP) guidance, including 19 Code of Federal Regulations (CFR) Parts 12 and 163—the interim final rule; other documents, cables, reports, and memos from relevant U.S. agencies; and documents from gemological laboratories, such as country-of-origin reports. In addition, we reviewed and analyzed Department of Homeland Security/U.S. Customs and Border Protection data on the source and value of reportedly non-Burmese-origin rubies, jadeite, and related jewelry entering the United States from October 2008 to May 2009.
During the course of our review, we interviewed officials from the Department of State in Washington, D.C.; embassies in Rangoon and Bangkok; consulates in Chiang Mai and Hong Kong; USGS; the Office of the United States Trade Representative; and the Departments of Homeland Security, the Treasury, and Commerce, as well as foreign government officials in Thailand and Burma to gather information on the Burmese jadeite and ruby trades and the impact of the JADE Act on these trades. To understand the perspectives of ruby and jadeite industry traders, dealers, and association members, we interviewed U.S. jewelry industry officials in New York City and foreign jewelry industry officials in Hong Kong, Thailand, and Burma. In addition, to assess the validity of and collect information on gemstone testing processes, we interviewed gemologists and gemstone testing experts at three major U.S. laboratories in New York City, and interviewed foreign gemstone testing experts at laboratories in Bangkok. To collect detailed qualitative and contextual information about Burma, including economic, political, social, and geopolitical variables, we interviewed academic scholars from Harvard University, Georgetown University, Macquarie University (Sydney, Australia), and the Brookings Institution. We selected officials with a wide range of views and experiences on the subject, including scholars with a range of expertise on Burma’s economy and political situation. The scope of our review was set by the JADE Act. The act has several provisions, one part of which amends the Burmese Freedom and Democracy Act of 2003 to prohibit the import of Burmese-origin jadeite and rubies and jewelry containing Burmese-origin jadeite and rubies into the United States and calls on the Administration to pursue certain international actions to prevent the global trade in Burmese gemstones. The JADE Act also requires GAO to submit a report to Congress assessing the effectiveness of the implementation of this section of the act, including any recommendations for improving its administration. We conducted this performance audit from December 2008 to September 2009, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. (a) Pursuant to section 3A of the Burmese Freedom and Democracy Act of 2003 (Public Law 108-61; 50 U.S.C. 1701 note), as amended by the Tom Lantos Block Burmese JADE Act of 2008 (Public Law 110-286), for purposes of goods provided for in headings 7103, 7113, and 7116, except as set forth in subdivisions (c) and (d) of this note, the importation of any of the following goods shall be prohibited: i) jadeite mined or extracted from Burma and classifiable in heading 7103 of the tariff schedule, ii) rubies mined in or extracted from Burma and classifiable in heading 7103 of the tariff schedule, iii) articles of jewelry containing jadeite described in subdivision (a)(i) of this note, whether classifiable in heading 7113 or 7116 of the tariff schedule; and iv) articles of jewelry containing rubies described in subdivision (a)(ii) of this note, whether classifiable in heading 7113 or 7116 of the tariff schedule. 
With respect to goods entered or withdrawn from warehouse for consumption, on or after September 27, 2008, should an importer choose to enter any good under heading 7103, 7113 or 7116, the presentation of such entry shall be deemed to be a certification by the importer that any jadeite or rubies contained in such good were not mined in or extracted from Burma. (b) Notwithstanding the deemed certification under subdivision (a) above, the importation of the following goods- i) jadeite mined in or extracted from a country other than Burma and classifiable in heading 7103 of the tariff schedule, ii) rubies mined in or extracted from a country other than Burma and classifiable in heading 7103 of the tariff schedule, iii) articles of jewelry containing jadeite described in subdivision (b)(i) or rubies described in subdivision (b)(ii) of this note, whether classifiable in heading 7113 or 7116 of the tariff schedule, is not permitted unless such goods comply with the terms of any regulations issued by the Secretary of the Treasury to implement section 3A(c)(1) of the Burmese Freedom and Democracy Act of 2003, as amended, or are covered by any waiver or certification scheme that may be established pursuant to the provisions of sections 3(b) and 3A of the act, as amended. (c) The provisions of this note shall not apply to Burmese covered articles and non-Burmese covered articles that were previously exported from the United States, including those that accompanied an individual outside the United States for personal use, if they are reimported into the United States by the same person, without having been advanced in value or improved in condition by any process or other means while outside the United States. (d) The certification established under subdivision (a) of this note shall not apply to the importation of non-Burmese covered articles by or on behalf of an individual for personal use and accompanying an individual upon entry into the United States, with a proper claim under subheading 9804.00.20, 9804.00.45 or other appropriate provision of chapter 98 of the tariff schedule. The following table shows Department of Homeland Security (DHS) data on exporter-reported imports of non-Burmese rubies, jadeite, and related jewelry into the United States from October 2008 to May 2009 (ranked according to U.S. dollar value). Appendix IV: United States Geological Survey Estimates of Ruby Production 2000-2005 (by Weight Measured in Kilograms) In addition to the individual named above, Godwin Agbara, Assistant Director; Ian Ferguson; Rajneesh Verma; Mary Moutsos; Etana Finkler; and Karen Deans made key contributions to this report.

Congress passed the Tom Lantos Block Burmese JADE Act in 2008 prohibiting the import of Burmese-origin jadeite, rubies, and related jewelry and calling for certain international actions. The act also requires GAO to assess the effectiveness of the implementation of this section of the act. This report assesses (1) key characteristics of the trade of Burmese-origin jadeite and rubies; (2) progress agencies have made to restrict imports of Burmese-origin jadeite, rubies, and related jewelry; and (3) the progress agencies have made in pursuing international actions. GAO reviewed and analyzed policy guidance, reports, and trade data and interviewed officials from the Departments of State (State) and Homeland Security (DHS) and other U.S. agencies, as well as U.S. and foreign jewelry industry representatives and foreign government officials.
The Burmese jadeite and ruby trades are very different from one another and significantly involve China and Thailand. Burmese-origin jadeite is primarily purchased, processed, and consumed by China. Burmese-origin rubies are reportedly largely smuggled into Thailand, yielding little revenue to the Burmese regime, and are significantly processed there. U.S. agencies have taken some steps but have not shown that they are effectively restricting imports of Burmese-origin rubies, jadeite, and related jewelry while allowing imports of non-Burmese-origin goods. Some U.S. jewelry representatives said import restrictions constrain legitimate ruby imports. Agencies published an interim final rule, but DHS has not developed specific audit guidance or conducted any postentry reviews of importers' records. In addition, there is little guidance to importers on what constitutes verifiable evidence of non-Burmese origin. Although agencies have begun to collect data on ruby and jadeite imports, further efforts could contribute to an understanding of whether restrictions are effectively targeting Burmese-origin imports. Agencies sent a required 60-day report to Congress, but it had little information on progress and challenges related to gaining international support to prevent trade in Burmese-origin rubies, jadeite, and related jewelry. Agencies have made no discernible progress in gaining such international support. Strong international support and the cooperation of China and Thailand are important to restricting trade in these items, but such cooperation is highly unlikely. The Office of the United States Trade Representative has not requested a World Trade Organization waiver and State has not introduced a United Nations resolution, noting a number of countries would likely oppose a resolution. Finally, there have been no international meetings to negotiate a global arrangement restricting trade in Burmese rubies and jadeite similar to the Kimberley Process for restricting trade in conflict diamonds. Agency officials cited serious impediments to establishing such a framework.
The vast majority of Recovery Act funding for transportation programs goes to the Federal Highway Administration (FHWA), the Federal Railroad Administration, and the Federal Transit Administration for the construction, rehabilitation, or repair of highway, road, bridge, transit, and rail projects. The remaining funds are allocated among other DOT administrations. Over half of these funds are for highway infrastructure investments. (See table 1.) Of the $27.5 billion provided for highway and related infrastructure investments, $26.7 billion is provided to the states for restoration, repair, construction, and other activities allowed under FHWA's Surface Transportation Program, which apportions money to states for construction and preventive maintenance of eligible highways, and for other eligible surface transportation projects. The Act requires that 30 percent of these funds be suballocated to metropolitan and other areas. The Recovery Act generally requires that funds be invested in projects that can be started and completed expeditiously and identifies several specific deadlines for investing funds provided through several transportation programs. For example, 50 percent of state-administered Federal-aid Highway formula funds (excluding suballocated funds) must be obligated within 120 days of apportionment (apportioned on March 2) and all must be obligated within 1 year of apportionment. Although highway funds are being apportioned to states and localities through existing mechanisms, Recovery Act funding for highway infrastructure investment differs from the usual practice in the Federal-aid Highway Program in a few important ways. Most significantly, for projects funded under the Recovery Act, the federal share is up to 100 percent, while the federal share under the Federal-aid Highway Program is usually 80 percent. Priority is also to be given to projects that are projected to be completed within 3 years and are within economically distressed areas. Furthermore, the governor must certify that the state will maintain its current level of transportation spending with regard to state funding (called maintenance of effort), and the governor or other appropriate chief executive must certify that the state or local government to which funds have been made available has completed all necessary legal reviews and determined that the projects are an appropriate use of taxpayer funds. Any amount of the funding that was apportioned on March 2 and is not obligated within deadlines established by the Act (excluding suballocated funds) will be withdrawn by DOT and redistributed to other states that have obligated their funds in a timely manner. Both the President and Congress have emphasized the need for accountability, efficiency, and transparency in the allocation and expenditure of Recovery Act funds. Accordingly, the Office of Management and Budget (OMB) has called on federal agencies to (1) award and distribute funds in a timely and fair manner, (2) ensure that funding recipients and uses are transparent and that the resulting benefits are clearly and accurately reported, (3) ensure funds are used for authorized purposes, (4) avoid unnecessary project delays and cost overruns, and (5) achieve specific program outcomes and improve the economy.
For transportation programs, DOT is required to report on the number of direct jobs created or sustained by the Act's funds for each program and, to the extent possible, an estimate of the number of indirect jobs created or sustained by each project or activity in the associated supplying industries, including the number of job-years created and the total increase in employment since the date of enactment of the Act. In order to coordinate DOT's efforts and help ensure accountability and transparency, DOT established a team of senior officials across the department—the Transportation Investment Generating Economic Recovery (TIGER) team. According to DOT, this leadership team will coordinate consistent implementation of the Act, exchange information, provide guidance, and track transportation dollars spent. DOT established individual stewardship groups as part of the TIGER team to gather expertise from across the department to address common issues and identify coordinated and appropriate actions. According to DOT, these groups include financial stewardship, data collection, procurement and grant management, job measurement, information technology and communication, and accountability. The accountability stewardship group meets biweekly with the department's Office of the Inspector General and us to improve transparency and provide an efficient forum for sharing information between management and the auditing entities. As of April 16, DOT reported that nationally $6.4 billion in Recovery Act highway infrastructure investment funding apportioned to the states had been obligated—meaning that DOT and the states had executed agreements on projects worth this amount. For the locations that we reviewed, approximately $3.3 billion in highway funding had been obligated, with the percentage of apportioned funds obligated to the states and the District of Columbia ranging from 0 to 65 percent. (See table 2.) For two of the states, DOT had obligated over 50 percent of the states' apportioned funds; for four states, it had obligated 30 to 50 percent of the funds; for eight states, it had obligated less than 30 percent of the funds; and for three states, it had not obligated any funds. Most states we visited, while they had not yet expended significant funds, were planning to solicit bids in April or May. They also stated that they planned to meet statutory deadlines for obligating the highway funds. A few states had already executed contracts. As of April 1, the Mississippi Department of Transportation, for example, had signed contracts for 10 projects totaling approximately $77 million. These projects include the expansion of State Route 19 in eastern Mississippi into a four-lane highway. This project fulfills part of the state's 1987 Four-Lane Highway Program, which seeks to link every Mississippian to a four-lane highway within 30 miles or 30 minutes. Most often, however, we found that highway funds in the states and the District of Columbia have not yet been spent because highway projects were at earlier stages of planning, approval, and competitive contracting. For example, the Florida Department of Transportation plans to use the Recovery Act funds to accelerate road construction programs in its preexisting 5-year plan. This resulted in some projects being reprioritized and selected for earlier completion. On April 15, the Florida Legislative Budget Commission approved the Recovery Act-funded projects that the Florida Department of Transportation had submitted.
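The obligation status discussed above reduces to a ratio of obligated to apportioned funds for each location, measured against the Act's deadlines. The sketch below illustrates that calculation with hypothetical state figures; the dollar amounts and state names are assumptions, and only the March 2 apportionment date and the 120-day, 50 percent obligation requirement come from the Act as described in this report.

```python
from datetime import date, timedelta

APPORTIONMENT_DATE = date(2009, 3, 2)
DAY_120_DEADLINE = APPORTIONMENT_DATE + timedelta(days=120)

# Hypothetical state figures (dollars); actual values would come from DOT/FHWA data.
states = {
    "State A": {"apportioned": 500_000_000, "obligated": 325_000_000},
    "State B": {"apportioned": 900_000_000, "obligated": 90_000_000},
    "State C": {"apportioned": 250_000_000, "obligated": 0},
}

print(f"120-day obligation deadline: {DAY_120_DEADLINE}")
for name, s in states.items():
    pct = 100 * s["obligated"] / s["apportioned"]
    status = "meets 50 percent threshold" if pct >= 50 else "below 50 percent threshold"
    print(f"{name}: {pct:.0f}% of apportioned funds obligated ({status})")
```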
As required by the Act, states have used existing planning processes and plans to quickly identify and obligate funds for projects. For example, as of April 16, FHWA had obligated $261 million of Recovery Act transportation funding for 20 projects from California's State Highway Operation and Protection Program. These projects involve rehabilitating roadways, pavement, and rest areas as well as upgrading median barriers and guardrails. Some states reported that the use of existing plans has enabled them to quickly distribute transportation funds. As of April 16, FHWA had obligated about $277 million to New York state for 108 transportation projects. Officials reported that the state was able to move quickly on these projects largely because the New York State Department of Transportation, as required by federal surface transportation legislation, has a planning mechanism that routinely identifies needed transportation projects and performs preconstruction activities, such as completing environmental permitting requirements. Selected states reported that they targeted transportation projects that can be started and completed expeditiously, in accordance with Recovery Act requirements. Several selected states have generally focused on initiating preventive maintenance projects, because these projects require less environmental review or design work and can be started quickly. For example, the New Jersey Department of Transportation selected 40 projects and concentrated mainly on replacement projects that require little environmental clearance or extensive design work, such as highway and bridge painting and deck replacement. Officials from the New York State Department of Transportation reported that they will target most Recovery Act transportation funds to infrastructure rehabilitation, including preventive maintenance and reconstruction, such as bridge repairs and replacement, drainage improvement, repaving, and roadway construction. State officials emphasized that these projects extend the life of infrastructure and can be contracted for and completed relatively easily within the 3-year time frame required by the Act. The state will also target some Recovery Act highway dollars to more typical "shovel ready" highway construction projects for which there were previously insufficient funds. Some states also reported targeting funds toward projects with an emphasis on job creation and consideration of economically distressed areas. For example, the North Carolina Department of Transportation plans to award 70 highway and bridge stimulus projects between March and June, which are estimated to cost $466 million (of an expected $735 million). According to North Carolina Department of Transportation officials, these projects were identified based on Recovery Act criteria that priority be given to projects that are expected to be completed within 3 years and are located in economically distressed areas, among other factors. According to Colorado Department of Transportation officials, they are emphasizing construction projects rather than projects in planning or design phases, in order to maximize job creation. These projects include resurfacing and highway bridge replacements in the Denver metropolitan area, as well as improvements to mountain highways. The Illinois Department of Transportation reported that it is planning to spend a large share of its estimated $655 million in Recovery Act funds for highway and bridge projects in economically distressed areas.
In March 2009, FHWA directed its field offices to ensure that states give adequate consideration to economically distressed areas in selecting projects. Specifically, field offices were directed to discuss this issue with the states and to document FHWA oversight. We plan to review states’ consideration of economically distressed areas and FHWA’s oversight in our subsequent reports on the Recovery Act. Several of the locations that we are reviewing have submitted certifications that they have maintained their level of state funding of projects (maintenance-of-effort certifications) with explanations or conditions attached. Seven states and the District of Columbia submitted “explanatory” certifications—certifications that used language that articulated assumptions or stated the certification was based on the best information available at the time. Six states submitted “conditional” certifications because their certifications were subject to conditions or assumptions, future legislative action, future revenues, or other conditions. The remaining three states—Arizona, Michigan, and New York—submitted certifications free of explanatory or conditional language. On April 22, DOT informed governors that the Recovery Act does not authorize the use of conditional or qualified certifications. The Secretary of Transportation provided the states the opportunity to amend their maintenance-of-effort certifications by May 22, 2009, as needed. In future bimonthly reports, we expect to report on FHWA’s oversight of states’ efforts to comply with the maintenance of effort requirements and why states indicated that they believe that conditions in their states may change such that they may not be able to maintain their levels of effort. States’ and localities’ tracking and accounting systems are critical to the proper execution and accurate and timely recording of transactions associated with the Recovery Act. Officials from all 16 states and the District of Columbia told us they have established or are establishing methods and processes to separately identify (i.e., tag), monitor, track, and report on the use of the Recovery Act funds they receive. The states and localities generally plan on using their current accounting systems for recording Recovery Act funds, but many are adding identifiers to account codes to track Recovery Act funds separately. Many said this involved adding digits to the end of existing accounting codes for federal programs. In California, for instance, officials told us that while their plans for tracking, control, and oversight are still evolving, they intend to rely on existing accountability mechanisms and accounting systems, enhanced with newly created codes, to separately track and monitor Recovery Act funds that are received by and pass through the state. The Pennsylvania Department of Transportation issued an administrative circular in March 2009 that established specific Recovery Act program codes to track highway and bridge construction spending, including four new account codes for Recovery Act fund reimbursements to local governments. Several officials told us that the state’s accounting system should be able to track Recovery Act funds separately. State officials reported a range of concerns on the federal requirements to identify and track Recovery Act funds going to subrecipients, localities and other non-state entities. 
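The account-code tagging approach described above, in which states append identifiers or extra digits to existing codes so Recovery Act transactions can be reported separately from regular federal-aid activity, can be illustrated with a short sketch. The suffix, account codes, and transactions below are illustrative assumptions and do not reflect any particular state's accounting system.

```python
RECOVERY_ACT_SUFFIX = "ARRA"  # assumed identifier appended to existing account codes

def tag_account_code(base_code: str, recovery_act: bool) -> str:
    """Return the account code, with a Recovery Act identifier appended if applicable."""
    return f"{base_code}-{RECOVERY_ACT_SUFFIX}" if recovery_act else base_code

# Hypothetical transactions drawn from an existing accounting system.
transactions = [
    {"code": "4510", "desc": "Bridge deck replacement", "amount": 1_250_000, "arra": True},
    {"code": "4510", "desc": "Routine resurfacing",     "amount":   400_000, "arra": False},
    {"code": "4620", "desc": "Local reimbursement",     "amount":   275_000, "arra": True},
]

# Roll up Recovery Act spending separately from regular federal-aid spending.
totals = {}
for t in transactions:
    key = tag_account_code(t["code"], t["arra"])
    totals[key] = totals.get(key, 0) + t["amount"]

for code, amount in sorted(totals.items()):
    print(f"{code}: ${amount:,}")
```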
States' concerns about the subrecipient tracking requirements include their inability to track these funds with existing systems, uncertainty regarding state officials' accountability for the use of funds that do not pass through state government entities, and their desire for additional federal guidance to establish specific expectations on subrecipient reporting requirements. Additionally, FHWA has identified eight major risks in implementing the Recovery Act, including states' oversight of local public agencies and these agencies' lack of experience in handling federal-aid projects. Officials from many of the 16 selected states and the District of Columbia told us that they had concerns about the ability of subrecipients, localities, and other nonstate entities to separately tag, monitor, track, and report on the Recovery Act funds they receive. Given that governors have certified the use of funds in their states, officials in many states also expressed concern about being held accountable for funds flowing directly from federal agencies to localities or other recipients. For example, officials in Colorado expressed concern that they will be held accountable for all Recovery Act funds flowing to the state, including those flowing directly to nonstate entities, such as transportation districts, over which they have neither oversight nor information. Officials in several states indicated that either their states would not be tracking Recovery Act funds going to the local level or that they were unsure how much data would be available on the use of these funds. For example, Pennsylvania officials said that the state will rely on subrecipients to meet reporting requirements at the local level. Recipients and subrecipients can be local governments or other entities such as transit agencies. For example, about $367 million in Recovery Act money for transit capital assistance and fixed guideway (such as commuter rails and trolleys) modernization was apportioned directly to areas such as Philadelphia, Pittsburgh, and Allentown. State officials also told us that the state would not track or report Recovery Act funds that go straight from the federal government to localities and other entities. We will discuss these issues with local governments and transit entities as we conduct further work. OMB and FHWA continue to develop guidance and communication strategies for Recovery Act implementation as it relates to nonstate recipients. To mitigate risks, such as local public agencies' lack of experience in handling federal-aid projects, FHWA outlined eight mitigation strategies, including (1) providing Recovery Act guidance and monitoring strategies for risk areas, such as subrecipient guidance and checklists to assist local monitoring and oversight, and (2) sharing risks through agreement and contract modifications to help ensure oversight and reporting of funds. To foster efficient and timely communications, in our first bimonthly report on the Recovery Act, we recommended that OMB develop an approach that provides dependable notification to (1) prime recipients in states and localities when funds are made available for their use, (2) states, where the state is not the primary recipient of funds, but has a statewide interest in this information, and (3) all non-federal recipients, on planned releases of federal agency guidance and, if known, whether additional guidance or modifications are expected. Some states also expressed concerns about the Recovery Act reporting requirements.
State officials and others are uncertain about the ability of reporting systems to roll up data from multiple sources and synchronize state level reporting with Recovery.gov. Some officials are concerned that too many federal requirements will slow distribution and use of funds and others have expressed reservations about the capacity of smaller jurisdictions and nonprofit organizations to report data. Even those who are confident about their own systems are uncertain about the cost and speed of making any required modifications needed for Recovery.gov reporting or any further data collection requirements. Some state transportation agencies also noted concerns about the burden and redundancy of Recovery Act reporting, including reporting for the state, DOT and its modal offices, and Congress. In response to states’ concerns about Recovery Act reporting requirements, in our first bimonthly report we recommended that OMB, in consultation with the Recovery Accountability and Transparency Board and states, evaluate current information and data collection requirements to determine whether sufficient, reliable, and timely information is being collected before adding further data collection requirements. We also recommended that OMB consider the cost and burden of additional reporting on states and localities against expected benefits. States vary in how they plan to assess the impact of Recovery Act funds. Some states will use existing federal program guidance or performance measures to evaluate impact, particularly for ongoing programs, such as FHWA’s Surface Transportation Program. Other states are waiting for additional guidance on how and what to measure to assess impact. Some states indicated that they have not determined how they will assess impact. A number of states have expressed concerns about definitions of jobs created and jobs retained under the Act, as well as methodologies that can be used for the estimation of each. Officials from several of the states we met with expressed a need for clearer definitions of “jobs retained” and “jobs created.” Officials from a few states expressed the need for clarification on how to track indirect jobs, while others expressed concern about how to measure the impact of funding that is not designed to create jobs. Some of the questions that states and localities have raised about the Recovery Act implementation may have been answered in part via the guidance provided by OMB for the data elements, as well as by guidance issued by federal departments. For example, OMB provided draft definitions for employment, as well as for jobs retained and jobs created via Recovery Act funding. However, OMB did not specify methodologies such as some states have sought for estimating jobs retained and jobs created. Data elements were presented in the form of templates with section-by-section data requirements and instructions. OMB provided a comment period during which it is likely to receive many questions and requests for clarification from states, localities, and other entities that can directly receive Recovery Act funding. OMB plans to update this guidance again in the next 30 to 60 days. 
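One way to translate project-level data into an estimate of direct jobs is to convert contractor-reported worker hours into full-time-equivalent job-years, as in the sketch below. The 2,080-hour work year and the project figures are assumptions for illustration; as noted above, OMB had not specified such an estimation methodology at the time of our review.

```python
HOURS_PER_JOB_YEAR = 2_080  # assumed full-time hours per year (40 hours x 52 weeks)

# Hypothetical contractor-reported worker hours by project.
project_hours = {
    "Route 12 resurfacing": 18_500,
    "Creek bridge replacement": 41_600,
    "Guardrail upgrades": 6_240,
}

total_hours = sum(project_hours.values())
for project, hours in project_hours.items():
    print(f"{project}: {hours / HOURS_PER_JOB_YEAR:.1f} job-years")
print(f"Total direct employment estimate: {total_hours / HOURS_PER_JOB_YEAR:.1f} job-years")
```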
Given questions raised by many state and local officials about how best to determine both direct and indirect jobs created and retained under the Recovery Act, we recommended in our first bimonthly report that OMB continue its efforts to identify appropriate methodologies that can be used to assess jobs created and retained from projects funded by the Recovery Act, determine the Recovery Act spending when job creation is indirect, and identify those types of programs, projects, or activities that in the past have demonstrated substantial job creation or are considered likely to do so in the future. Some states are also pursuing a number of different approaches for measuring the effects of Recovery Act funding for transportation projects. For example, the Iowa Department of Transportation tracks the number of worker hours by highway project based on contractor reports and will use these reports to estimate jobs created. New Jersey Transit is using an academic study that examined job creation from transportation investment to estimate the number of jobs that are created by contractors on its Recovery Act-funded construction projects. In addition, Mississippi hired a contractor to conduct an economic impact analysis of transportation projects. As previously mentioned, we will be reporting further on states’ and localities’ use of Recovery Act funds, including maintenance of effort and projects in economically distressed areas. In addition, we plan to undertake or are already conducting these other assessments of Recovery Act activities that fall within the Committee’s interests: Supplementary discretionary grants: The Act provides $1.5 billion to be awarded competitively to state and local governments and transit agencies for surface transportation projects that will have a significant impact on the nation, a metropolitan area, or a region. This is a new program and the Act requires that DOT publish its grant selection criteria by mid-May. We expect to assess how DOT developed its criteria and plan to report several weeks after the criteria are published. High-speed rail: The Act provides about $8 billion for projects that support intercity high-speed rail service. This is also a new program. Our work will likely focus on assessing how DOT is developing a program that will increase the chances of viable high-speed rail projects, consistent with recommendations we recently made on the development of high-speed rail. We expect to start this work later this year. Federal buildings: The Act provides about $5.6 billion for the General Services Administration (GSA) to spend on projects related to its federal buildings, primarily to convert existing buildings to high-performance green buildings. As a part of our ongoing work to report on agencies’ implementation of the Energy Independence and Security Act of 2007, which among other things calls for agencies to increase the energy efficiency and the availability of renewable energy in federal buildings, we plan to assess the impact of Recovery Act funding on GSA’s ability to meet the 2007 energy act’s high-performance federal building requirements. In addition, in coordination with GSA’s Office of Inspector General, this summer, we plan to review GSA’s conversion of existing federal buildings to high-performance green buildings. We will work with this Committee as we begin work in these areas and in other areas in which the Committee might be interested. Mr. Chairman, this concludes my prepared statement. 
I would be pleased to respond to any questions that you or other Members of the Committee might have. For further information regarding this statement, please contact Katherine Siggerud at (202) 512-2834 or siggerudk@gao.gov. Contact points for our Congressional Relations and Public Affairs offices may be found on the last page of this statement. Individuals who made key contributions to this statement are Daniel Cain, Steven Cohen, Heather Krause, Heather Macleod, and James Ratzenberger. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

The American Recovery and Reinvestment Act of 2009 (Recovery Act) provided $48.1 billion in additional spending at the Department of Transportation (DOT) for investments in transportation infrastructure, including highways, passenger rail, and transit. This statement provides a general overview of (1) selected states' use of Recovery Act funds for highway programs, (2) the approaches taken by these states to ensure accountability for these funds, and (3) the selected states' plans to evaluate the impact of the Recovery Act funds that they receive for highway programs. This statement is based on work in which GAO examined the use of Recovery Act funds by a core group of 16 states and the District of Columbia, representing about 65 percent of the U.S. population and two-thirds of the intergovernmental federal assistance available through the Act. GAO issued its first bimonthly report on April 23, 2009. According to DOT, as of mid-April, the 17 locations that GAO reviewed had obligated $3.3 billion of the over $15 billion (21 percent) in highway investment funds that DOT had apportioned to them. These funds will be used in about 900 projects. States are using existing statewide plans to quickly identify and obligate funding for Recovery Act transportation projects. Several states have generally focused on rehabilitation and repair projects, because these projects require less environmental review or design work. For example, the New Jersey Department of Transportation selected 40 projects and concentrated mainly on projects that require little environmental clearance or extensive design work, such as highway and bridge painting and deck replacement. Some states also reported targeting funds toward projects with an emphasis on job creation and consideration of economically distressed areas. For example, Colorado Department of Transportation officials are emphasizing construction projects, such as highway bridge replacements, rather than projects in planning or design phases, in order to maximize job creation. The Illinois Department of Transportation reported that it is planning to spend a large share of its estimated $655 million in Recovery Act funds for highway and bridge projects in economically distressed areas. States are modifying systems to track Recovery Act funds but are concerned about tracking funds distributed directly to nonstate entities. Officials from all 16 of the states that GAO is reviewing and the District of Columbia stated that they have established or are establishing ways to identify, monitor, track, and report on the use of the Recovery Act funds.
However, officials from many of these states and the District of Columbia have concerns about the ability of subrecipients, localities, and other nonstate entities to separately monitor, track, and report on the Recovery Act funds these nonstate entities receive. Officials in several states also expressed concern about being held accountable for funds flowing directly to localities or other recipients and indicated that either their states would not be tracking Recovery Act funds going to the local level or that they were unsure how much data would be available on the use of these funds. GAO's April 23 report recommended that OMB evaluate current reporting requirements before adding further data collection requirements. States vary in how they plan to assess the impact of Recovery Act funds. For programs such as the Federal-aid Highway Surface Transportation Program, some states will use existing federal program guidance or performance measures to evaluate impact. However, a number of states have expressed concerns about definitions of "jobs retained" and "jobs created" under the act, as well as methodologies that can be used for the estimation of each. Given these concerns, GAO recommended in its first bimonthly report that OMB continue to identify methodologies that can be used to determine jobs retained and created from projects funded by the Recovery Act.
Distance education is a growing force in postsecondary education, and its rise has implications for the federal student aid programs. Studies by Education indicate that enrollments in distance education quadrupled between 1995 and 2001. By the 2000-2001 school year, nearly 90 percent of public 4-year institutions were offering distance education courses, according to Education’s figures. Entire degree programs are now available through distance education, so that a student can complete a degree without ever setting foot on campus. Students who rely extensively on distance education, like their counterparts in traditional campus-based settings, often receive federal aid under Title IV of the Higher Education Act, as amended, to cover the costs of their education, though their reliance on federal aid is somewhat less than students who are not involved in any distance education. We previously reported that 31 percent of students who took their entire program through distance education received federal aid, compared with 39 percent of students who did not take any distance education courses. There is growing recognition among postsecondary officials that changes brought about by the growing use of distance education need to be reflected in the process for monitoring the quality of schools’ educational programs. Although newer forms of distance education—such as videoconferencing or Internet courses—may incorporate more elements of traditional classroom education than older approaches like correspondence courses, they can still differ from a traditional educational experience in many ways. Table 1 shows some of the potential differences. The Higher Education Act focuses on accreditation—a task undertaken by outside agencies—as the main tool for ensuring quality in postsecondary programs. Under the act, accreditation for purposes of meeting federal requirements can only be done by agencies that are specifically “recognized” by Education. In all, Education recognizes 62 accrediting agencies. Some, such as Middle States Association of Colleges and Schools – Commission on Higher Education and the Western Association of Schools and Colleges – Accrediting Commission for Community and Junior Colleges, accredit entire institutions that fall under their geographic or other purview. Others, such as the American Bar Association—Council of the Section of Legal Education and Admissions to the Bar, accredit specific programs or departments. Collectively, accrediting agencies cover public and private 2-year and 4-year colleges and universities as well as for-profit vocational schools and nondegree training programs. Thirty-nine agencies are recognized for the purpose of accrediting schools or programs for participation in the federal student aid programs. Education is required to recognize or re-recognize these agencies every 5 years. In order to be recognized by Education as a reliable authority with regard to educational quality, accrediting agencies must, in addition to meeting certain basic criteria, establish standards that address 10 broad areas of institutional quality, including student support services, facilities and equipment, and success with respect to student achievement. 
While the statute provides that these standards must be consistently applied to an institution's courses and programs of study, including distance education courses and programs, it also gives accrediting agencies flexibility in deciding what to require under each of the 10 areas, including flexibility in whether and how to include distance education within the accreditation review. The current accreditation process is being carried out against a backdrop of public concern about holding schools accountable for student learning outcomes. For example, concerns have been expressed about such issues as the following: Program completion—the percentage of full-time students who graduate with a 4-year postsecondary degree within 6 years of initial enrollment was about 52 percent in 2000. Unprepared workforce—business leaders and educators have pointed to a skills gap between many students' problem solving, communications, and analytical thinking ability and what the workplace requires. To address concerns such as these, there is increased interest in using outcomes more extensively as a means of ensuring quality in distance education and campus-based education. The Council for Higher Education Accreditation—a national association representing accreditors—has issued guidelines on distance education and campus-based programs that, among other things, call for greater attention to student learning outcomes. Additionally, in May 2003, we reported that 18 states are promoting accountability by publishing the performance measures of their colleges and universities, including retention and graduation rates, because some officials believe that this motivates colleges to improve their performance in that area. At the national level, Education stated in its 2004 annual plan that it will propose to hold institutions more accountable for results, such as ensuring a higher percentage of students complete their programs on time. The congressionally appointed Web-based Education Commission has also called for greater attention to student outcomes. The Commission said that a primary concern related to program accreditation is that "quality assurance has too often measured educational inputs (e.g., number of books in the library, etc.) rather than student outcomes." Finally, the Business Higher Education Forum—an organization representing business executives and leaders in postsecondary education—has said that improvements are needed in adapting objectives to specific outcomes and certifiable job skills that address a shortage of workers equipped with analytical thinking and communication skills. Although current federal restrictions on the extent to which schools can offer programs by distance education and still qualify to participate in federal student aid programs affect a small number of schools, the growing popularity of distance education could cause the number to increase in the future. We found that 14 schools were either now adversely affected by the restrictions or would be affected in the future; collectively, these schools serve nearly 225,000 students. Eight of the 14 schools are exempt from the restrictions because they have received waivers as participants in Education's Demonstration Program, under which schools can remain eligible to participate in the student aid programs even if the percentage of distance education courses or the percentage of students involved in distance education rises above the maximums set forth in the law.
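In simplified terms, the restrictions described above turn on two ratios: the share of an institution's courses offered through distance education and the share of its students involved in distance education. The sketch below illustrates that eligibility test; the input figures are hypothetical, and the calculation omits the statutory details of how courses and students are counted.

```python
def exceeds_50_percent_rules(distance_courses: int, total_courses: int,
                             distance_students: int, total_students: int) -> bool:
    """Return True if either distance education share reaches 50 percent."""
    course_share = distance_courses / total_courses
    student_share = distance_students / total_students
    return course_share >= 0.5 or student_share >= 0.5

# Hypothetical institutions; without a waiver, an institution returning True
# would be ineligible to participate in the federal student aid programs.
schools = {
    "School A": (120, 400, 9_000, 30_000),   # below both thresholds
    "School B": (300, 350, 26_000, 29_000),  # above both thresholds
}

for name, figures in schools.items():
    print(f"{name}: exceeds 50-percent rules = {exceeds_50_percent_rules(*figures)}")
```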
Three of the remaining 5 schools in the Demonstration Program are negotiating with Education to obtain a waiver. The 14 schools that the current federal restrictions—called the 50-percent rules—affect, or nearly affect, are shown in table 2. They vary in a number of respects. For example, 2 are large (the University of Phoenix has nearly 170,000 students and the University of Maryland University College has nearly 30,000), while 5 have fewer than 1,000 students. Six of the 14 are private for-profit schools, 5 are private nonprofit schools, and 3 are public. Thirteen of the schools are in Education's Demonstration Program, and without the waivers provided under this program, 8 of the 13 would be ineligible to participate in federal student aid programs because 50 percent or more of their students are involved in distance education. One school that is not part of the Demonstration Program faces a potential problem in the near future because of its growing distance education programs. Two examples from among the 14 schools will help illustrate the effect that the restrictions on the size of distance education programs have on schools and their students. The University of Maryland University College, a public institution located in Adelphi, Maryland, had nearly 30,000 students, and more than 70 percent of its students took at least one Internet course in the 2000-2001 school year. The college is participating in Education's Demonstration Program and has received waivers to the restrictions on federal student aid for schools with substantial distance education programs. According to university officials, without the waivers, the college and about 10,000 students (campus-based and distance education students) would no longer receive about $65 million in federal student aid. Jones International University, a private for-profit school founded in 1993 and located in Englewood, Colorado, served about 450 students in the 2000-2001 school year. The university offers all of its programs online and offers no campus-based courses. The university has received accreditation from the North Central Association of Colleges and Schools, a regional accrediting agency that reviews institutions in 19 states. In August 2003, school administrators told us that they would be interested in federal student aid program eligibility in the future. In December 2003, the school became a participant in Education's Demonstration Program and, therefore, its students will be eligible for federal student aid when Education approves the school's administrative and financial systems for managing the federal student aid programs. In the second of two congressionally mandated reports on federal laws and regulations that could impact access to distance education, Education concluded, "[T]he Department has uncovered no evidence that waiving the 50-percent rules, or any of the other rules for which waivers were provided, has resulted in any problems or had negative consequences." In its report, Education also stated that there is a need to amend the laws and regulations governing federal student financial aid to expand distance education opportunities, and officials at Education recognize that several policy options are available for doing so. A significant consideration in evaluating such options is the cost to the federal student aid programs. Regarding these costs, Education has not provided data on the cost of granting waivers to the 50-percent rules in its first two reports on the Demonstration Program.
Based in part on our discussions with Education officials and proposals made by members of Congress, there appear to be three main options for consideration in deciding whether to eliminate or modify the current federal restrictions on distance education: (1) continuing the use of case-by-case waivers, as in the current Demonstration Program, coupled with regular monitoring and technical assistance; (2) offering exceptions to those schools with effective controls already in place to prevent fraud and abuse, as evidenced by such characteristics as low default rates; or (3) eliminating the rules and imposing no additional management controls. Evaluating these options involves three main considerations: the extent to which the changes improve access to postsecondary schools, the impact the changes would have on Education's ability to prevent fraudulent or abusive practices by institutions, and the cost to the federal student aid programs and of monitoring schools with substantial distance education programs. Our analysis of the three options, as shown in table 3, suggests that while all three would improve students' access to varying degrees, the first two would likely carry a lower risk of fraud and abuse than the third, which would eliminate the rules and controls altogether. We also found support for some form of accountability at most of the 14 schools that current restrictions affect or nearly affect. For example, officials at 11 of these schools said they were generally supportive of some form of accountability to preserve the integrity of the federal student aid programs rather than total elimination of the restrictions. The first option would involve reauthorizing the Demonstration Program as a means of continuing to provide schools with waivers or other relief from current restrictions. Even though exempting schools from current restrictions on the size of distance education programs imposes costs on the federal student aid programs, Education has yet to describe the extent of those costs in its reports on the program. According to Education staff, the data on the amount of federal student aid could be developed, and there are no major barriers to doing so. The data would prove valuable in determining the potential costs of various policy options since the program is expanding in scope—five new schools joined in December 2003—and additional reports will need to be prepared for the Congress. Our review of the Demonstration Program and our discussions with Education officials surfaced two additional considerations that would be worth addressing if the Congress decided to reauthorize the program. They relate to streamlining Demonstration Program requirements and improving resource utilization. Reducing paperwork requirements. When the Congress authorized the Demonstration Program, it required that Education evaluate various aspects of distance education, including the numbers and types of students participating in the program and the effective use of different technologies for delivering distance education. These requirements now may be redundant since Education collects such information as part of its National Postsecondary Student Aid Study and other special studies on distance education. Eliminating such requirements could ease the paperwork burden on participating institutions and Education staff. Limiting participation to schools that are adversely affected by federal restrictions.
Some schools participating in the Demonstration Program do not need waivers to the 50-percent rules, because their programs are not extensive enough to exceed current restrictions. Limiting participation in the program to only schools that need relief from restrictions on the size of distance education programs could ease the administrative burden on Education. However, in the future, more schools may be interested in receiving waivers if their distance education programs expand. The seven accrediting agencies we reviewed varied in the extent to which their institutional reviews included distance education. While all seven agencies had adopted standards or policies calling for campus-based and distance education programs to be evaluated using the same standards, the agencies varied in (1) the extent to which they required schools to demonstrate that distance education and campus-based programs were comparable and (2) the size a distance education program had to be before it was formally included in the overall institutional review. While the Higher Education Act requires Education to ensure that accrediting agencies have standards and policies in place regarding the quality of education, including distance education, it gives the agencies latitude with regard to the details of setting their standards or policies. Differences in standards or policies do not necessarily lead to differences in educational quality, but if one accrediting agency's policies and procedures are more or less rigorous than another's, the potential for quality differences may increase. An Education official said the historical role of the federal government in exerting control over postsecondary education has been limited. Similarly, Education has limited authority to push for greater consistency in areas related to the evaluation of distance education. The agencies we reviewed all had standards or policies in place for evaluating distance education programs. The Higher Education Act does not specify how accrediting agencies should review distance education programs, but instead directs them to cover key subject areas, such as student achievement, curricula, and faculty. The law does not specify how accrediting agencies are to develop their standards or what an appropriate standard should be. All seven agencies had a policy stating that the standards they would apply in assessing a school's distance education programs would be the same as the standards used for assessing campus-based programs. The six regional accrediting agencies within this group had also adopted a set of supplemental guidelines to help schools assess their own distance education programs. While all the agencies had standards or policies in place for evaluating distance education and campus-based learning, we found variation among the agencies in the degree to which they required institutions to compare their distance learning courses with their campus-based courses. Five of the seven agencies, including the one national accrediting agency reviewed, required schools to demonstrate comparability between distance education programs and campus-based programs.
For example, one agency required each school to evaluate "the educational effectiveness of its distance education programs (including assessments of student learning outcomes, student retention, and student satisfaction) to ensure comparability to campus-based programs." Another accrediting agency required that the successful completion of distance education courses and programs be similar to that of campus-based courses and programs. The remaining two accrediting agencies did not require schools to demonstrate comparability in any tangible way. A second area in which variations existed is the threshold for deciding when to conduct a review of a distance education program. While accrediting agencies complete their major review of a school on a multiyear cycle, federal regulations provide that they also must approve "substantive changes" to the accredited institutions' educational mission or program. The regulations prescribe seven types of change, such as a change in the established mission or objectives of the institution, that an agency must include in its definition of a substantive change for a school. For example, starting a new field of study or beginning a distance education program might both be considered a substantive change for a school. However, the seven agencies vary in their definition of "substantive," so the amount of change needed for such a review to occur varies from agency to agency. Three of the seven agencies review distance education programs when at least half of all courses in a program are offered through distance learning. A fourth agency reviews at an earlier stage—when 25 percent or more of a degree or certificate program is offered through distance learning. The remaining three agencies have still other policies for when they initiate reviews of distance education programs. The variations among accrediting agencies that we found probably result from the statutory latitude provided to accrediting agencies in carrying out their roles. For example, in the use of their varying policies and practices, the agencies are operating within the flexible framework provided under the Higher Education Act. Such variations likewise do not necessarily lead to differences in how effectively agencies are able to evaluate educational quality. However, the lack of consistently applied procedures for matters such as comparing distance education and campus-based programs or deciding when to incorporate reviews of new distance education programs could potentially increase the chances that some schools are being held to higher standards than others. Additionally, the flexible framework of the Higher Education Act extends to the requirements that accrediting agencies set for schools in evaluating student learning outcomes. In discussions on this matter, Education officials indicated that the law's flexibility largely precludes them from being more prescriptive about the standards, policies, or procedures that accrediting agencies should use. The seven accrediting agencies we reviewed varied in the extent to which their standards and policies addressed student learning outcomes for either campus-based or distance education courses or programs. Over the past decade, our work on outcomes-based assessments in a variety of different areas shows that when organizations successfully focus on outcomes, they do so through a systematic approach that includes three main components.
The three are (1) setting measurable and quantifiable goals for program outcomes, (2) developing strategies for achieving these goals, and (3) disclosing the results of their efforts to the public. The accrediting agencies we reviewed generally recognized the importance of outcomes, but only one of the seven had an approach that required schools to cover all three of these components. The three-part approach we found being used to successfully implement an outcomes-based management strategy was based on our assessments across a wide spectrum of agencies and activities, including, for example, the Federal Emergency Management Agency working with local governments and the building industry to strengthen building codes to limit deaths and property losses from disaster and the Coast Guard working with the towing industry to reduce marine casualties. Briefly, here are examples of how these three components would apply in an educational setting. Developing measurable and quantifiable goals. It is important that outcome goals be measurable and quantifiable, because without such specificity there is little opportunity to determine progress objectively. A goal of improving student learning outcomes would require measures that reflect the achievement of student learning. For example, a goal of improving student learning outcomes would need to be translated into more specific and measurable terms that pertain directly to a school’s mission, such as an average state licensing examination score or a certain job placement rate. Other measures could include test scores measuring writing ability, the ability to defend a point orally, or analyze critically, and work habits, such as time management and organization skills. Developing strategies for achieving the goals. This component involves determining how human, financial, and other resources will be applied to achieve the goals. In education, this component could include such strategies as training for faculty, investments in information technology, or tutoring programs to help improve skills to desired levels. This component helps align an organization’s efforts towards improving its efficiency and effectiveness. Our work has shown that providing a rationale for how the resources will contribute to accomplishing the expected level of performance is an important part of this component. Reporting performance data to the public. Making student learning outcome results public is a primary means of demonstrating performance and holding institutions accountable for results. Doing so could involve such steps as requiring schools to put distance learning goals and student outcomes (such as job placement rates or pass rates on state licensing examinations) in a form that can be distributed publicly, such as on the school’s Web site. This would provide a basis for students to make more informed decisions on whether to enroll in distance education programs and courses. It would also provide feedback to schools on where to focus their efforts to improve performance. 
Education’s 2002-2007 strategic plan calls for public disclosure of data by stating, “n effective strategy for ensuring that institutions are held accountable for results is to make information on student achievement and attainment available to the public, thus enabling prospective students to make informed choices about where to attend college and how to spend their tuition dollars.” Similarly, in September 2003, the Council for Higher Education Accreditation stated that “institutions and programs should routinely provide students and prospective students with information about student learning outcomes and institutional and program performance in terms of these outcomes” and that accrediting organizations should “establish standards, policies and review processes that visibly and clearly expect institutions and programs to discharge responsibilities.” The accrediting agencies we reviewed generally recognized the importance of student learning outcomes and had practices in place that embody some aspects of the outcomes-based approach. However, only one of the agencies required schools to have all three components in place. Developing measurable and quantifiable goals. Five of seven agencies had standards or policies requiring that institutions develop measurable goals. For example, one accrediting agency required institutions to formulate goals for its distance learning programs and campus-based programs that cover student achievement, including course completion rates, state licensing examination scores, and job placement rates. Another accrediting agency required that schools set expectations for student learning in various ways. For example, the agency required institutions to begin with measures already in place, such as course and program completion rate, retention rate, graduation rate, and job placement rate. We recognize that each institution will need to develop its own measures in a way that is aligned with its mission, the students it serves, and its strategic plans. For example, a 2-year community college that serves a high percentage of low-income students may have a different mission, such as preparing students for 4-year schools, than a major 4-year institution. Developing strategies for achieving the goals. All of the agencies we visited had standards or policies requiring institutions to develop strategies for achieving goals and allocating resources. For example, one agency had a standard that requires institutions to effectively organize the human, financial, and physical resources necessary to accomplish its purposes. Another agency had a standard that an institution’s student development services must have adequate human, physical, financial, and equipment resources to support the goals of the institution. In addition, the standard requires that staff development to be related to the goals of the student development program and should be designed to enhance staff competencies and awareness of current theory and practice. Our prior work on accountability systems, however, points out that when measurable goals are not set, developing strategies may be less effective because there is no way to measure the results of applying the strategies and no way of determining what strategies to develop. Our visits to the accrediting agencies produced specific examples of schools they reviewed that had tangible results in developing strategies for meeting distance education goals. 
One was Old Dominion University, which had collected data on the writing skills of distance education students. When scores by distance learners declined during an academic year, school administrators identified several strategies to improve students’ writing abilities. They had site directors provide information on tutoring to students and directed students to writing and testing centers at community colleges. In addition, they conducted writing workshops at sites where a demonstrated need existed. After putting these strategies in place, writing test scores improved. Reporting performance data to the public. Only one of the agencies had standards or policies requiring institutions to disclose student learning outcomes to the public. However, various organizations, including the Council for Higher Education Accreditation, are considering ways to make the results of such performance assessments transparent and available to the public. Among other things, the Council is working with institutions and programs to create individual performance profiles or to expand existing profiles. The Student Right to Know and Campus Security Act of 1990 offers some context for reporting performance data to the public. This act requires schools involved in the federal student loan programs to disclose, among other things, completion or graduation rates and, if applicable, transfer-out rates for certificate- or degree-seeking, full-time, first-time undergraduates. In this regard, Education is considering ways to make available on its Web site the graduation rates of these schools. However, according to two postsecondary experts, the extent that schools make such information available to prospective students may be uneven. The federal government has a substantial interest in the quality of postsecondary education, including distance education programs. As distance education programs continue to grow in popularity, statutory restrictions on the size of distance education programs—put in place to guard against fraud and abuse in correspondence schools—might soon result in increasing numbers of distance education students losing eligibility for federal student aid. At the same time, some form of control is needed to prevent the potential for fraud and abuse. Over the past few years, the Department of Education has had the authority to grant waivers to schools in the Demonstration Program so that schools can bypass existing statutory requirements. The waivers offer schools the flexibility to increase the size of their distance education programs while remaining under the watchful eye of Education. Education is required to evaluate the efficacy of these waivers as a way of determining the ultimate need for changing the statutory restrictions against distance education. To do so, the Department would need to develop data on the cost to the federal student aid programs of granting waivers to schools. Developing such data and evaluating the efficacy of waivers would be a helpful step in providing information to the Congress about ways for balancing the need to protect the federal student aid programs against fraud and abuse while potentially providing students with increased access to postsecondary education. In addition to administering the federal student aid programs, Education is responsible for ensuring the quality of distance education through the postsecondary accreditation process. 
Among other things, measures of the quality of postsecondary education include student learning outcomes, such as the extent to which students complete programs and the extent to which students' performance improves over time. As distance education programs proliferate, challenges with evaluating these programs mount because accreditation procedures were developed around campus-based, classroom learning. There is growing awareness in the postsecondary education community that additional steps may be needed to evaluate and ensure the quality of distance education and campus-based programs, though there is far less unanimity about how to go about it. Several accrediting agencies have taken significant steps towards applying an outcomes-based, results-oriented approach to their accreditation process, including for distance education. These steps represent a potential set of "best practices" that could provide greater accountability for the quality of distance education. Due to the autonomous nature of accrediting agency operations, Education cannot require that all accrediting agencies adopt these practices. It could, however, play a pivotal role in encouraging and fostering the use of an outcomes-based model. In the long run, if the practices of accrediting agencies remain so varied that program quality is affected, Education may need additional authority to bring about a more consistent approach. Finally, if Education wishes to hold schools more accountable for the quality of distance education and campus-based programs—such as ensuring that a minimum percentage of students complete their programs—aligning the efforts of accrediting agencies to ensure that these factors are measured could increase the likelihood of success in this area. Indeed, a more systematic approach by accrediting agencies could help Education in its effort to focus greater attention on evaluating schools and educational policy through such outcomes. To better inform federal policymakers, we recommend that the Secretary of Education include data in future Demonstration Program reports on the potential cost to the federal student aid programs of waiving the 50-percent rules. To enhance oversight of distance education quality, we recommend that the Secretary of Education (1) develop, with the help of accrediting agencies and schools, guidelines or a mutual understanding for more consistent and thorough assessment of distance education programs, including developing evaluative components for holding schools accountable for such outcomes, and (2) if necessary, request authority from the Congress to require that accrediting agencies use these guidelines in their accreditation efforts. In commenting on a draft of this report, Education generally agreed with our findings and the merits of our recommendations. For instance, Education said that it will consider the potential cost to the federal student aid programs of eliminating the 50-percent rules; however, due to the timing of the process of reauthorizing the Higher Education Act, Education believes it is unlikely these estimates will become part of a future report to Congress on the Demonstration Program. While we can appreciate the difficulties surrounding the timing of the reauthorization, we believe that policymakers would be better informed if this information were provided to them as part of the Demonstration Program reports.
Given the uncertainty about whether Congress will indeed amend the 50-percent rules as part of reauthorization and that the timing of such changes is uncertain, providing information on the costs of the waivers would appear to have value—especially since such information would, in part, carry out the spirit of Demonstration Program requirements. With respect to our recommendation for accreditation, Education said that it would study it carefully. Education agrees that it could engage in a series of discussions with accrediting agencies and schools leading to guidance on assessment and public disclosure of information. Education, however, said that the results would be largely informational because the agencies would not be required to adopt the guidance, and Education is not convinced of the necessity or appropriateness of requiring the guidance via the Higher Education Act. Again, we can appreciate Education's position on this issue, but continue to believe that greater accountability for student learning outcomes is necessary for enhanced oversight of distance education programs. Given Education's stated desire to hold institutions more accountable for results, such as ensuring a higher percentage of students complete their programs on time, working with accrediting agencies to develop guidelines or a mutual understanding of what this involves would be one management tool for doing so. We are sending copies of this report to the Secretary of Education, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. Please call me at (202) 512-8403 if you or your staffs have any questions about this report. Other contacts and acknowledgments are listed in appendix III. To address the two questions about the extent to which current federal restrictions on distance education affect schools’ ability to offer federal student aid to their students and what the Department of Education’s Distance Education Demonstration Program has revealed with respect to the continued appropriateness of these restrictions, we obtained information from Education staff and other experts on which postsecondary institutions might be affected by these provisions or were close to being affected. We limited our work primarily to schools that were involved in the Demonstration Program or had electronically transmitted distance education programs and that were accredited or pre-accredited by accrediting agencies recognized by Education for eligibility in the federal student aid programs. We initially interviewed officials at 21 institutions with a standard set of questions regarding the effect, if any, current federal restrictions have on the schools’ ability to offer federal student aid, and we obtained information on the distance education programs at the schools. Based on our interviews, we determined that only 14 of the 21 schools had been affected or could be affected by the restrictions. We also obtained data on default rates at the 14 schools, if applicable, from Education’s student loan cohort default rate database. With respect to the Demonstration Program, we interviewed officials at Education who were responsible for assessing distance education issues. Additionally, we reviewed monitoring and progress reports on participating institutions involved in the Demonstration Program. We reviewed various reports on federal restrictions related to distance education as well as pertinent statutes and regulations. 
To address the two questions related to the work of accrediting agencies (to what extent accrediting agencies include distance education in their reviews of schools or programs, and to what extent they assess educational outcomes as they evaluate distance education and campus-based programs), we focused on the standards and policies of seven accrediting agencies that collectively are responsible for more than two-thirds of all distance education programs. We interviewed agency administrators and evaluated the extent of their outcomes-based assessment standards and policies using criteria that we had developed in a variety of past work addressing performance and accountability issues. We compared accrediting agency standards and policies with prior work we conducted on key components for accountability. We provided our preliminary findings to the seven accrediting agencies and asked them to verify those findings. In addition, we interviewed staff at Education involved in accreditation issues. We reviewed Education's monitoring reports on accrediting agencies. Additionally, we interviewed officials at the Council for Higher Education Accreditation and reviewed various reports that it has produced. We conducted our work in accordance with generally accepted government auditing standards from October 2002 to February 2004. In addition to those named above, Jerry Aiken, Jessica Botsford, Elizabeth Curda, Luann Moy, Corinna Nicolaou, Jill Peterson, Stan Stenersen, and Susan Zimmerman made important contributions to this report.

Distance education--that is, offering courses by Internet, video, or other forms outside the classroom--has changed considerably in recent years and is a growing force in postsecondary education. More than a decade ago, concerns about fraud and abuse by some correspondence schools led to federal restrictions on, among other things, the percentage of courses a school could provide by distance education and still qualify for federal student aid. Given the recent changes in distance education, GAO was asked to review the extent to which the restrictions affect schools' ability to offer federal student aid and the Department of Education's assessment of the continued appropriateness of the restrictions. Additionally, GAO was asked to look at the extent to which accrediting agencies evaluate distance education programs, including their approach for assessing student outcomes. While federal restrictions on the size of distance education programs affect only a small number of schools' ability to offer federal student aid, the growing popularity of distance education could cause the number to increase in the future. GAO found that 14 schools were either now adversely affected by the restrictions or would be affected in the future; collectively, these schools serve nearly 225,000 students. Eight of these schools, however, will remain eligible to offer federal student aid because they have been granted waivers from the restrictions by Education. Education granted the waivers as part of a program aimed at assessing the continued appropriateness of the restrictions given the changing face of distance education. In considering the appropriateness of the restrictions, there are several policy options for amending them; however, amending the restrictions to improve access would likely increase the cost of the federal student aid programs.
One way to further understand the effect of amending the restrictions would be to study data on the cost of granting the waivers to schools, but Education has yet to develop this information. The seven accrediting agencies GAO reviewed varied in the extent to which they included distance education programs in their reviews of postsecondary institutions. All seven agencies had developed policies for reviewing these programs; however, there were differences in how and when they reviewed the programs. Agencies also differed in the extent to which they included an assessment of student outcomes in their reviews. GAO's work in examining how organizations successfully focus on outcomes shows that they do so by (1) setting measurable goals for program outcomes, (2) developing strategies for meeting these goals, and (3) disclosing the results of their efforts to the public. Measured against this approach, only one of the seven accrediting agencies we reviewed had policies that require schools to satisfy all three components. As the key federal link to the accreditation community, Education could play a pivotal role in encouraging an outcomes-based model.
Mortgage insurance, a commonly used credit enhancement, protects lenders against losses in the event of default, and FHA is a government mortgage insurer in a market that also includes private insurers. During fiscal years 2001 to 2003, FHA insured a total of about 3.7 million mortgages with a total value of about $425 billion. FHA plays a particularly large role in certain market segments, including low-income and first-time homebuyers. In 2000, almost 90 percent of FHA-insured home purchase mortgages had an LTV higher than 95 percent. FHA insures most of its mortgages for single-family housing under its Mutual Mortgage Insurance (MMI) Fund. To cover lenders' losses, FHA collects premiums from borrowers. These premiums, along with proceeds from the sale of foreclosed properties, fund the claims that FHA pays lenders as a result of foreclosures. In recent years, other members of the conventional mortgage market (such as private mortgage insurers, government-sponsored enterprises such as Fannie Mae and Freddie Mac, and large private lenders) have been increasingly active in supporting low and even no down payment mortgage products. For example, Fannie Mae and Freddie Mac's no down payment mortgage products were introduced in 2000, and many private mortgage insurers will now insure a mortgage up to 100 percent LTV. However, the characteristics and standards for low and no down payment products vary among mortgage institutions. Currently, homebuyers with FHA-insured loans need to make a 3 percent contribution toward the purchase of the property and may finance some of the closing costs associated with the loan. As a result, an FHA-insured loan could equal nearly 100 percent of the property's value or sales price. In recent years, a growing proportion of borrowers have received down payment assistance, which further helps them meet the hurdle of accumulating sufficient funds to purchase a home. Based on our preliminary analysis of FHA-insured loans that had LTVs above 95 percent, use of down payment assistance has grown to over half of such loans insured during the first seven months of 2005. A substantial amount of research GAO reviewed indicates that the LTV ratio and the borrower's credit score are among the most important factors in estimating the risk level associated with individual mortgages. We also analyzed the performance, expressed by the percent of borrowers defaulting within four years of mortgage origination, of low and no down payment mortgages supported by FHA and others. Our analysis supports the key findings in the research literature. Generally, mortgages with higher LTV ratios (smaller down payments) and lower credit scores are riskier than mortgages with lower LTV ratios and higher credit scores. As can be seen in figure 1, when focusing only on LTV for FHA loans, default rates increase as the LTV ranges increase. In theory, LTV ratios are important because of the direct relationship that exists between the amount of equity borrowers have in their homes and the risk of default. The higher the LTV ratio, the less cash borrowers will have invested in their homes and the more likely it is that they may default on mortgage obligations, especially during times of economic hardship (e.g., unemployment, divorce, home price depreciation). Risk assessment is a very important component of issuing and insuring mortgages, particularly when introducing a mortgage product that carries the higher risk associated with a higher LTV.
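To make the LTV arithmetic concrete, the short sketch below computes the loan-to-value ratio for a hypothetical FHA-style purchase in which the borrower makes a 3 percent contribution and finances a portion of the closing costs. The purchase price and closing-cost figures are illustrative assumptions, not data from this testimony.

```python
def loan_to_value(loan_amount, property_value):
    """Return the loan-to-value (LTV) ratio as a percentage."""
    return 100.0 * loan_amount / property_value

# Hypothetical FHA-style purchase (all dollar figures are assumed for illustration).
price = 150_000                      # purchase price
down_payment = 0.03 * price          # 3 percent borrower contribution
financed_closing_costs = 3_000       # closing costs rolled into the loan

loan = price - down_payment + financed_closing_costs
print(f"LTV: {loan_to_value(loan, price):.1f}%")   # prints "LTV: 99.0%"
```

As the sketch suggests, financing even modest closing costs pushes the loan to roughly 99 percent of the sales price, which helps explain why, in 2000, almost 90 percent of FHA-insured home purchase mortgages had LTVs above 95 percent.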
To help assess the risks associated with mortgages, the mortgage industry has moved toward greater use of mortgage scoring and automated underwriting systems. Mortgage scoring is a technology-based tool that relies on the statistical analysis of millions of previously originated mortgage loans to determine how key attributes such as the borrower's credit history, the property characteristics, and the terms of the mortgage note affect future loan performance. During the 1990s, private mortgage insurers, the GSEs, and larger financial institutions developed automated underwriting systems. Automated underwriting systems collect and process the data used in underwriting. These systems rely, in part, on individuals' credit scores or credit history, and they have played an integral role in the provision of low and no down payment mortgage products. These systems allow lenders to quickly assess the riskiness of mortgages by simultaneously considering multiple factors, including the credit score and credit history of borrowers. FHA has developed and recently implemented a mortgage scoring tool, called the FHA TOTAL Scorecard, to be used in conjunction with existing automated underwriting systems. As of 2002, more than 60 percent of all mortgages—conventional and government-insured—were underwritten by an automated underwriting system, and this percentage continues to rise. According to representatives of mortgage institutions we interviewed, they use a number of similar practices in designing and implementing new products. These practices can be especially important when designing and implementing new products with higher or less well understood risk, such as low and no down payment products. Some of these practices could be helpful to FHA in its design and implementation of a zero down payment product, as well as other new products. More specifically, mortgage institutions often establish additional requirements for new products such as additional credit enhancements or underwriting requirements. Although FHA has less flexibility in imposing additional credit enhancements, it does have the authority to seek co-insurance, which it is not currently using. FHA makes adjustments to underwriting criteria and to its premiums, but told us that it is unlikely to use a credit score threshold for a new zero down payment product. Further, mortgage institutions also use different means to limit how widely they make available a new product, particularly during its early years. FHA does sometimes use practices for limiting a new product but usually does not pilot products on its own initiative. FHA officials with whom we spoke question the circumstances in which they can limit the availability of a program and told us they do not have the resources to manage programs with limited availability. Finally, according to officials of mortgage institutions, including FHA, they also often put in place more substantial monitoring and oversight mechanisms for their new products, including lender oversight. In an earlier report, we made recommendations designed to improve HUD's oversight of FHA lenders. Some mortgage institutions require additional credit enhancements—mechanisms for transferring risk from one party to another, such as mortgage insurance—on low and no down payment products.
Mortgage institutions such as Fannie Mae and Freddie Mac mitigate the risk of low and no down payment products by requiring additional credit enhancements such as higher mortgage insurance coverage. Fannie Mae and Freddie Mac believe that the higher-LTV loans represent a greater risk to them, and they seek to partially mitigate this risk by requiring higher mortgage insurance coverage on these loans. For example, Fannie Mae and Freddie Mac require insurance coverage of 35 percent of the claim amount (on individual loans that foreclose) for loans that have an LTV of greater than 95 percent and require lower insurance coverage for loans with LTVs below 95 percent. Although FHA is required to provide up to 100 percent coverage of the loans it insures, FHA may engage in co-insurance of its single-family loans. Under co-insurance, FHA could require lenders to share in the risks of insuring mortgages by assuming some percentage of the losses on the loans that they originated (lenders would generally use private mortgage insurance for risk sharing). FHA has used co-insurance before, primarily in its multifamily programs, but does not currently use co-insurance at all. FHA officials told us they tried to put together a co-insurance agreement with Fannie Mae and Freddie Mac and, while they were able to come to agreement on the sharing of premiums, they could not reach agreement on the sharing of losses and it was never implemented. Mortgage institutions also can mitigate risk through stricter underwriting. For example, mortgage institutions such as Fannie Mae and Freddie Mac sometimes introduce stricter underwriting standards as part of the development of new low and no down payment products (or products about which they do not fully understand the risks). Institutions can do this in a number of ways, including requiring a higher credit score threshold for certain products, or requiring greater borrower reserves or more documentation of income or assets from the borrower. Once the mortgage institution has learned enough about the risks that were previously not understood, it can change the underwriting requirements for these new products. FHA could also benefit from mitigating risk through measures such as stricter underwriting. Although FHA has to meet some statutory standards, it retains some flexibility in how it implements a newly authorized product or changes an existing product. The HUD Secretary has latitude within statutory limitations in changing underwriting requirements for new and existing products and has done this many times. The requirements in H.R. 3043 that prospective zero down payment loans go through FHA's TOTAL Scorecard and that borrowers receive prepurchase counseling are consistent with stricter underwriting. However, in addressing the final recommendations in our February report, FHA wrote that it is unlikely to mandate a credit score threshold for a new zero down payment product because the new product is intended to serve borrowers who are underserved by the conventional market, including those who lack credit scores. Also, FHA wrote that it is unlikely to mandate borrower reserve requirements since the purpose of a zero down payment product is to serve borrowers with little wealth or personal savings. Mortgage institutions can increase fees or charge higher premiums to help offset the potential costs of a program that is believed to have greater risk.
For example, Fannie Mae officials stated that they would charge higher guarantee fees on low and no down payment loans if they were not able to require higher insurance coverage. FHA could set higher premiums in anticipation of higher claims from no down payment loans. Within statutory limits, the HUD Secretary has the authority to set up-front and annual premiums that are charged to borrowers who have FHA-insured loans. In fact, the administration's 2006 budget proposal for a zero down payment product included higher up-front and annual premiums for these loans. Some mortgage institutions may limit in some way a new product before fully implementing the new product. For example, Fannie Mae and Freddie Mac sometimes use pilots, or limited offerings of new products, to build experience with a new product type or to learn about particular variables that can help them better understand the factors that contribute to risk for these products. Freddie Mac and Fannie Mae also sometimes set volume limits for the percentage of their business that could be low and no down payment lending. Fannie Mae and Freddie Mac officials provided numerous examples of products that they now offer as standard products but which began as part of underwriting experiments. These include the Fannie Mae Flexible 97® product, as well as the Freddie Mac 100 product. FHA has utilized pilots or demonstrations as well when making changes to its single-family mortgage insurance. Generally, HUD has done this in response to legislation that requires a pilot and not on its own initiative. For example, FHA's Home Equity Conversion Mortgage (HECM) insurance program started as a pilot. Congress initiated HECM in 1987; the program is designed to provide elderly homeowners a financial vehicle to tap the equity in their homes without selling or moving from their homes (sometimes called a "reverse mortgage"). Through statute, HECM started as a demonstration program that authorized FHA to insure 2,500 reverse mortgages. Through subsequent legislation, FHA was authorized to insure an increasing number of these mortgages until Congress made the program permanent in 1998. Under the National Housing Act, the HECM program was required to undergo a series of evaluations and it has been evaluated four times since its inception. FHA officials told us that administering this demonstration for 2,500 loans was difficult because of the challenges of selecting a limited number of lenders and borrowers. FHA ultimately had to use a lottery to limit loans to lenders. H.R. 3043 also would mandate that FHA pilot the zero down payment program: it limits the annual number of zero down mortgages to 10 percent of the aggregate number of loans insured during the previous fiscal year, and sets an aggregate limit of 50,000 loans. The appropriate size for a pilot program depends on several factors. For example, the precise number of loans needed to detect a difference in performance between standard loans and loans of a new product type depends in part on how great the differences are in loan performance. If delinquencies early in the life of a mortgage were about 10 percent for FHA's standard high LTV loans, and FHA wished to determine whether loans in the pilot had delinquency rates no more than 20 percent greater than those of the standard loans (delinquency of no more than 12 percent), a sample size of about 1,000 loans would be sufficient to detect this difference with 95 percent confidence.
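The testimony does not spell out the statistical method behind the 1,000-loan figure. As a rough cross-check only, the sketch below applies a standard normal-approximation formula for the sample size needed to estimate a delinquency rate of about 10 percent to within roughly 2 percentage points at 95 percent confidence; the method and the framing as a precision calculation are assumptions, while the 10 and 12 percent rates come from the example above.

```python
import math

def sample_size_for_proportion(p, half_width, z=1.96):
    """Normal-approximation sample size needed to estimate a proportion p
    to within +/- half_width, where z = 1.96 corresponds to a two-sided
    95 percent confidence interval."""
    return math.ceil(z ** 2 * p * (1 - p) / half_width ** 2)

# Figures from the example above: standard high-LTV loans delinquent at about
# 10 percent, with a pilot flagged if its rate reaches 12 percent or more,
# implying a precision target of about 2 percentage points.
baseline_rate = 0.10
pilot_threshold = 0.12
precision = pilot_threshold - baseline_rate    # 0.02

print(sample_size_for_proportion(baseline_rate, precision))  # prints 865
```

A formal power calculation for comparing the pilot's delinquency rate against the 10 percent benchmark would typically call for a somewhat larger sample, so the figure here is only a ballpark check on the order of magnitude, broadly consistent with the roughly 1,000 loans cited above.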
If delinquency rates or FHA’s desired degree of precision were different, a different sample size would be appropriate. FHA officials told us they have conducted pilot programs when Congress has authorized them, but they questioned the circumstances under which pilot programs are needed. FHA officials also said that they lacked sufficient resources to appropriately manage a pilot. Additionally, some mortgage institutions may also limit the initial implementation of a new product by limiting the origination and servicing of the product to their better lenders and servicers. Mortgage institutions may also limit servicing on the loans to servicers with particular product expertise, regardless of who originates the loans. Fannie Mae and Freddie Mac both reported that these were important steps in introducing a new product and noted that lenders tend to take a more conservative approach when first implementing a new product. FHA officials agreed that they could, under certain circumstances, envision piloting or limiting the ways in which a new or changed product would be available but pointed to the practical limitations in doing so. FHA approves the sellers and services that are authorized to support FHA’s single-family product, but FHA officials told us they face challenges in offering any of their programs only in certain regions of the country or in limiting programs to certain approved lenders or servicers. FHA generally offers products on a national basis and, when they do not, specific regions of the county or lenders might question why they are not able to receive the same benefit (even on a demonstration or pilot basis). However, these officials did provide examples in which their products had been initially limited to particular regions of the country or to particular lenders, including the rollout of the HECMs and their TOTAL Scorecard. Mortgage institutions, including FHA, may take several steps related to increased monitoring of new products and subsequently make changes based on what they learned. Fannie Mae and Freddie Mac officials described processes in which they monitor actual versus expected loan performance for new products, sometimes including enhanced monitoring of early loan performance. Some mortgage institutions, such as Fannie Mae, told us that they may conduct rigorous quality control sampling of new acquisitions, early payment defaults, and nonperforming loans. Depending on the scale of a new initiative, and its perceived risk, these quality control reviews could include a review of up to 100 percent of the loans that are part of the new product. FHA officials told us they also monitor more closely loans underwritten under revised guidelines. Specifically, FHA officials told us that FHA routinely conducts a review of underwriting for approximately 6 to 7 percent of loans it insures. According to FHA officials, as part of the review, it may place greater emphasis on reviewing those aspects of the insurance product that are the subject of a recent change. Fannie Mae and Freddie Mac also reported that they conduct more regular reviews at mortgage servicer sites for new products. In some cases, Fannie Mae and Freddie Mac have staff who conduct on-site audits at the sellers and servicers to provide an extra layer of oversight. According to FHA officials, they have staff that conduct reviews of lenders that they have identified as representing higher risk to FHA programs. 
However, we recently reported that HUD’s oversight of lenders could be improved and identified a number of recommendations for improving this oversight. Loans with low or no down payments carry greater risk. Without any compensating measures such as offsetting credit enhancements and increased risk monitoring and oversight of lenders, introducing a new FHA no down payment product would expose FHA to greater credit risk. The administration’s proposal for a zero down product included increased premiums to help compensate for an increase in the cost of the FHA program which would permit FHA to potentially offset additional costs stemming from a new product that entails greater risk or not well understood risk. The proposed bill also requires that borrowers receive prepurchase counseling. Although FHA appears to follow many key practices used by mortgage institutions in designing and implementing new products, several practices not currently or consistently followed by FHA stand out as appropriate means to manage the risks associated with introducing new products or significantly changing existing products. Moreover, these practices can be viewed as part of a formal framework used by some mortgage institutions for managing the risks associated with new or changed products. The framework includes techniques such as limiting the availability of a new product until it is better understood and establishing stricter underwriting standards—all of which would help FHA to manage risk associated with any new product it may introduce. For example, FHA could set volume limits or limit the initial number of participating lenders in the product. Further, changes in FHA’s premiums, an important element of the administration’s 2006 budget proposal for a zero down payment product would permit FHA to potentially offset additional costs stemming from a new product that entails greater risk or not well understood risk. However, FHA officials believe that the agency does not have sufficient resources to implement products with limited volumes, such as through a pilot program. Yet, when FHA makes new products widely available or makes significant changes to existing products with less-understood risks, these products or actions also can introduce significant risks. Products that would introduce significant risks can impose significant costs. We believe that FHA could mitigate these risks and potential costs by using techniques such as piloting. Moreover, FHA told us that it believes that pilot programs are not needed because the risks of every new year of loans are assessed annually as part of credit subsidy budgetary transactions and in its annual actuarial study, and it could terminate the program early in its life if it identified problems. However, because it may take a few years to determine the risks of a new loan product, early termination could still expose the government to significant financial risk without some type of limits on the number of loans insured. If FHA is unsure about its authority to conduct pilots or concerned about expectations of equitable distribution of its products, Congress can make clear that FHA has this authority by requiring a product to be implemented as part of a pilot, or by explicitly giving the HUD Secretary the authority to establish and implement pilots for new products. If Congress authorizes FHA to insure a no down payment product or any other new single-family insurance products, Congress may want to provide guidance and clear authority to FHA on this new product. 
Congress may want to consider a number of means to mitigate the additional risks that these loans may pose. Such means may include limiting the initial availability of such a new product, requiring higher premiums, requiring stricter underwriting standards, or requiring enhanced monitoring. Such risk mitigation techniques would serve to help protect the Mutual Mortgage Insurance Fund while allowing FHA the time to learn more about the performance of loans using this new product. Limits on the initial availability of the new product would be consistent with the approach Congress took in implementing the HECM program. The limits could also come in the form of an FHA requirement to limit the new product to better performing lenders and servicers as part of a demonstration program or to limit the time period during which the product is first offered. Mr. Chairman, this completes my prepared statement. I would be pleased to respond to any questions you or other members of the Committee may have at this time. For more information regarding this testimony, please contact William B. Shear at (202) 512-8678 or shearw@gao.gov or Mathew Scirè at (202) 512-6794 or sciremj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Individuals making key contributions to this testimony also included Anne Cangi, Bert Japikse, Austin Kelly, Andy Pauline, Susan Etzel, and Barbara Roesmann. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

To assist Congress in considering legislation to authorize the Secretary of the Department of Housing and Urban Development (HUD) to carry out a pilot program to insure zero down payment mortgages, this testimony provides information about practices mortgage institutions use in designing and implementing low and no down payment products. It also contains information about how these practices could be instructive for FHA in managing risks associated with a zero down payment product--a product for which the risks are not well understood. This testimony is primarily based on GAO's February 2005 report, Mortgage Financing: Actions Needed to Help FHA Manage Risks from New Mortgage Loan Products (GAO-05-194). In recent years, many mortgage institutions have become increasingly active in supporting low and even no down payment mortgage products. In considering the risks of these new products, a substantial amount of research GAO reviewed indicates that the loan-to-value (LTV) ratio and credit score are among the most important factors in estimating the risk level associated with individual mortgages. GAO's analysis of the performance of low and no down payment mortgages supported by FHA and others corroborates key findings in the literature. Generally, mortgages with higher LTV ratios (smaller down payments) and lower credit scores are riskier than mortgages with lower LTV ratios and higher credit scores. Some practices of other mortgage institutions offer a framework that could help FHA manage the risks associated with introducing new products or making significant changes to existing products.
Mortgage institutions sometimes require additional credit enhancements, such as higher insurance coverage, and stricter underwriting, such as credit score thresholds, when introducing a new low or no down payment product. FHA is authorized to require an additional credit enhancement, but does not currently use this authority. FHA has used stricter underwriting criteria, but told us it is unlikely to use a credit score threshold for a new zero down payment product. Mortgage institutions may also impose limits on the volume of the new products they will permit and on who can sell and service these products. FHA officials question the circumstances in which they can limit volumes for their products and believe they do not have sufficient resources to manage a product with limited volumes, but the potential costs of making widely available a product with risk that is not well understood could exceed the cost of initially implementing such a product on a limited basis.
CPSC was created in 1972 under the Consumer Product Safety Act (P.L. 92-573) to regulate consumer products that pose an unreasonable risk of injury, to assist consumers in using products safely, and to promote research and investigation into product-related deaths, injuries, and illnesses. CPSC currently has three commissioners, who are responsible for establishing agency policy. One of these commissioners is designated the chairman; the chairman directs all the executive and administrative functions of the agency. The Consumer Product Safety Act consolidated federal safety regulatory activity relating to consumer products within CPSC. As a result, in addition to its responsibilities for protecting against product hazards in general, CPSC also administers four laws that authorize various performance standards for specific consumer products. These laws are the Flammable Fabrics Act (June 3, 1953, c.164), which authorizes flammability standards for clothing, upholstery, and other fabrics; the Federal Hazardous Substances Act (P.L. 86-613), which authorizes the regulation of substances that are toxic, corrosive, combustible, or otherwise hazardous; the Poison Prevention Packaging Act of 1970 (P.L. 91-601), which authorizes requirements for child-resistant packaging for certain drugs and other household substances; and the Refrigerator Safety Act of 1956 (Aug. 2, 1956, c.890), which establishes safety standards for household refrigerators. In fiscal year 1997, CPSC carries out this broad mission with a budget of about $42.5 million and a full-time-equivalent staff of 480. As figure 1 shows, after adjusting for inflation, the agency’s budget has decreased by about 60 percent since 1974. Similarly, CPSC’s current staffing level represents 43 percent fewer positions as compared with the agency’s 1974 staff. CPSC uses a number of regulatory and nonregulatory tools to reduce injuries and deaths associated with consumer products. Under several of the acts that it administers, CPSC has the authority to issue regulations that establish performance or labeling standards for consumer products. For example, in 1993, CPSC issued regulations under the Consumer Product Safety Act requiring disposable cigarette lighters to be child-resistant. If CPSC determines that there is no feasible standard that would sufficiently address the danger, CPSC may issue regulations to ban the manufacture and distribution of the product. In addition, under the Consumer Product Safety Act, if a product violates a safety regulation or presents a “substantial hazard,” CPSC may order a product recall, in which the item is removed from store shelves and consumers are alerted to return the item for repair, replacement, or refund. CPSC can also impose civil penalties for violations of federal safety standards. Although CPSC has these broad regulatory powers, much of the agency’s efforts are carried out using nonregulatory methods. In addition to federally mandated product safety standards, many consumer products are covered by voluntary standards. These voluntary standards, which are often established by private standard-setting groups, do not have the force of law. However, many voluntary standards are widely accepted by industry. The 1981 amendments to the Consumer Product Safety Act require CPSC to defer to a voluntary standard—rather than issue a mandatory regulation—if CPSC determines that the voluntary standard adequately addresses the hazard and that there is likely to be substantial compliance with the voluntary standard. 
As a result, voluntary standards development is an important tool in CPSC’s hazard-reduction efforts. For example, in 1996 CPSC helped a private group develop a voluntary standard to address the risk of children getting their heads stuck between the slats of toddler beds, and in 1995 CPSC assisted a standard-setting group in upgrading safety standards to prevent fires associated with Christmas tree lights. CPSC also addresses product hazards by providing information to consumers on safety practices that can help prevent product-related accidents. For example, to encourage consumers to use electricity safely—and particularly to promote the use of ground fault circuit interrupters—CPSC conducted a far-reaching publicity campaign that included radio public service announcements, messages printed on carryout bags for hardware stores, a joint press conference with industry representatives, presentations on television’s Home Shopping Network, and promotional letters to real estate and home inspection associations. In addition to its own active efforts to disseminate information, CPSC provides considerable amounts of information in response to requests from the public. Like other federal agencies, CPSC must comply with the Freedom of Information Act (FOIA) when responding to requests from the public for information. A notable feature of FOIA is its presumption in favor of disclosure: any person has the right to inspect and copy any government records unless the documents requested fall within one of the exemptions to the act (for example, disclosure of trade secrets). FOIA requests may come to CPSC from regulated industries, the press, consumer groups, or individuals. During calendar year 1995, CPSC responded to 16,424 formal requests made under FOIA. CPSC’s resource base and extensive jurisdiction require the agency to select among potential product hazards. New initiatives may come to CPSC in several ways. First, any person may file a petition asking CPSC to issue, amend, or revoke a regulation. Petitions, which can be as simple as a letter or as formal and detailed as a legal brief, have come to CPSC from doctors and nurses, consumers and advocacy groups, and industry representatives. CPSC may grant or deny a petition either in full or in part. Even when CPSC denies a petition and declines to issue a regulation, it may still begin a project to address the hazard by promoting a voluntary standard or conducting a consumer education campaign. For example, a project on heat tapes (heated wraps for exposed pipes) originated with a petition from a concerned consumer. CPSC denied the petition for a mandatory standard, but conducted a research study and review of the existing voluntary standard for heat tapes. Second, CPSC receives some product hazard projects from the Congress. The Congress may require CPSC to study a wide-ranging product area. For example, the Consumer Product Safety Improvement Act of 1990 resulted in a large body of work on products affecting indoor air quality, including wood stoves, kerosene heaters, and carpets. The Congress may also direct CPSC to impose a specific regulation, such as when it directed CPSC to require additional labeling on toys intended for children aged 3 to 6 warning parents of possible choking hazards when the toy is used by children under age 3. Finally, CPSC commissioners and agency staff may initiate projects or suggest areas to address. 
CPSC gathers death and injury information to help identify potential product hazards and also obtains input from the public. The agency maintains a toll-free hot line and an Internet site that the public can use to notify agency staff of a possible product hazard. In addition, CPSC holds public meetings to get input on possible hazards to address and on which hazards should receive priority. For example, CPSC increased its efforts to remove drawstrings from children’s clothing after receiving a letter from a woman whose daughter was strangled when her jacket string caught on a playground slide. The selection of projects to address new product hazards takes place at different levels of the agency throughout the year. For a petition, the commissioners decide whether the product hazard warrants further agency involvement, and the commissioners vote on whether to grant or deny the petition. If a project is believed to have a high potential for regulatory action or involve a substantial amount of agency resources, the commissioners decide whether to pursue it. Projects of this caliber are often noted in the agency’s annual budget and operating plan, which must also be approved by commissioner vote. Staff request a decision on such a project by preparing a briefing package about the product hazard for the commissioners, who vote to begin the regulatory process, take some other action, or terminate the project. Agency staff generally may decide to initiate projects that are unlikely to result in regulation, and no briefing package is sent to the commissioners for a vote. (Of the 115 CPSC projects we identified, 80 (70 percent) were detailed in briefing packages.) The scope of the agency’s projects varies greatly, and CPSC has no standard definition of what constitutes a project. A project might cover general product areas, such as fire hazards, or address only a specific product, like cigarette lighters. A project might require undertaking an extensive research study or providing technical assistance to a group that is developing a voluntary standard. The bulk of CPSC’s workload is made up of projects selected by the agency rather than by the Congress. CPSC has established criteria to help in project selection, such as the numbers of deaths and injuries associated with a product. However, CPSC is unable to accurately measure these criteria because its data on potential hazards are incomplete. In addition, CPSC does not maintain systematic information on past and ongoing projects, which makes it difficult to assess and prioritize the need for new projects in different hazard areas. The lack of comprehensive data on individual product hazards and on agency initiatives raises questions about CPSC’s ability to evaluate its own effectiveness—which it is now required to do under the Government Performance and Results Act of 1993 (the Results Act). CPSC has wide latitude over which potential product hazards it targets for regulatory and nonregulatory action. Although it has little or no discretion over projects mandated by the Congress, CPSC can choose to accept or reject suggestions that are submitted by petition or proposed by the agency staff. As shown in figure 2, 59 percent of CPSC projects were initiated by CPSC, 30 percent originated from a petition, and about 11 percent resulted from congressional mandates. Of the 115 projects the agency worked on from January 1, 1990, to September 30, 1996, 97 (about 90 percent) were chosen by the agency. 
Data were unavailable to assess the extent to which staff suggestions for projects were accepted or rejected. Of the petitions filed with CPSC between January 1, 1990, and September 30, 1996, 60 percent resulted in projects (32 percent by granting the petition in whole or in part, and 27 percent by denying the petition to establish a mandatory regulation but creating a nonregulatory project). In 27 percent of cases, CPSC decided that no action was needed or that existing actions or standards were sufficient to address the issue raised by the petition. In the remaining cases, a decision is still pending or the petition was withdrawn before a decision was rendered. CPSC has established criteria for setting agency priorities and selecting potential hazards to address. These criteria, which are incorporated in agency regulations, include the following: the frequency of injuries and deaths resulting from the hazard; the severity of the injuries resulting from the hazard; addressability—that is, the extent to which the hazard is likely to be reduced through CPSC action—agency regulations note that the cause of the hazard should be analyzed to help determine the extent to which injuries can reasonably be expected to be reduced or eliminated through CPSC action; the number of chronic illnesses and future injuries predicted to result from the hazard; preliminary estimates of the costs and benefits to society resulting from CPSC action to address the hazard; the unforeseen nature of the risk—that is, the degree to which consumers are aware of the hazard and its consequences; vulnerability of the population at risk—whether some individuals (such as children) may be less able to recognize or escape from potential hazards and therefore may require a relatively higher degree of protection; probability of exposure to the product hazard—that is, how many consumers are exposed to the potential hazard, or how likely a typical consumer is to be exposed to the hazard; and other—additional criteria to be considered at the discretion of CPSC. CPSC’s regulations allow for considerable freedom in applying these criteria; commissioners and staff can base their project selections on what they perceive as the most important factors. For example, the regulations do not specify whether any criterion should be given more weight than the others, nor must all criteria be applied to every potential project. Indeed, our interviews with present and former commissioners and our review of CPSC briefing packages revealed a pattern in which three criteria—the numbers of deaths and injuries, the causality of injuries, and the vulnerability of the population at risk—were more strongly emphasized than the others. In addition, each of the commissioners we interviewed identified some criteria as being more important than others for project selection. For example, one commissioner indicated that the number of deaths and injuries was most important, while another commissioner included awareness of the hazard in a list of several criteria she believed were most important. However, there was considerable agreement among the commissioners about the importance of several criteria. The commissioners cited two criteria—vulnerability of population and number of deaths and injuries—as especially important for project selection. In addition, several—but not all—commissioners emphasized causality of injuries. None of the other criteria was emphasized by more than one or two commissioners.
Because the commissioners use their judgment in applying these criteria, there is no systematic checklist or scoring system that would enable us to determine which factors were considered most important for a particular product. However, information related to some or all of these criteria is sometimes contained in briefing packages and other documents. Our review of CPSC project documentation showed that information on vulnerable populations and the numbers of deaths and injuries associated with the product was likely to be compiled at some time during the project, but information associated with other criteria was less likely to be documented. For example, of the 115 projects we reviewed, death and injury information was available in 97 cases. However, only 26 cases included information on exposure to the hazard, a less-emphasized criterion. Although data were insufficient to compare the universe of possible projects with the ones selected by CPSC, the characteristics of CPSC projects appear generally consistent with the stronger emphasis on death and injury data and on vulnerable populations expressed by the current and former commissioners. For example, while 76 of the 115 projects we examined were directed at least partially at a vulnerable population group, only 13 projects mentioned chronic illness. However, although the number of deaths and injuries associated with product hazards was almost always available in project documentation, there was no pattern of only those projects with high numbers of injuries or deaths being selected. Of the 97 projects that had death and injury statistics, 19 showed fewer than 50 injuries and/or deaths associated with the product. The estimated number of annual injuries associated with product hazards ranged from 1 to 162,100 (for baseball injuries), and the estimated number of deaths associated with product hazards ranged from zero to a high of 3,600 annually (for smoke detectors). This wide range is consistent with CPSC staff’s statement that there is no threshold for the number of deaths and injuries that would require acceptance or rejection of a project. Although the commissioners and former commissioners we interviewed generally agreed on the criteria they emphasized for project selection, they expressed very different views on how some of these criteria should be interpreted. For example, several commissioners viewed vulnerable populations as focusing on children, while others highlighted additional segments of the population that they considered vulnerable. One commissioner also listed low-income and poorly educated consumers as vulnerable populations, and another expressed concern that the elderly were especially vulnerable to injury from product hazards. Project documentation focused on children more frequently than on other population segments thought to be at special risk. Many projects we examined contained no information in the documentation that indicated a particular population was being considered vulnerable. However, of the 76 projects for which information was available on special populations, 69 (91 percent) mentioned children. Industry observers, consumer advocates, current and former commissioners, and others expressed widely diverging views on how to apply causality of injuries in selecting projects. All seven commissioners we interviewed mentioned this as an important criterion, and several stressed causal factors. 
A major issue surrounding the application of causality is determining the appropriate level of protection the agency should provide when a product hazard results, at least in part, from consumer behavior. For example, a consumer advocate stated that regulatory action may be necessary whatever the cause of the incident if children who were incapable of protecting themselves get hurt. Similarly, another individual told us that CPSC should deal with potential hazards on the basis of the behavior that actually took place, not the behavior that might be expected or considered reasonable. However, other individuals asserted that CPSC should address only those hazards that result from products that are defective—that is, products that create a hazard even when used as intended by the manufacturer. Some industry representatives stated that it was inappropriate for CPSC to take action concerning a product if the product was “misused” by the consumer. Complicating this debate is the difficulty of defining misuse of the product or negligence of the consumer. For example, the appropriate degree of parental supervision is frequently an issue with children’s products. One of the agency’s more controversial projects illustrates this point. CPSC staff conducted a project to investigate the deaths of children using baby bath seats or rings. In these incidents, infants slipped out of the seat and drowned in the bathtub when the parent or caregiver stepped out of the room and left the child unsupervised, despite warning signs on the seats not to leave children unattended. The Commission disagreed on the proper course of action, largely because of differing views on causality. In 1994, the staff recommended that the Commission issue an Advance Notice of Proposed Rulemaking (ANPR), the first step in the regulatory process. The staff argued (and one commissioner agreed) that some parents will leave a young child alone in the bathtub regardless of a warning not to. However, in voting against issuing an ANPR, the other two commissioners stated that they believed regulation was not appropriate because the lack of supervision, not the product, caused the tragedies. CPSC staff have also encountered other instances in which the behavior of consumers might be viewed as inappropriate. For example, the role of alcohol and drug use in accidents can also raise questions about the appropriate level of regulatory protection. In addition, a 1991 CPSC study found that at least 33 percent of bicycle accidents involved behaviors such as performing stunts and going too fast. Similarly, a 1991 CPSC study of fires associated with heat tapes found that at least 38 percent of the heat tapes had been installed improperly. In each of these cases, no regulatory action was taken; in the case of bicycles, the staff did recommend increasing efforts to encourage consumers to take safety precautions such as using lights at night and wearing helmets. CPSC uses data from internal management systems and from external sources to assist in project selection. CPSC collects information on product-related deaths and injuries to provide information for project selection as well as to perform risk assessments and cost-benefit analyses. Furthermore, the agency maintains a computerized management information system (MIS) that contains information on some of its major activities and is used by the agency to develop its annual budget. Both these internal and external data are of limited value. 
The inadequacy of the information raises questions about CPSC’s ability to make informed project selection decisions so that agency resources are being spent efficiently. CPSC has developed a patchwork of independent data systems to provide information on deaths and injuries associated with consumer products. To obtain estimates of the number of injuries associated with specific consumer products, CPSC relies on its National Electronic Injury Surveillance System (NEISS). NEISS gathers information from the emergency room records of a nationally representative sample of 101 hospitals. CPSC also obtains information on fatalities by purchasing a selected group of death certificates from the states. It supplements this information with anecdotal reports from individual consumers and with data from private organizations such as fire-prevention groups and poison control centers. Because neither NEISS nor death certificate data provide detailed information on hazard patterns or injury causes, CPSC investigates selected incidents to obtain more detailed information. In addition, CPSC sometimes uses mathematical modeling techniques or conducts special surveys to obtain information on product exposure. (For more information on CPSC’s data sources, see app. V.) CPSC’s data give the agency only limited assistance in applying its project selection criteria. (These criteria, the measures used for each, and major data limitations are given in table 1.) CPSC’s injury and death data allow the agency to piece together at best an incomplete view of the incidents that result from consumer product hazards. Product-related injuries may be treated in a variety of ways—in an emergency room, in a physician’s office, or through an outpatient clinic, for example. As figure 3 illustrates, CPSC obtains systematic surveillance information only on deaths and on injuries treated in the emergency room; injuries treated in other settings (such as physicians’ offices) are not represented in CPSC’s surveillance data. In addition, CPSC’s surveillance data do not capture “near misses.” A “near miss” refers to an incident in which a product-related injury nearly occurred but was narrowly averted. In its regulations that address priority-setting, CPSC states that such incidents can be as important as actual injuries in identifying potential hazards. CPSC staff identified the lack of data on injuries treated in physicians’ offices and other settings as a key concern. Because CPSC’s data sets reveal only a portion of the injury picture, the agency underestimates the total numbers of deaths and injuries associated with any given consumer product. The extent of this undercount is unknown. For example, researchers report widely varying estimates of the percentage of injuries that are treated in emergency rooms as opposed to other medical settings: a 1991 study by researchers at RAND found that approximately 65 percent of injuries were treated in the emergency room, but recent data indicate that the number of injury-related visits to physicians’ offices alone was more than double the number of injury-related visits to the emergency room. CPSC’s estimates of product-related deaths are also undercounted, for two reasons. First, for budgetary reasons, the agency purchases only a subset of the total number of death certificates from states. Second, CPSC death counts include only those cases in which product involvement can be inferred from the information on the death certificate, and in some cases, product-related information is not recorded.
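How much an emergency-room-based estimate understates the total for a given product depends heavily on the assumed share of injuries treated in emergency rooms. The short Python sketch below illustrates the point; the figures are hypothetical, not CPSC data, and the simple scaling shown is not CPSC’s estimation method.

# Illustration only: hypothetical figures, not CPSC data or CPSC methodology.
# An emergency-room-based injury count understates the true total, and the size of
# the implied undercount depends on the assumed share of injuries treated in ERs.

def implied_total(er_based_estimate: float, share_treated_in_er: float) -> float:
    """Scale an ER-based estimate to all treatment settings under an assumed ER share."""
    return er_based_estimate / share_treated_in_er

er_based_estimate = 10_000  # hypothetical national estimate for a single product
for share in (0.65, 0.50, 0.33):  # assumed ER shares spanning the range discussed above
    print(f"assumed ER share {share:.0%}: implied total injuries "
          f"{implied_total(er_based_estimate, share):,.0f}")

Because the implied total roughly doubles across this range of assumed shares, the choice of adjustment factor can matter as much as the surveillance count itself.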
Even if a reliable figure were available to determine the exact percentage of product-related injuries that were treated in emergency rooms, this percentage would not necessarily apply to any specific type of product-related injury. For example, even if it were established that 40 percent of all product-related injuries were treated in emergency rooms, the percentage of bunk bed injuries treated in emergency rooms might be much larger or smaller. The setting in which injuries are treated depends on a wide array of factors that vary among individuals, across geographic regions, and among different types of injuries. Research indicates that African Americans are more likely to use the emergency room than Caucasians are. Access to the emergency room or to a physician also depends on the type of medical insurance a person has. For example, health maintenance organizations (HMOs) often place restrictions on reimbursement for emergency room care, and HMO membership as a percentage of the total population varies widely from state to state. In addition, injuries that occur at night, when most physicians’ offices are closed, may be more likely to be treated in the emergency room. As a result, it is unlikely that CPSC could approximate the number of injuries associated with a specific product by using data that apply to all consumer products as a group. The incompleteness of CPSC’s injury information also hampers its ability to reliably discern long-term trends in injuries, which is not only a criterion for project selection but also an important factor for evaluating the success of CPSC’s injury-reduction efforts and determining the need for follow-up actions. The relative sizes of the pieces of the injury puzzle in figure 3 are unknown but appear to change over time. For example, hospitalizations decreased by 5 percent on a per capita basis between 1982 and 1994, while between 1983 and 1993, hospital outpatient clinics saw a 53-percent increase in visits on a per capita basis. As a result, it is impossible to determine whether any change in the number of emergency room visits represents a true change in injuries or a shift to other medical settings. According to CPSC staff, identifying chronic illnesses associated with consumer products is nearly impossible. CPSC staff stated that little is known about many chronic illness hazards that may be associated with potentially dangerous substances, and even less information is available about which consumer products may contain these substances. Chronic illnesses are especially likely to be underestimated in CPSC’s NEISS data because they are underrepresented among emergency room visits and because product involvement is more difficult to ascertain. Similarly, consumer product involvement is very seldom recorded on death certificates in cases of chronic illnesses. CPSC’s surveillance data also give an incomplete picture of the severity of incidents. Although the data capture many relatively severe injuries—that is, those that result in death or require treatment in an emergency medical facility—data are missing for individuals who are admitted to the hospital through their physician rather than through the emergency room.
Potentially less severe cases—for example, those treated in physicians’ offices, walk-in medical centers, or hospital outpatient clinics—are not represented at all in CPSC’s systematic surveillance data systems, and consequently, CPSC has no data on some consumer product problems that may result in numerous but potentially less severe injuries. Sketchy information about accident victims also limits CPSC’s ability to assess which consumer product hazards have a disproportionate impact on vulnerable populations. NEISS and death certificates provide only the age of the victim; no systematic or comprehensive information is available to determine whether a given hazard has a special impact on other vulnerable populations, such as persons with disabilities. A former commissioner told us that the lack of other demographic information such as race, income, and disability status made it difficult for her to know which subpopulations were predominantly affected by a particular hazard. Another commissioner echoed this concern, and said that such information would be useful in targeting public information campaigns on certain hazards to those groups that needed the information most. CPSC staff identified the need for additional exposure data as a major concern. However, they also told us that obtaining information on exposure to products and establishing causation requires special efforts that can be time consuming and costly. Although CPSC’s priority-setting criteria include exposure to the hazard, exposure data are generally not included in CPSC’s ongoing data collection efforts. As a result, exposure is assessed either not at all or further along in the project, precluding the use of exposure as an effective criterion for project selection. Similarly, CPSC’s emergency room and death certificate data provide little information on the circumstances surrounding the incident. As a result, CPSC staff perform follow-up investigations of selected incidents to obtain additional detail. These investigations may include detailed interviews with victims and/or witnesses, police or fire reports, photographs of the product and/or accident site, laboratory testing of the product, or recreations of the incident. As with exposure data, these investigations are not conducted for every project and are completed only after a project is well under way. Thus, assessment of causation at the project selection stage is unavoidably speculative. CPSC conducts a number of projects annually, but staff were unable to provide a comprehensive list of projects the agency had worked on in the 6-year period we examined. CPSC was also unable to verify the completeness of the project list we compiled from agency documents and interviews with staff. According to CPSC staff, internal management systems do not contain this information and such a list could be compiled only by relying on institutional memories of staff members who had been with the agency long enough to know which products the agency had addressed. Without systematic and comprehensive information on its past efforts, CPSC cannot assess whether some hazard areas have been overrepresented and whether agency resources might be more efficiently employed. CPSC also lacks information on the characteristics of, resources used on, or outcomes of individual projects. CPSC’s MIS tracks contract dollars and staff time by accounting codes that cover some specific projects and general categories, such as compliance work, which are composed of numerous activities. 
According to agency officials, CPSC’s MIS generally cannot provide descriptive information on individual projects, such as when a project was started or concluded; the number of staff days used; what aspect of the product was addressed; whether the project originated from a petition, congressional mandate or other source; or what action was taken to address the hazard (mandatory standard, voluntary standard, or public information campaign, for example). In addition, CPSC staff told us that two separate projects involving the same or similar products at different times may be assigned the same MIS code. As a result, even if a project appears to be tracked in the MIS, reliable inferences cannot be drawn from MIS data. CPSC’s limited data on deaths and injuries, combined with its lack of information on projects, reduce the agency’s ability to evaluate the impact of its work, a process it is now required to undertake under recently passed legislation. The Results Act requires every federal agency to evaluate the effectiveness of its efforts starting in fiscal year 1999. The Results Act is aimed at increasing the investment return of tax dollars by improving agencies’ performance. Under the Results Act, an agency is to set mission-related goals and measure progress toward these goals to evaluate agency impact. CPSC has preliminarily identified results-oriented goals in four areas: (1) reducing head injuries to children, (2) reducing deaths from fires, (3) reducing deaths from carbon monoxide poisoning, and (4) reducing deaths from electrocutions. However, the limitations in CPSC’s injury and death data raise a question about how well CPSC will be able to evaluate the effectiveness of agency actions in these and other areas. CPSC uses two analytical tools—risk assessment and cost-benefit analysis—to assist in making decisions on regulatory and nonregulatory methods to address potential hazards. Risk assessment involves estimating the likelihood of an adverse event (such as injury or death). For example, CPSC estimated that the risk of death from an accident involving an all-terrain vehicle (ATV) was about 1 death for every 10,000 ATVs in use in 1994. Cost-benefit analysis details and compares the expected effects of a proposed regulation or policy, including both the positive results (benefits) and the negative consequences (costs). Although cost-benefit analysis may not be applicable to every decision and may not be the only factor appropriately considered in a decision, it can be a useful decision-making tool. The Congress requires CPSC to perform cost-benefit analysis before issuing certain regulations, and CPSC has conducted cost-benefit analysis for these regulations and in other situations in which it was not required. Although perfectly complete and accurate data are rarely available for any analysis, CPSC’s data are frequently inadequate to support detailed, thorough, and careful risk assessment and cost-benefit analysis. In addition, CPSC’s cost-benefit analyses are frequently not comprehensive, and the reports on these analyses are not sufficiently detailed. Improvements in the agency’s methodology and in the quality of the underlying data are necessary to ensure the clarity and accuracy of CPSC’s risk assessments and cost-benefit analyses. Cost-benefit analysis can help decisionmakers by organizing and aggregating all the relevant information to clarify the nature of the trade-offs involved in a decision. 
Although cost-benefit analysis may not be appropriately used as the sole criterion for making a decision, a well-constructed cost-benefit analysis can highlight crucial factors, expose possible biases, and facilitate informed decisions even when it is impossible to measure all the potential effects of a specific regulatory proposal. The Congress has required CPSC to perform and publish a cost-benefit analysis when issuing a regulation (such as a mandatory standard or product ban) under the Consumer Product Safety Act. In addition, CPSC is also required to conduct cost-benefit analyses before issuing regulations under the authority of portions of the Federal Hazardous Substances Act (specified labeling provisions are exempt from this requirement) and the Flammable Fabrics Act. Because most of the agency’s projects do not involve mandatory regulation, relatively few CPSC projects conducted between January 1, 1990, and September 30, 1996, were subject to these requirements. We identified 8 cost-benefit analyses that CPSC performed in accordance with these requirements, and an additional 21 analyses it conducted in situations in which it was not required. For example, CPSC performed cost-benefit analyses in eight instances in which it was considering issuing requirements for child-resistant packaging under the Poison Prevention Packaging Act, which does not require cost-benefit analysis. CPSC frequently conducts cost-benefit analysis with respect to regulatory procedures, whether or not it is required to do so. However, a complete cost-benefit analysis is done less frequently for voluntary standards projects or information and education efforts, although some economic information may be generated to assist such projects. In addition to the complete cost-benefit analyses, we identified an additional 23 cases in which some information was provided on some economic benefits or costs. Before issuing a mandatory regulation, CPSC is required to consider the degree and nature of the risk of injury the regulation is designed to eliminate or reduce. However, CPSC usually does not conduct a formal numerical risk assessment before issuing a regulation, and the law does not require it. We found 24 risk assessments conducted by CPSC between January 1, 1990, and September 30, 1996; only 4 of these were associated with regulatory action. Both risk assessment and cost-benefit analysis require extensive data. Risk assessment requires information both on the adverse event and on exposure to the precipitating circumstances. For example, when CPSC performed a risk assessment on floor furnaces, the agency estimated the number of previous injuries associated with floor furnaces and the number of floor furnaces in use. Similarly, when CPSC performed a risk assessment to examine the risk of contracting cancer from dioxin traces in common paper products, the agency used information from laboratory studies on dioxin’s link to cancer and also incorporated data on exposure to paper products. Because cost-benefit analysis includes a comprehensive delineation of the expected effects of a given proposal, a careful and thorough cost-benefit analysis will also be very data intensive and rich with detail. CPSC’s data systems are frequently unable to adequately meet the extensive demands for information posed by risk assessment and cost-benefit analysis. As a result, the agency’s estimates of risks, costs, and benefits are less accurate because they reflect the substantial limitations of the underlying data. 
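To make these data demands concrete, the following Python sketch works through the kind of rate calculation that underlies a product risk assessment. All figures are hypothetical and the calculation is not CPSC’s methodology; it simply shows how an incomplete incident count and an uncertain exposure estimate carry through to the resulting rate.

# Illustration only: hypothetical figures, not a CPSC risk assessment.
# A simple product risk rate combines an incident count with an exposure estimate.

def rate_per_10000(incidents: float, units_in_use: float) -> float:
    """Incidents per 10,000 products in use."""
    return 10_000 * incidents / units_in_use

reported_incidents = 120                      # surveillance count, known to be incomplete
units_low, units_high = 2_000_000, 3_000_000  # rough bounds on the number of products in use

print(f"{rate_per_10000(reported_incidents, units_high):.1f} to "
      f"{rate_per_10000(reported_incidents, units_low):.1f} incidents per 10,000 units in use")

Even in this simplified form, the answer is a range rather than a point estimate, and it would shift upward if the incident count were adjusted for undercounting.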
Available information does not permit us to determine the potential impact of better data on the results of CPSC’s cost-benefit analyses and risk assessments. Some of these data weaknesses tend to make product hazard risks seem larger, and other problems tend to make the same risks appear smaller. For example, because CPSC’s data undercount the deaths and injuries associated with particular consumer products, estimates of risk—and the potential benefits of reducing that risk—appear smaller. However, CPSC’s surveillance data provide information only on whether a product was involved in an accident, not on whether the product caused the accident. At least at the initial stages of a project, this can make the risks assessed by CPSC—and the benefits of reducing those risks—appear larger. For risk assessment, CPSC must also obtain information on exposure to the hazard. For example, to assess the risk associated with aluminum ladders, CPSC obtained estimates of the number of ladders available for use and on the number of times each year the ladder was used. Obtaining exposure data presents special challenges for CPSC. Because the product definition that relates to a particular hazard is often relatively narrow, existing data sources frequently offer insufficient detail. For example, CPSC was unable to use Census sources to determine the number of saunas in the United States because saunas were included in a broader classification of products when government data were collected. In addition, CPSC staff told us that it is often difficult to find accurate information on the number of products that are in households and available for use. CPSC sometimes responds to these challenges by using mathematical modeling techniques or easier-to-obtain proxy measures (such as population) to estimate product exposure. In addition, for a few large-scale projects, CPSC has incurred the substantial expenses necessary to conduct its own detailed exposure survey. For example, CPSC conducted a survey of households that asked detailed questions on matches and disposable cigarette lighters—the number purchased, where they are generally kept, how they are used, and other details. However, for the majority of the projects we reviewed, CPSC did not gather any data on exposure. Of the 80 projects we reviewed for which briefing packages were prepared, only 26 included information on exposure to the hazard, and CPSC’s risk assessments were confined to 24 cases between 1990 and 1996—approximately 21 percent of all projects conducted over that time period. The methodology used to conduct a cost-benefit analysis will frequently depend on the circumstances and the context of the analysis. For this reason, there is no complete set of standards for evaluating the quality of an individual cost-benefit analysis. However, the professional literature offers some guidance for analysts, and certain specific elements are frequently mentioned as essential for cost-benefit analysis. For example, because cost-benefit analysis is meant to be a complete delineation of the expected effects of a proposed action, all potential impacts (even those that cannot be quantified) should be discussed. To ensure that the reader is able to make an informed judgment, it is important to be explicit about the underlying data, methodology, and assumptions. 
Accordingly, the literature suggests that all methodological choices and assumptions should be detailed, all limitations pertaining to the data should be revealed, and measures of uncertainty should be provided to allow the reader to take into account the precision of the underlying data. Similarly, the literature calls for sensitivity analysis, which enables the reader to determine which assumptions, values, and parameters of the cost-benefit analysis are most important to the conclusions. On the basis of our review of the cost-benefit literature, we developed a list of the elements that are frequently used in evaluating cost-benefit analysis. This list, and a description of all the factors we examined, is in appendix IV. Although we compared each of these elements with each of CPSC’s analyses, not all elements were applicable to each case. For example, in some cases, the circumstances indicated by a given element—such as reliance on statistical data—were not found, and those cases were treated as not applicable to that element. In addition, for some elements it was not always possible to determine whether CPSC’s analysis was consistent with the element. For these reasons, and to emphasize those areas that we viewed as most critical, we reported only the evaluation results that relate to key elements, applied to the majority of CPSC’s analyses, and for which a determination was possible in all or nearly all cases. Our review of all the cost-benefit analyses that CPSC conducted between January 1, 1990, and October 31, 1996, showed that for many—but not all—elements, CPSC’s analyses were not comprehensive and not reported in sufficient detail (see table 2). For example, CPSC provided descriptive information on proposals and also provided information on a variety of reasonable alternatives in almost 100 percent of cases. However, CPSC analyses generally did not provide measures of uncertainty for the underlying data. Estimates derived from samples are subject to sampling error, which can be especially large when the estimates are projected from relatively few cases. In only 17 percent of its analyses did CPSC provide any statistical information on the precision of the underlying estimates. Similarly, when estimates are based on a relatively small sample size, projections are generally not considered reliable. CPSC analysts cautioned the reader against drawing conclusions based on small sample data only 45 percent of the time. In addition, some of CPSC’s data sets have a known upward or downward bias because of the way the data were constructed. For example, CPSC’s estimates of deaths based on its death certificate database will be understated, and when estimates of incidents are based only on investigated or reported cases (such as cases reported to CPSC’s hot line), two potential biases are likely to be introduced into the analysis: (1) the estimates are likely to be biased downward by nonreporting and (2) the incidents reported tend to be the more severe ones. In only 53 percent of applicable cases did CPSC’s analysis inform the reader of known limitations inherent in the data being used for cost-benefit analysis. We identified several other areas in which CPSC analyses could benefit from improvement. For example, researchers agree that sensitivity analysis—a technique that enables the reader to determine which assumptions, data limitations, or parameters are most important to the conclusions—should be incorporated in cost-benefit analyses.
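To illustrate what a straightforward sensitivity analysis can convey, the following Python sketch varies two assumptions in a stripped-down net-benefit calculation. All figures are hypothetical and the calculation is not CPSC’s cost-benefit model; the point is only that presenting a grid of results lets the reader see which assumptions drive the conclusion.

# Illustration only: hypothetical figures, not CPSC's cost-benefit model.
# A sensitivity analysis shows how the conclusion shifts as key assumptions vary.

def net_benefit(injuries_avoided: int, cost_per_injury: int, compliance_cost: int) -> int:
    """Societal benefit of avoided injuries minus the cost of complying with a standard."""
    return injuries_avoided * cost_per_injury - compliance_cost

compliance_cost = 40_000_000  # assumed annual cost of the standard to industry
for injuries_avoided in (500, 1_000, 2_000):          # assumed effectiveness of the standard
    for cost_per_injury in (15_000, 30_000, 60_000):  # assumed societal cost per injury avoided
        result = net_benefit(injuries_avoided, cost_per_injury, compliance_cost)
        print(f"avoided {injuries_avoided:>5}, cost per injury ${cost_per_injury:>6,}: "
              f"net benefit ${result:>12,}")

In this hypothetical case the sign of the net benefit turns on both the assumed effectiveness of the standard and the assumed cost per injury, which is exactly the kind of dependence a sensitivity analysis is meant to expose.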
CPSC usually did not provide sensitivity analysis information. For example, agency briefing packages did not include any information on how CPSC’s injury cost estimates were derived or what factors were the largest components of injury costs. CPSC applies a statistical model to injury estimates to derive a figure for injury cost. The model that computed injury cost estimates accounts for a number of components, including medical costs, forgone wages, and pain and suffering. With only one exception, CPSC briefing packages provided only the total cost, without any information on the derivation of those costs or the individual components. In addition, CPSC provided only an average injury cost, not a range of injury cost estimates. For situations in which injuries differ in severity, or for projects in which severity is probably overstated or understated in the data, the reader would find such information useful. Forty-six percent of CPSC analyses did not consider the full range of costs and benefits likely to result from regulation. For example, CPSC analysts frequently omitted mentioning intangible costs and/or benefits (costs or benefits that are difficult to quantify, such as loss of consumer enjoyment) or potential indirect effects (such as changes in the prices of related goods). In addition, CPSC frequently excluded risk-risk considerations from its evaluation of the costs and benefits of potential actions. Sometimes actions taken to reduce one risk can have the unintended effect of increasing that or another risk. Individuals may take more or fewer precautions in response to a change in a product’s safety features, and this behavior can result in an increase in the risk the intervention was designed to mitigate. For example, in establishing a standard for child-resistant packaging that was also “senior-friendly,” CPSC considered that because child-resistant medicine bottles can be difficult to open, a grandparent may leave the cap off the bottle, creating an even greater risk than would be the case with the original cap. Although CPSC considered such factors in some cases, only 49 percent of its analyses reflected potential risk-risk trade-offs. CPSC has not established internal procedures that require analysts to conduct comprehensive analyses and report them in sufficient detail. For example, according to CPSC staff, the agency has little written guidance about what factors should be included in cost-benefit analyses, what methodology should be used to incorporate these factors, and how the results should be presented. Staff also told us that CPSC analyses are not generally subject to external peer review. Such reviews can serve as an important mechanism for enhancing the quality and credibility of the analyses that are used to help make key agency decisions. To help minimize the possibility that a product might be unfairly disparaged, in section 6(b) of the Consumer Product Safety Act the Congress imposed restrictions on the disclosure of manufacturer-specific information by CPSC. Before CPSC can disclose any information that identifies a manufacturer, the agency must take “reasonable steps” to verify the accuracy of the information and to ensure that disclosure is fair, notify the manufacturer that the information is subject to release, and provide the manufacturer with an opportunity to comment on the information. 
If the manufacturer requests that its comments be included in CPSC’s disclosure of the information, CPSC can release the information only if accompanied by the manufacturer’s comments. If the manufacturer objects to the release even with its comments included, it can challenge CPSC in U.S. district court to block disclosure. These restrictions on the release of information apply not only to information the agency issues on its own—such as a press release—but also to information disclosed in response to a request under FOIA. In addition, section 6(b) requires CPSC to establish procedures to ensure that releases of information that reflect on the safety of a consumer product or class of products are accurate and not misleading, regardless of whether the information disclosed identifies a specific manufacturer. CPSC has established procedures to implement these requirements, including requiring technical staff to “sign off” on information releases and notifying manufacturers. Evidence from several sources—industry sources, published legal decisions, and agency retractions—suggests that CPSC has complied with its statutory requirements. CPSC staff and commissioners, industry representatives, and consumer advocates expressed a wide variety of opinions on the effectiveness of these requirements, and some individuals favored specific changes. Part of CPSC’s mission is to provide the public with information to help individuals use consumer products safely. CPSC disseminates information through its own initiatives and also in response to requests from the public. For example, CPSC informs both consumers and businesses about product hazards through product recall notices, provision of information at trade shows and special events, and a telephone hot line and Internet site. In addition, CPSC responds to thousands of telephone and written requests for information each year. CPSC’s mission and its responsibility under FOIA require the agency to disseminate a great deal of information. However, because much of this information is about specific products or manufacturers, CPSC’s information disclosure is often restricted under section 6(b). In its regulation implementing section 6(b), CPSC established several measures designed to ensure compliance with the statutory requirements. These measures include obtaining written verification from consumers of the information they report to the agency, notifying manufacturers by certified mail when manufacturer-specific information is requested, and giving manufacturers the option of having their comments published along with any information being disclosed. CPSC’s procedures outline several steps to verify all information before it is released. For example, CPSC checks each report the agency receives from consumers about incidents involving potentially hazardous products to ensure that CPSC’s records accurately reflect the consumer’s version of the incident. Agency procedures require staff to send a written description of each incident back to the person who reported it with a request that he or she review it and state if any information needs to be corrected or supplemented. The commission staff review each of these incident reports for discrepancies or any obvious inaccuracies. Once they have been checked and confirmed with the consumer, incident reports are made available to the public upon request. If the reports contain information that would identify a specific manufacturer, they are subject to 6(b) requirements regarding disclosure.
CPSC also investigates events surrounding selected product-related injuries or incidents. Investigation reports provide details about incident sequence, human behavior factors, and product involvement. The reports generally contain the consumer’s version of what happened and the observations of witnesses, fire and police investigators, and others. Investigations may also include follow-up inspections at retail stores or service centers. However, neither investigations nor incident reports include the manufacturer’s view of the incident; the manufacturer’s point of view may be expressed in the comments it submits before the report is released. Like incident reports, investigation files are available to the public upon request and are subject to 6(b) requirements. CPSC has issued clearance procedures to cover situations in which commissioners or staff initiate public disclosures—for example, when the Commission publishes the results of agency research. These procedures are intended to verify any information—oral or written—released by the Commission, regardless of whether the information identifies a manufacturer. Under CPSC’s guidelines, each assistant or associate executive director whose area of responsibility is involved must review the information and indicate approval for the release in writing. After all other review has been completed, the Office of the General Counsel must also review and approve the release. Press releases with respect to product recalls are written and issued jointly by CPSC and the affected manufacturer. In addition, CPSC’s clearance procedures for press releases state that final clearance must be obtained from the Office of the Chairman of the Commission. CPSC staff also told us that the current chairman’s policy of coordinating media inquiries through the Office of Public Affairs is intended to ensure that information provided is in compliance with section 6(b). CPSC has also established procedures to implement the notification and comment provisions of section 6(b). Before CPSC releases information in response to an FOIA request, an information specialist determines whether a manufacturer could be readily identified. CPSC staff said that agency policy is to clearly and narrowly identify hazardous products (including by manufacturer) whenever possible, in order to prevent the person receiving the information from confusing safe products with unsafe products of the same type. However, if an information request is broad, such as “all bicycle accidents,” names of manufacturers are removed before the information is released, according to CPSC staff. If the requested information could identify a manufacturer, then staff review the information for appropriate exemptions (such as trade secrets), and delete portions as appropriate. The manufacturer is given 20 calendar days in which to review and comment on a summary of the information CPSC plans to release. Because CPSC often receives multiple requests for the same information, the agency informs manufacturers that it will not send them copies of subsequent requests for the same information unless specifically requested to do so. However, according to CPSC staff, more than 80 percent of the manufacturers that submit 6(b) comments routinely request such notification. In calendar year 1993 (the most recent year for which data were available), CPSC sent out 487 notices to manufacturers and received 154 responses (32 percent).
Twenty-five manufacturers (5 percent) contested the accuracy of the information or claimed that the proposed disclosure would be unfair. If a manufacturer fails to comment, the information can be released 30 days from the date CPSC notified the manufacturer. After taking the manufacturer’s comments into account, CPSC may decide to disclose incident information despite the firm’s objection if, for example, the comments lack specific information to support a claim of inaccuracy or a request for confidentiality. If CPSC chooses to disclose information over a manufacturer’s objection, it must release the manufacturer’s comments along with the other information, unless the manufacturer requests otherwise. In addition, if CPSC decides to release information and the manufacturer objects, the manufacturer has 10 working days to go to court and seek to enjoin CPSC from disclosing the information. Manufacturers have sued CPSC to prohibit disclosure of records only 11 times since the agency was founded, and CPSC was prohibited from releasing the information in 2 of these cases. Information from three sources of evidence—industry, published legal cases, and data on retractions—suggests that CPSC complies with its statutory requirements concerning information release. Industry sources, even those otherwise critical of the agency, told us that CPSC generally does a good job of keeping proprietary information confidential as required by law. Our review of published legal decisions found no rulings that CPSC violated its statutory requirements concerning the release of information. Retractions by CPSC are also rare. If CPSC finds that it has disclosed inaccurate or misleading information that reflects adversely on any consumer product or class of consumer products or on any manufacturer, it must publish a retraction. Any retraction must be made in the same manner in which CPSC released the original information. According to CPSC, it has published only three such retractions. Two of these retractions, in 1984 and 1994, were made in response to requests from firms. A third retraction, in 1990, was issued after CPSC discovered that a report in its public reading room had mistakenly included inaccurate information. Industry observers, CPSC staff, and consumer groups expressed a wide range of opinions on the effectiveness of section 6(b). In response to our inquiries, some CPSC commissioners and former commissioners said that these restrictions serve a useful purpose and should not be changed. However, CPSC’s current chairman, industry and advocacy group representatives, and others expressed dissatisfaction with 6(b), and some of them suggested possible changes. Although these individuals raised issues about the extent of the protection afforded to manufacturers and the resources necessary to ensure compliance, we did not assess whether the specific suggestions were necessary or feasible. CPSC’s chairman, other CPSC officials, former commissioners, and the representative of a consumer advocacy group stated that compliance with 6(b) is costly for CPSC and delays the agency in getting information out to the public. Although CPSC has not estimated the cost of complying with 6(b), agency staff told us that it takes much more staff time to respond to FOIA requests that come under 6(b) than it does to respond to FOIA requests that do not involve company names.
To reduce the burden of complying with these requirements, CPSC staff have suggested that the notification requirement that gives manufacturers 20 days in which to comment should apply only the first time an item is released. Some have suggested that instead of requiring CPSC to verify information from consumer complaints, the agency should be allowed to issue such information with an explicit disclaimer that CPSC has not taken a position on the validity of the consumer’s report. Instead of reducing CPSC’s verification requirements, some industry representatives suggested expanding them. These manufacturers stated that before CPSC releases incident information, it should substantiate the information rather than relying on a consumer’s testimony. Industry representatives stated—and CPSC staff confirmed—that many of the requests for CPSC information come from attorneys for plaintiffs in product liability suits. As a result, some industry representatives expressed concern that unsubstantiated consumer complaints could be used against them in product liability litigation. They suggested that 6(b) should require CPSC to substantiate all incident reports by investigating them before they can be disclosed instead of merely checking with the consumer as it does now. However, CPSC officials told us that investigations—which are time consuming and costly—can be conducted only on a small proportion of specially selected cases because of limited resources. Industry representatives also said that the current restrictions do not provide sufficient protection when information is released on product groups instead of on the products of an individual manufacturer. Several industry representatives expressed concern that producers of safer products may be unfairly maligned when CPSC releases information about a group of products, only some of which may be associated with a safety problem. According to some of these industry representatives, CPSC should extend protection to product groups similar to the safeguards manufacturers receive under 6(b). Retailers’ representatives also suggested specific changes to CPSC’s information release requirements. They said that retailers do not receive timely notice of recalls because CPSC has interpreted the law to prohibit advance notification of retailers. Consequently, the retailers said that they sometimes receive notice of recalls at the same time as their customers and have no time to prepare. For example, when consumers come in with recalled products, the retailer may not yet know whether the manufacturer has agreed to replace the product, refund the purchase price, or provide some other remedy. Retailers’ representatives suggested amending 6(b) to give 5 business days’ advance notice to retailers before the public announcement of a recall. CPSC officials said that typically manufacturers are and should be the ones to contact the retailers and make all arrangements for a recall. Although they disagreed on the need for a statutory change, both CPSC staff and a major retailers’ association said that they were trying to work out a more satisfactory arrangement. CPSC’s current data provide insufficient information to monitor ongoing projects and to determine whether potential projects adhere to the agency’s selection criteria. Moreover, inadequate agency data often prevent CPSC from conducting risk assessments on projects, potentially limiting the agency’s ability to target resources to the hazards presenting the greatest risks. 
The lack of sufficient data, combined with methodological problems, also makes CPSC’s cost-benefit analyses less useful than they could be. With more detailed information on both internal resources and external product hazards, CPSC would be better able to assure the Congress and taxpayers that its resources are expended wisely. We identified several key areas where CPSC management could improve its collection and analysis of external data. For example, CPSC would be better able to make informed decisions on potential agency projects if it had additional statistically reliable and timely data in several areas, including (1) injuries treated outside of hospital emergency rooms; (2) exposure to consumer products and product-related hazards; (3) chronic illnesses related to consumer products; and (4) hazards that disproportionately affect certain vulnerable populations, such as low-income individuals and consumers with disabilities. In addition, project selection and implementation could be improved if commissioners and staff had tracking information on CPSC projects, such as starting and ending dates, project origin, project costs (including staff days and contract costs), and agency actions taken to address the potential hazard. Such information could assist the commissioners in monitoring ongoing projects, targeting new efforts on the basis of previous work, and assessing the allocation of resources across current projects and among major hazard areas. CPSC could also benefit from an improved methodology for cost-benefit analysis. A stronger methodological base for CPSC’s cost-benefit analyses, including more complete documentation, would promote sound regulatory decision-making and could improve the quality of the input and comment CPSC receives during the regulatory process. Without these improved data, CPSC will remain unable to accurately apply measurable criteria in choosing projects or to rigorously assess relative risks among potential hazards. Obtaining some of this necessary information—more representative and complete injury and exposure data, for example—may require a significant investment of resources, so CPSC may need to prioritize these data needs. In doing so, it is important for CPSC to draw on the insight of individuals outside the agency to ensure that all available alternatives for obtaining these data are explored. Some of the other information CPSC needs, however, could be compiled internally at relatively little additional cost and effort. For example, more detailed information on individual projects could be collected within the agency. Similarly, the methodological problems we identified in CPSC’s cost-benefit analysis could be remedied without additional data. We recommend that the Chairman of CPSC take the following actions: Improve the quality of CPSC’s injury, death, and exposure data by consulting with experts both within and outside the agency to (1) prioritize CPSC’s needs for additional statistically valid surveillance data on injuries and deaths related to consumer products and on exposure to consumer products and product-related hazards, (2) investigate the feasibility and cost of alternative means of obtaining these data, and (3) design data systems to collect and analyze this information. Direct agency staff to develop and implement a project management tracking system to compile information on current agency projects.
For each project, such a system should include, at a minimum, a description of the hazard addressed, start and end dates, project origin, and major agency action resulting from it. Direct agency staff to develop and implement procedures to ensure that all cost-benefit analyses performed on behalf of CPSC are comprehensive and reported in sufficient detail, including providing measures of precision for underlying data, incorporating information on all important costs and benefits, and performing sensitivity analysis. We received two separate sets of comments from CPSC’s commissioners—one from Chairman Brown and Commissioner Moore and one from Commissioner Gall. CPSC staff also submitted some technical comments, which we incorporated in the report as appropriate. Chairman Brown and Commissioner Moore stated that they are considering our recommendations and that in some respects our recommendations parallel efforts already under way at the Commission. However, they disagreed with some of the specific findings of our report. They stated that CPSC’s actions are based on solid injury and death estimates and that CPSC (1) employs sound economic analyses that are appropriate for the circumstances, (2) tracks projects to monitor the progress of its work, and (3) has been successful in dramatically reducing the threat to consumers from unsafe products. We concluded that CPSC’s death and injury data are generally insufficient to support the agency’s project selection process for two reasons: (1) CPSC has little or no data on several project selection criteria and (2) CPSC’s data on its other project selection criteria exhibit significant gaps. Similarly, our finding that CPSC’s cost-benefit analyses were not comprehensive and not reported in sufficient detail supports our conclusion that these analyses were less useful than they could have been in the agency’s decision-making process. Because CPSC’s current management information system operates at a very high level of generality and (according to CPSC staff) does not produce consistent, reliable information, we recommend that the agency implement an improved tracking system that would provide enough information to monitor the projects selected and the resources spent for each hazard. We did not review whether CPSC’s actions had been successful in reducing the number of injuries and deaths associated with consumer products. The full text of Chairman Brown and Commissioner Moore’s detailed comments and our response are in appendix VI. Commissioner Gall stated in her comments that she is looking forward to implementing many of the reforms we recommended. She also stated that she agrees with our conclusion that CPSC could improve the way it gathers information, including reassessing the need for injury data gathered outside of hospital emergency rooms. Commissioner Gall also agreed that CPSC needs an improved system to track ongoing activities or projects and that sensitivity analysis, measures of uncertainty, and risk-risk analysis should be incorporated into CPSC analyses. However, she also commented that additional GAO analysis of the implications of inadequate data could have been helpful. Unfortunately, available information does not permit us to determine the impact of better-quality data on the decisions made by CPSC. In addition, Commissioner Gall said that she believes the Compliance function of CPSC also warrants further review. Such a review is outside the scope of this report. 
Commissioner Gall’s detailed comments and our response are in appendix VII. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies to the commissioners of the Consumer Product Safety Commission and make copies available to others upon request. If you or your staff have any questions about this report, please contact me at (202) 512-7014. Major contributors are listed in appendix VIII. This report provides information on project selection, analytical research, and information release procedures at the Consumer Product Safety Commission (CPSC), which we gathered from a variety of sources inside and outside the agency. For example, we interviewed current CPSC staff, including analysts and line managers, to obtain information about CPSC’s data and procedures. We also interviewed the three current CPSC commissioners and four former commissioners. In addition, we interviewed representatives of manufacturers, retailers, trade associations, consumer groups, and academic and other experts to obtain their perspectives on CPSC’s activities. We reviewed the legislation governing CPSC’s activities and the associated regulations. We also reviewed statements by public and private organizations and the academic research literature dealing with consumer protection issues and the technical literature on cost-benefit analysis and risk assessment. We gathered extensive information from CPSC documents. We reviewed every agency budget request, operating plan, mid-year review, and regulatory agenda from 1990 to 1996. We reviewed all CPSC’s annual reports from fiscal years 1986 to 1996. We also obtained all CPSC Federal Register notices from January 1993 to May 1996 and selected Federal Register notices from January 1990 to December 1992. We reviewed all CPSC press releases from September 1992 to February 1996. We also reviewed 644 CPSC briefing packages prepared from January 1, 1990, to September 30, 1996, including all briefing packages that concerned a specific, identifiable potential product hazard under consideration for regulatory, voluntary, or information-gathering activities. We excluded matters exclusively related to compliance with and enforcement of existing standards, civil penalties, and internal management issues as well as other items not related to a specific potential product hazard. In addition, we reviewed other agency documents, including information-clearing procedures, documentation of data systems, consumer education materials, internal memorandums and correspondence, and selected documents downloaded from CPSC’s Internet site. In describing CPSC’s project selection, we drew on our interviews with commissioners, former commissioners, and agency staff, and we reviewed the project selection criteria in CPSC’s regulations. Although we described the criteria used by the agency, we did not make judgments about the merit of individual criteria or about whether any criterion was appropriately applied in any specific case. We compiled a list of agency projects on the basis of the documents we reviewed. We included not only major regulatory efforts but also smaller-scale projects. We gathered information on the characteristics of these projects; for example, we obtained information on how the projects originated, what action CPSC took, and (for most) how many deaths and injuries were associated with the hazard. 
CPSC officials examined our list and provided some information that was not readily available in agency documents. However, because CPSC’s internal data management system does not track agency projects, neither we nor CPSC were able to verify the accuracy and comprehensiveness of this information. We also reviewed agency documentation and the technical literature and interviewed outside experts to obtain information on CPSC’s data-gathering systems. To examine CPSC’s cost-benefit analysis, we reviewed the technical literature and developed a set of objective evaluation questions to elicit descriptions of and to evaluate the analytical work. These evaluation questions were designed to indicate only whether CPSC’s cost-benefit analyses were consistent with elements that are commonly used to evaluate whether cost-benefit analyses are comprehensive and reported in sufficient detail. Our review did not assess whether CPSC’s analyses were the best that could be done on any particular topic. We reviewed these evaluation questions with two leading experts in the field of risk assessment and cost-benefit analysis: Professor John Mendeloff of the University of Pittsburgh and Professor John Graham of Harvard University. Then we examined all CPSC’s cost-benefit analyses from January 1990 through September 1996 to see how they measured up against these evaluation questions. In making these assessments, we reviewed all the information available to the commissioners in the written record; that is, we did not confine our analysis to the portion of the briefing packages that dealt explicitly with cost-benefit analysis or risk assessment, and we examined all the briefing packages that pertained to that particular project. Although we evaluated CPSC’s cost-benefit analyses using a wide-ranging set of evaluation questions, we reported results of our analysis only for those questions where: (1) the question was applicable to the majority of the CPSC analyses we reviewed; and (2) we were able to make a determination of whether the analyses were consistent with the evaluation question in applicable cases. To review CPSC’s information release process, we reviewed internal agency procedures, discussed the information release requirements with agency officials and industry representatives, and reviewed the relevant legal cases. We provided information on CPSC’s information release procedures but did not audit whether CPSC complied with these procedures in releasing information. Instead, we relied on other sources—industry sources, published legal decisions, and retractions—to assess whether this readily available evidence suggested that CPSC had violated its statutory requirements in releasing manufacturer-specific information. Our review was conducted between August 1996 and May 1997 in accordance with generally accepted government auditing standards. The Consumer Product Safety Act (P.L. 92-573) provides for the appointment of five commissioners by the President of the United States for staggered 7-year terms. Not more than three of the commissioners may be affiliated with the same political party. The President appoints the commissioners, subject to Senate confirmation. However, the President cannot directly overrule an agency decision or fire a commissioner for making an unpopular decision. Since 1986, there have been no more than three commissioners at one time. In fulfilling the provisions of the Government in the Sunshine Act (P.L.
94-409), in general, commissioners must open to the public any meeting held for the purpose of disposing of CPSC business—that is, if any two of the three commissioners want to discuss agency business, generally public notice must first be given. On a daily basis, communication among CPSC offices takes place at the staff level. The chairman, the principal executive officer of the Commission, directs all executive and administrative functions of the agency. The chairman oversees the appointment and supervision of all agency personnel except those employed in the immediate offices of other commissioners. CPSC annually elects a vice chairman to act in the absence or disability of the chairman or in case of a vacancy in the office of chairman. As figure II.1 shows, six offices report directly to the chairman: the Office of the Secretary manages the agency’s records, publishes CPSC’s public meetings calendar, and administers the requirements of the Freedom of Information Act; the Office of Congressional Relations serves as CPSC’s liaison with the Congress; the Office of General Counsel provides advice and counsel to the agency on legal matters; the Office of the Inspector General is an independent office that undertakes activities to prevent and detect waste, fraud, and abuse; and the Office of Equal Opportunity and Minority Enterprise monitors compliance with equal opportunity employment laws. The agency’s executive director, who is appointed by and reports directly to the chairman, is responsible for overseeing the day-to-day management of the agency. The executive director’s office also houses the agency’s small business ombudsman, who acts as a liaison to small businesses, providing technical assistance concerning CPSC programs and regulations, among other things. The executive director has line authority over nine operating directorates: The Office of Compliance is responsible for compliance with and administrative enforcement of CPSC regulations, including product standards. The office initiates investigations on safety hazards of products already in the consumer marketplace or being offered for import. It negotiates and subsequently monitors corrective action plans designed to give public notice of hazards and recall defective or noncomplying products. The Office of Hazard Identification and Reduction manages many of CPSC’s activities that involve identifying, examining, and remedying new product hazards. The office’s responsibilities include collecting and analyzing data to identify hazards and hazard patterns, carrying out CPSC’s regulatory and voluntary standards development projects, and coordinating international activities related to consumer product safety. Serving in various groups under the director of hazard identification and reduction are the epidemiologists and statisticians, who provide information on particular products; engineers and human factors specialists, who help develop design remedies for product hazards; and economists, who provide market information and perform cost-benefit analysis on commission projects. The Office of Information and Public Affairs is CPSC’s touchstone with consumers and the media. It prepares and publishes brochures, booklets, fact sheets, and safety alerts providing safety information to consumers on products used in the home environment. The Office of Information and Public Affairs also handles requests from the media for information and access to CPSC staff and prepares press releases announcing CPSC actions or decisions. 
The Office of Information Services manages the agency’s toll-free hot line, Internet, and fax-on-demand services. Within the Office of Information Services, CPSC’s Information Clearinghouse provides summary information on product-related injuries in response to requests from the public. The Office of the Budget is responsible for overseeing the development of CPSC’s budget. The Office of Human Resources Management provides support to the agency in recruitment and placement, position classification, training, and other personnel areas. The Office of Planning and Evaluation assists with long-term planning efforts and manages and ensures agency compliance with paperwork reduction regulations. In addition, the Office of Planning and Evaluation is currently preparing for the implementation of the Government Performance and Results Act (the Results Act) and reviewing the effectiveness of the agency’s outreach efforts. The Directorate for Field Operations coordinates the activities of CPSC’s 128 field staff located in 38 cities across the country. CPSC field staff carry out a wide range of agency activities, including conducting investigations of injury incidents; acting as liaisons with state and local organizations; working with the local press to support consumer education campaigns; and inspecting manufacturers, importers, distributors, and retailers to ensure compliance with safety regulations and standards. To complement the efforts of the field staff at the local level, CPSC contracts some product safety work to state and local entities; commissions state and local officials to function as officials of CPSC for the purpose of conducting investigations, inspections, recalls, and sample collections; and transmits information on CPSC programs and activities to states. The Directorate for Administration is responsible for executing general administrative policies. CPSC has addressed many different product hazards. We identified 115 projects the agency worked on between January 1, 1990, and September 30, 1996. Because CPSC does not maintain a list of projects it has worked on or any project characteristics, such as project origin or result, we used various agency documents, such as budget requests, annual operating plans, regulatory agendas, and project briefing documents, to compile our list. We attempted to determine how each project came to the attention of the agency, for example, through a petition, congressional direction, or internal initiation. This was not always clear. For example, in three cases, a petition was submitted and subsequently denied, but the issue was later mandated by the Congress as a project the agency must address. For example, a petition concerning the safety of bicycle helmets submitted in 1990 was denied. In 1994, the Congress passed the Children’s Bicycle Helmet Safety Act of 1994, which requires CPSC to establish a mandatory standard. We counted such projects as originating through the Congress. In other cases, we categorized projects on the basis of information in CPSC documents. We also attempted to identify the most significant action that resulted from the agency’s efforts. Some projects were ongoing for several years and involved more than one agency action, such as information campaigns conducted in conjunction with voluntary standards efforts. For example, agency work on carbon monoxide (CO) detectors resulted in a voluntary standard in 1992, a “CO Awareness Week” information campaign in 1994, and public hearings on the issue in 1996. 
In cases such as these, we attempted to identify the activity with the greatest long-lasting impact; usually this was a voluntary or mandatory standard. Also, projects classified as “voluntary standard activity” include those for which a new voluntary standard was created, an existing voluntary standard was revised, or staff are currently participating in voluntary standard activities. In addition, we identified whether a risk assessment or cost-benefit analysis was completed for a given project, using CPSC briefing packages. We defined a formal risk assessment to include only those cases for which a numerical estimate of unit risk was calculated; we defined a complete cost-benefit analysis to include those projects for which both economic costs and benefits were explicitly compared, even if quantitative estimates were not made for all economic factors considered. Some risk-related or economic information was provided for many of the projects for which no formal, complete risk assessment or cost-benefit analysis was performed. [Table omitted from this extract: it listed individual projects by hazard area (for example, children’s sleepwear flammability; fireworks devices; automatic gas control valves, gas grills, and unvented gas space heaters; liquefied petroleum gas odorant fade; all-terrain vehicles; baseball chest protectors, face guards, and safety baseballs; swimming pool and spa barriers and covers; ground-fault circuit interrupters; child restraints on grocery carts; choking hazards; drawstrings on children’s clothing; child-resistant packaging of various drugs and household chemicals; coal- and woodburning stove emissions; and indoor air quality) and indicated for each whether a risk assessment or a cost-benefit analysis was completed.] Cost-benefit analysis can be described as an analytical technique that details the expected positive and negative effects of a given policy proposal (expected benefits and costs). To construct this framework, researchers approaching a policy question need considerable information about the potential remedy. Because cost-benefit analysis requires an inclusive approach to evaluating a proposal’s impact, the proposal itself must be well defined, and some information must be known about its impact. Once the expected benefits and costs of the proposal are thoroughly identified and delineated, the next step is to place a value on each benefit or cost. Frequently, these expected costs and benefits are expressed in numerical or monetary terms to facilitate a comparison of the aggregate costs and benefits. If all relevant factors can be translated into monetary terms, the decision rule suggested by this procedure is to accept the proposal if its aggregate benefits exceed its aggregate costs. Although at first the concept behind cost-benefit analysis seems relatively straightforward, the application of cost-benefit analysis is not.
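Before turning to those practical difficulties, the aggregation and decision rule described above can be illustrated with a minimal sketch. All of the categories and dollar figures below are hypothetical illustrations of ours; they are not drawn from any CPSC analysis.

# Minimal, hypothetical sketch of the aggregate cost-benefit decision rule:
# monetize each expected effect, sum the benefits and the costs, and accept
# the proposal only if aggregate benefits exceed aggregate costs.

benefits = {
    "injuries avoided (monetized)": 4_000_000,
    "property damage avoided": 500_000,
}
costs = {
    "one-time redesign costs to industry": 2_500_000,
    "ongoing compliance costs": 750_000,
}

total_benefits = sum(benefits.values())
total_costs = sum(costs.values())
net_benefit = total_benefits - total_costs

print(f"Aggregate benefits: ${total_benefits:,}")
print(f"Aggregate costs:    ${total_costs:,}")
print(f"Net benefit:        ${net_benefit:,}")
print("Passes the aggregate test" if net_benefit > 0 else "Fails the aggregate test")

As the remainder of this appendix discusses, each entry in such a tally raises measurement, valuation, and uncertainty questions that a simple sum cannot resolve.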
The practical difficulties associated with measuring effects, quantifying results, and accounting for uncertainty (to name only a few issues) can create a gap between the way cost-benefit analysis is described in theory and the way it is implemented in practice. For this reason, experts generally agree that analysts should be comprehensive in including all important factors and be explicit in their description of the underlying data, methodology, and assumptions. This appendix describes many of the major methodological issues that often arise in cost-benefit analysis and outlines specific elements that are frequently used to evaluate cost-benefit analyses. The methodology used to conduct a cost-benefit analysis frequently varies depending on the circumstances and the context of the analysis. For this reason, there is no complete set of standards for evaluating the quality of an individual cost-benefit analysis. However, the professional literature offers some guidance for analysts, and certain specific elements are frequently used to determine whether a given analysis meets a minimum threshold of comprehensiveness and openness. These elements are necessary, but not sufficient, for a quality analysis. From the extensive cost-benefit literature, we developed objective evaluation questions to evaluate cost-benefit analyses performed by CPSC. These evaluation questions are summarized in table IV.1. Although these evaluation questions are not a comprehensive measure of the quality of an analysis, they were designed to reflect whether an analysis is comprehensive and reported in sufficient detail. We applied these questions to each of CPSC’s analyses, but not all evaluation questions were applicable to each case. In addition, for some questions it was not always possible to determine whether CPSC’s analysis complied with the particular element reflected in the question. For these reasons, and to emphasize those areas that we considered as most critical, we reported only the evaluation results that related to key elements and that applied to the majority of CPSC’s analyses; these eight are shown in bold in table IV.1. Was descriptive information given about well-defined alternative courses of action? Was more than one alternative course of action considered in the analysis? Was evidence given to support the degree to which the proposal was assumed to mitigate the problem? Were all important categories of potential costs and benefits included in the analysis? (For example, did the analysis include [where applicable] intangible benefits and costs, health and safety benefits, costs of compliance, upfront costs, price changes, and changes in consumer surplus?) Were the effects of intangible costs and benefits detailed explicitly, even if they could not be quantified? (For example, did the analysis include consumer utility?) Did the analysis show how the existence of intangible costs or benefits could affect the conclusions? Were possible indirect effects discussed? (For example, did the analysis include price changes of related goods or likely changes in market concentration?) If indirect effects were measured quantitatively, were the calculations and assumptions behind this measurement discussed in detail? If indirect effects were measured qualitatively, did the analysis show how the existence of these factors could affect the conclusions? Were possible issues of standing identified? Was an incremental (marginal) analysis of costs and benefits performed? 
Were major policy interdependencies identified? (For example, did the analysis discuss the implications if another agency also had jurisdiction over the product, or if the proposal could create a conflict with existing government policy?) Did the analysis make clear the distribution of gains and losses? (For example, who would pay the costs and who would get the benefits of the proposal?) Was the impact of distribution of gains and losses assessed? (For example, would the proposal be likely to have a greater adverse impact on small businesses, potentially leading to increased market concentration?) Were distributional weights employed in the analysis? If weights were used, was information provided about the basis for those weights? Is the standard assumption that relative market prices are insensitive to the policy change likely to hold in this case—that is, are there likely to be few or no macroeconomic effects? If no, was this impact discussed or considered in the analysis? Were risk-risk or offsetting behavior concerns identified and considered in the analysis? Were willingness-to-pay measures used to value reductions in the risk of death or injury? Is the numerical value of a statistical life used in this analysis consistent with the literature? Is the numerical value of a reduction in the chance of injury consistent with the literature? Are costs and benefits that occur at different points in time discounted for differences in timing? Does the discount rate or rates used include the rate suggested by the Office of Management and Budget? Does the discount rate or rates used include the Treasury bill rate for the time horizon of the analysis? Does the discount rate or rates used include a lower “social” discount rate? Where applicable, were the implications of unverified or unverifiable data provided by an interested party identified and discussed? Where applicable, were the implications of uncertainty surrounding a dose-response model identified and discussed? Where applicable, were the implications of uncertainty surrounding cross-species extrapolation identified and discussed? Where applicable, were the implications of the use of data relying on investigated or reported cases identified and discussed? Where applicable, were the implications of survival bias in the underlying data identified and discussed? Where applicable, were the implications of small sample sizes identified and discussed? Where applicable, were the implications of other known biases in the underlying data identified and discussed? If the underlying data were derived from a statistical sample, were appropriate measures of precision provided? Was sensitivity analysis performed on any parameter in the analysis? Was sensitivity analysis performed on the value of a statistical life? Was sensitivity analysis performed on the value of injury reduction? Was sensitivity analysis performed on the discount rate? Was sensitivity analysis performed on the precision of the underlying data? Was sensitivity analysis performed on other important parameters? As an organizing framework, cost-benefit analysis can help a decisionmaker to organize and aggregate all the relevant information in a way that can clarify the nature of the trade-offs involved in a decision. At the same time, by providing a framework to convert dissimilar effects to a common measurement, cost-benefit analysis can allow this information to be weighed and aggregated to help make a decision.
A well-constructed cost-benefit analysis can highlight crucial factors, expose possible biases, and perhaps expand the openness of the decision-making process by clarifying the factors on which the decision was based—whether these factors are purely economic criteria or include other social factors. A key advantage of carefully built cost-benefit analysis is that it promotes explicit rather than implicit decision-making, even when it is impossible to monetize or even quantify all the potential effects of a given regulatory proposal. Despite the value of this analytical tool, in some situations using cost-benefit analysis as the sole basis for decision-making may be inappropriate. For example, alternatives that would eliminate basic human rights or dramatically increase income inequality may be viewed as morally unacceptable. Similarly, when uncertainty about possible effects is so pervasive that attempting to identify potential costs and benefits would amount to nothing more than uninformed speculation, cost-benefit analysis probably has little to offer. In addition, for minor decisions it may not be necessary to employ a rigorous, time- and resource-consuming decision-making process, and a less detailed cost-benefit analysis—or none at all—may be more appropriate. Virtually all observers agree that the appropriate role for cost-benefit analysis—sole decision-making rule, input into decision-making, or not done at all—will depend on the context in which the particular decision is being considered. To some individuals, application of an abstract analytical technique such as cost-benefit analysis is especially unpalatable in certain situations. For example, the idea of applying a value to “saving life” may be distasteful, because we regard our own lives as “priceless” or of infinite value. However, although our lives may be priceless, avoiding risks often means forgoing time, convenience, enjoyment, or other opportunities—all of which do have a price. Thus, policy interventions that can affect life expectancy pose an unavoidable problem—to refuse to consider the value of potentially lifesaving interventions is to implicitly value them at zero, and to consider any potentially lifesaving activity as infinitely valuable implies that individuals would never take any action that involves risk (driving a car, for example). The literature on cost-benefit analysis makes a key distinction in this area. For ethical reasons, most practitioners consider it inappropriate to use cost-benefit analysis to evaluate alternatives that would (with certainty) affect the life expectancy of a given, known individual. But government policy does not usually involve making such decisions. Instead, policy questions typically center on actions that may bring about small changes in the statistical life expectancy of anonymous members of a large group. For example, when CPSC considered imposing a mandatory regulation on large, multiple-tube fireworks, the agency estimated that such a standard could reduce the number of individuals that die in related accidents by one anonymous consumer over a 3-year period. On a daily basis, each of us makes such trade-offs between perfect safety and other things we value, such as the convenience of driving a car or the excitement of skiing down a mountain. Such assessments, which place a value on reductions in risk, are viewed as appropriate, whereas valuing a given person’s life may be viewed as less reasonable. 
Similarly, a decision to surrender basic rights—such as liberty—may be unacceptable on moral grounds, and so cost-benefit analysis might not be applicable in a situation involving such rights. In addition, some individuals object to cost-benefit analysis in circumstances in which other species may be affected adversely. For example, one individual objected to a proposal that would allow increased pollution in a particular river because the proposal would adversely affect the fish and other species in the river. Cost-benefit analysis would probably be unable to address this objection because there is no accepted method to place a value on the losses suffered by the fish. Over the past 3 decades, a substantial economic literature has explored the conceptual, analytical, and technical issues posed by the application of cost-benefit analysis. From this literature, generally accepted standards of professional practice have emerged that cover a wide range of research methods. As one experienced researcher has said, “Good studies follow procedures that are in accord with economic theory for estimating benefits and costs, provide a clear statement of all assumptions, point out uncertainties where they exist, and suggest realistic margins of error.” Even when undertaken by careful and competent researchers, cost-benefit analysis can sometimes be difficult to interpret, especially when uncertainty is substantial and information is incomplete. Some of the issues underlying the application of cost-benefit analysis are conceptual, and the researcher may make a different judgment in different circumstances. With respect to these issues, being complete and explicit is important—the consensus in the literature is that while there may be no single method for dealing with these issues that is universally appropriate, the researcher must be clear and direct in detailing how the issue was addressed in the context of the analysis. Ideally, a cost-benefit analysis involves translating each impact into a common measurement (such as dollars) for comparison. However, some effects may be difficult or impossible to measure or quantify. For example, a researcher evaluating an alternative outpatient mental health treatment program realized that the greater independence afforded by the outpatient program could create increased anxiety—or, alternatively, higher self-esteem—for some participants. However, these intangible effects would be very difficult to measure and even more difficult to quantify. Some individuals have criticized certain individual practitioners of cost-benefit analysis for ignoring or de-emphasizing aspects of proposed changes that cannot be easily quantified. Although it is necessary for such effects to be described and emphasized appropriately, the existence of such intangibles does not necessarily limit the value of the analysis. For example, in the discussion of mental health programs, the researchers found that patients in the experimental program experienced higher satisfaction and reported having a greater number of social relationships. These qualitative benefits were important to the analysis even though they could not be valued in dollars. In addition, if it can be shown explicitly that the value of the intangibles would be unlikely to change the conclusion, then cost-benefit analysis has played a valuable role by considering intangible effects explicitly, even if it is not possible to consider them quantitatively. (We addressed this issue in questions 5 and 6 of table IV.1.)
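One way to keep such unquantified effects visible, consistent with the practice described above, is to carry them through the analysis as an explicit list alongside the monetized totals. The following minimal sketch (the figures and labels are hypothetical and are ours, not CPSC’s) shows the idea.

# Hypothetical sketch: report the monetized totals together with the
# intangible effects that could not be quantified, so a reader can judge
# whether those intangibles might plausibly reverse the conclusion.

monetized_benefits = 1_200_000
monetized_costs = 900_000

unquantified_benefits = ["greater sense of security for users"]
unquantified_costs = ["some loss of product convenience"]

net_monetized = monetized_benefits - monetized_costs
print(f"Net monetized benefit: ${net_monetized:,}")
print("Unquantified benefits:", "; ".join(unquantified_benefits))
print("Unquantified costs:", "; ".join(unquantified_costs))
print(f"The conclusion holds unless the unquantified costs exceed the "
      f"unquantified benefits by more than ${net_monetized:,}.")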
Researchers must also decide whether or not to include indirect or secondary effects that may result from the proposal. For example, a change in tax rules may not only have an initial, direct effect on individuals’ income but may also create a secondary ripple (or “multiplier”) effect on the economy as a whole. Similarly, a medical treatment that prolongs life can be expected to have a secondary effect on health care costs as individuals live longer. A researcher who wants to include such secondary impacts will frequently find a measurement challenge, because determining the magnitude (or perhaps even the existence) of many secondary effects involves answering a “what if?” type of question, and frequently without relevant historical experience. For example, individuals who receive improved treatment for hypertension may contract more prolonged or costly illnesses in the future. Any attempt to measure the impact of these indirect future costs would require the researcher to predict the future health status and health care costs of the patient group in the event that the treatment had or had not been available. In addition, the existence of some secondary effects is dependent on one or more key assumptions. For example, an economic multiplier effect is generally thought to take place only in an economy operating below its productive capacity. A researcher who wants to include such an effect must thus determine whether or not this assumption holds, and then decide how large an effect should be included. Secondary impacts may be considered part of the cost-benefit analysis or extraneous to it. The proper choice in each circumstance probably depends on the likelihood that the secondary effects in fact exist, their probable importance, and the ability to measure them. However, like intangible effects, potential secondary or indirect effects should be detailed even if they are not included quantitatively. Furthermore, should such effects be included, the assumptions underlying their existence and measurement must be revealed explicitly to allow the reader to make an informed judgment. (We addressed this issue in question 7 of table IV.1.) In some circumstances, it may be difficult to define the point of view from which the cost-benefit analysis is calculated—that is, who has “standing,” or whose costs and benefits should be included in the analysis. For most cost-benefit calculations involving government policy, the analysis is appropriately done at the level of “society” (rather than, for example, considering only the impact on individuals in a particular state or locality), so that a wide range of implications is considered. However, in some circumstances, it is important to make the point of view explicit. For example, when policy effects may cross national boundaries, “society” may be best defined on a broader basis. If the cost-benefit analysis is considering policies to reduce acid rain, for example, considering the effects of acid rain and pollution reduction on the Canadians who share U.S. weather patterns may be important. Similarly, analysts may wish to consider the impact of product regulation on foreign producers as well as on U.S. consumers. Even when the unit of analysis is clear, occasionally some members of that unit are not afforded standing under cost-benefit analysis. For example, suppose that a particular policy proposal is likely to result in a decrease in property crime. 
If a thief steals a television, and both the victim and the thief are given standing, then the gain to the thief could offset the loss to the victim in a cost-benefit accounting. “Society,” however, is clearly worse off. Unless we try to measure the psychological and sociological costs of crime, it may make more sense not to afford the thief standing in the cost-benefit analysis. Such issues arise infrequently but should be made clear if they could affect the interpretation of the analysis. (We addressed this issue in question 8 of table IV.1.) Some cost-benefit analyses have been criticized for excluding relevant factors on what appears to be an ad hoc basis. Without including such factors, or providing an explicit justification for excluding them, a researcher limits the value of the analysis. However, while it is important to consider a variety of alternative actions, it is equally crucial to adopt a realistic view of the possible alternatives. For example, political and agency realities often place meaningful constraints on options available for consideration. If these constraints are not incorporated into the analysis, the results could be “pie in the sky” recommendations that would be of little use to the decisionmaker. (We addressed these issues in questions 1, 2, 4, and 10 of table IV.1.) Similarly, it is important that cost-benefit analysis be conducted in a way that evaluates the change in aggregate costs and benefits, with current conditions serving as a baseline. For the properties of economic efficiency to hold, cost-benefit analysis must take what economists call a marginal or incremental approach—that is, it must consider only the changes that would result from the proposed intervention. For example, a proposal to renovate a museum should be evaluated, for cost-benefit purposes, on the basis of the incremental cost of the renovation, not on the basis of the original cost of constructing the entire museum. (We addressed this issue in question 9 of table IV.1.) By definition, cost-benefit analysis is a method for considering the aggregate effects of a given proposal. If the aggregate expected benefits of the proposal exceed the aggregate expected costs, the proposal is said to pass the cost-benefit test. This does not necessarily mean that adopting the proposal would improve the condition of each individual; instead, it implies only that when the expected gains and losses to all individuals are added up, the total expected gains exceed the total expected losses. By adding up individual gains and losses to determine the effect on society as a whole, cost-benefit analysis implicitly assumes that each individual’s gains or losses should be valued equally with any other individual’s gains or losses. As a result, cost-benefit analysis is “neutral” with respect to the distribution of gains and losses. To put it another way, the “fairness” of how gains and losses are distributed is generally not included in the calculations underlying a cost-benefit analysis. Sometimes a decisionmaker might want to address such issues of fairness. For example, if a proposal would involve redistributing income, the loss of one dollar to a wealthy person may be viewed as less consequential than the gain of one dollar to a poor person. In order to address such an issue, the researcher or decisionmaker can consider issues of distribution inside or outside the cost-benefit context.
One method of examining such issues might be to analyze the distributional impact of a proposal separately, and consider fairness issues—along with the cost-benefit calculation—as another factor in the decision-making process. Under these circumstances, if the proposal is viewed as having negative distributional consequences, and it would be costly or difficult to redistribute the gains and losses, then the proposal might be rejected even though it would otherwise meet a cost-benefit test. Occasionally a researcher who wants to consider distributional issues explicitly chooses to incorporate distributional consequences into the mathematics of the cost-benefit calculation. Instead of simply adding up the costs and benefits accruing to each individual, the researcher uses a mathematical formula that applies different weights to different individuals. These weights could be based on a number of factors, depending on the characteristics of the “fairness” issues being addressed. For example, if the proposal would affect income directly, weights could be based on income (with greater weight being applied to gains in income experienced by low-income individuals). Weights could also be based on other circumstances; for example, greater weight could be placed on vulnerable populations or on future generations. Similarly, if a change was proposed in a given program, greater weight could be applied to the smaller number of program participants (who would be more significantly affected) than the rest of society (who would be affected only indirectly). A weighting scheme would be most helpful to decisionmakers and outside observers, however, if the formula was detailed explicitly and the analysis was accompanied by a sensitivity analysis, so the effect of the distributional weights was clear to the reader. (We addressed these issues in questions 11 and 12 of table IV.1.) While a number of conceptual issues arise in cost-benefit analysis for which the appropriate answer depends on circumstances, there are also a number of methodological or implementation issues about which there is widespread agreement. For example, years of debate on the appropriate valuation method for risk reduction have largely been resolved in the favor of “willingness to pay” measures, and the importance of sensitivity analysis is generally recognized. While the choice of discount rate has long been a subject of controversy, with some practitioners arguing for the use of market rates and others advocating a lower “social” discount rate, the literature generally recognizes the value of multiple rates. Finally, there is virtually universal agreement on the importance of reliable data and careful risk measurement to cost-benefit analysis. Frequently, the various consequences of the proposal under discussion will differ in when they occur. For example, changing the labeling requirements on bags of charcoal could reduce carbon monoxide deaths years later and also result in an immediate, one-time increase in industry costs. As individuals and as a society, we generally prefer to have dollars or resources now than at some time in the future because we can benefit from them in the interim. In addition, if we acquire $1 next year instead of today, we give up the opportunity to invest that dollar and earn interest on our investment. As a result, it is generally agreed that future dollar cost and benefit streams should be reduced or “discounted” to reflect differences in timing. 
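A minimal sketch of such a discounting adjustment follows. The benefit stream, time horizon, and rates are hypothetical and are used only to show the mechanics of converting future amounts to present value.

# Hypothetical sketch: discount a stream of future benefits to present value.
# present value = sum over years t of benefit_t / (1 + rate) ** t

def present_value(amounts_by_year, rate):
    # amounts_by_year[0] is assumed to occur one year from now
    return sum(amount / (1 + rate) ** t
               for t, amount in enumerate(amounts_by_year, start=1))

annual_benefit = 100_000          # hypothetical benefit realized each year
stream = [annual_benefit] * 10    # hypothetical 10-year horizon

for rate in (0.03, 0.07):         # two illustrative discount rates
    print(f"Present value at {rate:.0%}: ${present_value(stream, rate):,.0f}")

Reporting the result at more than one rate, as in this sketch, also anticipates the rate-selection and sensitivity questions discussed below.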
The rate at which this adjustment is made is usually chosen on a case-by-case basis. Many analysts choose to use the market rate of interest—for example, the rate payable on government bonds with a time horizon comparable to the analysis. Some researchers have argued that the discount rate should be set somewhat higher because, presumably, economic growth will leave future generations better off. Others have advocated using a “social” discount rate, which is generally lower than the market rate, on the grounds that society’s interest in the welfare of future generations implies that the discount rate for all of society should be lower than the rates chosen by individuals. Experts on cost-benefit analysis generally encourage researchers to use multiple rates—both to assess the sensitivity of the results to the chosen discount rate and to facilitate comparison across studies. (We addressed issues of discounting in question 18 of table IV.1.) Ideally, proponents of extensive cost-benefit analysis would like to be able to order and prioritize regulatory interventions across agencies. A potential mechanism for this type of coordination is provided in Executive Order 12866, which requires agencies to submit to the Office of Management and Budget (OMB) cost-benefit analyses for regulations with an estimated impact on the economy of $100 million or more. OMB has suggested or prescribed certain rules (such as a presentation format and/or a specific discount rate) to facilitate comparability across agencies and to help promote quality analyses. While a common set of assumptions or parameters may facilitate comparability by a central government authority, so that priorities may be set across as well as within agencies, some of these assumptions may fit the circumstances of the given analyses less well. For example, a given discount rate may be more appropriate for one project (for which the benefits and costs are spread out over a long period) than another (with costs and benefits spread over a shorter period). Therefore, there may be a tension between making an analysis comparable to others and customizing it to fit a unique situation. (We addressed these issues in questions 10 and 18 of table IV.1.) Typically, cost-benefit analysis assumes that the intervention considered has only a small or negligible effect on relative prices throughout the economy. Generally, this assumption makes sense; however, for major changes (such as a big change in tax law) the assumption is clearly inappropriate. In these cases, it is much harder to put a value on the potential impact of the proposed change. For example, a large cut in the capital gains tax could affect a wide range of investment behavior, and may even affect how much some individuals choose to work and how much they are paid. A researcher trying to place a value on the potential consequences of a capital gains tax cut might introduce an error by valuing changes in productivity using the wage rates that prevailed before the cut, because, in reality, once the new policy has been implemented, wage rates might change. In such a situation, it is especially important for the researcher to point out the potential mismeasurement, and (if feasible) try to model its effect. (We addressed these issues in question 13 of table IV.1.) Every cost-benefit analysis will include some level of uncertainty or imprecision, or reflect a methodological choice that not everyone will necessarily agree with.
Careful analysts will identify critical sources of uncertainty or controversy, and revise or test the analysis quantitatively or qualitatively to identify how or whether these areas affect the conclusions reached. If large variations in measurement or assumption do not alter the conclusions, then the researcher and the decisionmaker can have greater confidence in the original results. However, if the conclusions of the analysis can change depending on methodological choices or variable measurement, then the researcher may want to try to improve the measurement of the original variables. If this is not possible, then the analysis may be of more limited value—an implication the decisionmaker would need to know. (We addressed these issues in question 21 of table IV.1.) Despite the importance of sensitivity analysis, even the most careful and elaborate sensitivity analysis often cannot sufficiently compensate for poor underlying data. Obviously, if key variables are defined poorly or measured imprecisely, the quality of the cost-benefit analysis will suffer. Because many variables used in research are measured from surveys or samples, sampling error may be unavoidable. The researcher can use a sensitivity analysis to test the potential for sampling error to affect the conclusions; however, if the survey instrument itself is flawed, the results may be unreliable even beyond the degree indicated by sampling error. Similarly, researchers may face a difficult problem when key data (such as the cost of compliance) are provided for the analysis by an interested party (such as the industry). If the interested party provides the data, that party may have an incentive to provide biased, inaccurate, or misleading information. In such cases, it is important to try to verify the data to the extent possible; if not, the researcher needs to at least note the source of the underlying information and allow the reader to make an informed judgment about the reliability of the final analysis. (We addressed potential data issues in questions 19 and 20 of table IV.1.) Frequently, the benefits of a particular proposal involve a reduction in risk of injury or death. In these situations, the quality of the cost-benefit analysis will also depend substantially on the estimates of the proposal’s impact on these risks. However, it is often difficult to accurately measure and value risk reduction. A number of issues arise, including the effect of the proposal on the level of risk (especially when individual behavior is a factor in determining the size of the risk) and measuring individuals’ willingness to take risks in exchange for rewards. Measuring the effect of a proposal on the level of risk can be especially difficult. Generally, it is not realistic to assume that any regulatory intervention will reduce the risk level to zero—removing an environmental carcinogen, for example, will probably reduce the number of cancer deaths, but is unlikely to eliminate cancer entirely. Considerable uncertainty may be inevitable when extrapolating a dose-response model. Sometimes large doses of a potentially dangerous substance, given over a short period of time, are used to predict the results of long-term exposure. When the model moves across species—for example, predicting cases of cancer in humans based on experiments with laboratory animals—another source of uncertainty is introduced. Measurement problems in the underlying data can make it hard to predict future risk from epidemiological data as well. 
For example, when longitudinal data are used, a “survival bias” may arise if the analysis excludes individuals who die or otherwise move out of the data set. Similarly, when some cases go unreported, the data may understate the size and/or overstate the severity of the hazard. For rare hazards, limited variation in the epidemiological data can make measurement and prediction of risk more difficult. In addition, some data may be known to produce biased estimates (because of exclusions or potential double-counting, for example). Data problems such as these (and the statistical or analytical methods used to deal with them) should be pointed out to the reader to increase the opportunity for informed judgment. (We addressed these issues in questions 3 and 19 of table IV.1.) Sometimes actions taken to reduce one risk can have the unintended effect of increasing that or another risk. For example, unforeseen consequences arose in the 1970s when CPSC issued regulations requiring children’s sleepwear to meet flammability standards. Manufacturers used a chemical called Tris to meet these standards, but later it was discovered that Tris posed a cancer risk. Changes in individual behavior can also create uncertainty in predicting the level of risk because valuations of costs and benefits of a proposed action are often based on historical behavior. For example, valuations of a proposed new highway may be based on the number of individuals currently traveling the route, their commuting times, and other factors. However, once the new highway is built, some people may make more trips in the area (shopping in different stores, for example) than they did before. Some people who had driven at off-hours to avoid congestion on the old road may now travel at peak hours on the new one. Similarly, changes in public policy may cause individuals or firms to change the technology they use, the amount of time they spend at leisure or at work, or the amount they invest in innovation. A special type of behavioral change—often referred to as “offsetting behavior”—occurs when individuals change the amount of precautions they take in response to a change in policy. For example, having an air bag in the car or being required to wear a motorcycle helmet might make some drivers feel safer, so they exercise less caution on the road. Sometimes this offsetting behavior can result in an increase in the risk the intervention was designed to mitigate. For example, because child-resistant medicine bottles can be difficult to open, a grandparent may leave the cap off the bottle, creating an even greater risk than would be posed with a non-child-resistant cap. If such changes in consumer behavior are foreseeable, the analysis will be improved if the researcher points out such possibilities, and it will be even more useful if reasonable attempts can be made to measure the impact. (We addressed these factors in question 14 of table IV.1.) Measuring the benefit of risk reduction requires placing a value on avoiding death and/or injury. Several approaches are available for this task, but two have been used in cost-benefit analysis: (1) the “human capital” approach, in which death or injury is valued at the market value of the lost production it causes plus the medical costs expended, and (2) the “willingness-to-pay” approach, which attempts to measure directly individuals’ willingness to pay for reducing the risk of death or injury.
In early cost-benefit analysis, the human capital approach was used, largely because lost wages and medical costs were relatively easy to measure. However, this approach is not preferred today because of a number of shortcomings. First, if this approach were taken literally to apply to individuals, then persons who do not produce output in the marketplace—such as the elderly or homemakers—would not be valued at all, an assumption that is clearly ethically and economically inappropriate. In addition, the human capital approach is unable to take into account costs to the individual of death or injury such as pain and suffering. Also, it is difficult to apply an “average” value based on human capital calculations to a statistically anonymous member of a large group. Finally, we are all more valuable than the sum of what we produce, and this additional value could not be included in a human-capital-based formulation. Thus, human capital measurements are generally viewed as a lower boundary on the value of avoiding death and injury, and are less preferred than the more recent willingness-to-pay measures. In part because of dissatisfaction with human capital measures as used in cost-benefit analysis, and facilitated by newly available large micro-level data sets, economists now measure the benefits of reducing death or injury by calculating the consumer’s willingness to pay for small reductions in the probability of injury or death. These calculations have been done in several ways. Some approaches attempt to glean willingness-to-pay measures from observed behavior in the marketplace. For example, one approach examines pay differentials for jobs with different risks. Another approach looks at the payoff to consumers who purchase safety devices, such as smoke detectors and air bags in automobiles. These measures have the advantage of being based on actual observed behavior in the marketplace rather than on an artificial experimental situation. However, the validity of such measures is based on the questionable assumption that workers and consumers are sufficiently knowledgeable about the risks they face and the potential of different occupations or safety devices to alleviate those risks. For many purposes, practitioners of cost-benefit analysis can select an appropriate value from the range of research already done in this field without performing the actual analysis themselves. However, for other analyses, especially those involving unique risk-taking situations, it may be wiser to gather new data to construct an estimate that is based on circumstances as close as possible to those being studied. Another common method for valuing risks is known as contingent valuation, in which such values are elicited by observing responses or behavior on a survey or in a controlled experiment. For example, researchers surveyed individual shoppers on how much of a premium they would be willing to pay for pesticide-free grapefruit. These methods have the advantage of being able to provide information on areas that cannot be addressed with market data. However, this characteristic could also be a weakness—the very artificiality of the situation could lead the consumer to make a less deliberate choice or could limit the usefulness of applying this measure to other situations. This method does entail some technical requirements—for example, it may be useful to perform statistical tests on the distributional assumptions when constructing contingent valuation measures. (We addressed these issues in questions 15 through 17 of table IV.1.)
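The valuation and sensitivity issues discussed in this appendix can be combined in one minimal sketch. The exposed population, risk reduction, compliance cost, and the range of values per statistical life below are hypothetical illustrations; they are not figures drawn from CPSC analyses or from the valuation literature.

# Hypothetical sketch: value a reduction in the risk of death using a
# willingness-to-pay-based value per statistical life, then test how
# sensitive the conclusion is to that value.

exposed_population = 2_000_000        # hypothetical number of product users
risk_reduction_per_person = 1e-6      # hypothetical drop in annual fatality risk
annual_cost = 8_000_000               # hypothetical annual compliance cost

statistical_lives_saved = exposed_population * risk_reduction_per_person

for value_per_life in (3_000_000, 5_000_000, 9_000_000):
    annual_benefit = statistical_lives_saved * value_per_life
    verdict = "benefits exceed costs" if annual_benefit > annual_cost else "costs exceed benefits"
    print(f"Value per statistical life ${value_per_life:,}: "
          f"benefit ${annual_benefit:,.0f} -> {verdict}")

Because the verdict flips within the range of values tried, a decisionmaker reading such an analysis would know that the choice of valuation, not the arithmetic, is doing the work.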
In order to target its resources and analyze the costs and benefits of projects or potential projects, CPSC must obtain data on injuries and deaths related to particular product hazards. CPSC relies on a patchwork of independent data systems to address this need. However, not only does each of these data sources have its own internal limitations, but together CPSC’s data sources present an incomplete—and potentially distorted—picture of consumer-product-related injuries and deaths. The implications of this lack of data range from reducing CPSC’s ability to apply regulatory project selection criteria to limiting the agency’s ability to estimate the impact of its regulatory actions.

CPSC obtains most of its injury information from its National Electronic Injury Surveillance System (NEISS), which gathers information from hospital emergency room records. NEISS provides national estimates of the number and severity of emergency-room-treated injuries associated with, although not necessarily caused by, consumer products in the United States. To accomplish this, a stratified probability sample of hospitals is drawn that is representative of all hospitals with emergency departments in the United States and its territories. CPSC constructs national estimates of the number of injuries associated with individual consumer products on the basis of reports from this sample of 101 hospitals.

NEISS data result from information abstracted from hospital emergency room records by coders trained by CPSC staff. The data are coded and entered, at each site, into a personal computer programmed for this purpose. The software checks the data entries for consistency. The data collected at each site are transferred nightly to a permanent central CPSC database. Data about product-related injuries are available to CPSC staff within 72 hours after the accident for most of the injuries reported. These daily inputs are reviewed by CPSC staff for quality and to identify possible emerging hazards. The timeliness of the data also allows staff to observe seasonal or episodic variations. For example, during a 30-day period surrounding the Fourth of July in 1990, CPSC gathered extra data through NEISS for a special study on injuries involving fireworks.

The unit of analysis for NEISS is the injured person. Other key characteristics coded include the date of treatment, age and gender of the patient, injury diagnosis, body part affected, disposition of the case, product involved, and accident location. In addition, important details about the injury and the injured person are provided in NEISS. For example, the address and phone number of the injured person are included, permitting follow-up investigations about the nature and cause of the injury. There are about 900 product codes, ranging from abrasive cleaners to youth chairs, which the coders use to specify the product involved. Consumer products are coded to allow for a great deal of specificity in the estimates. For example, a “hand saw” would be differentiated from a “portable circular power saw,” and a “bicycle-mounted baby carrier” would be specified differently from a “backpack baby carrier.” However, the NEISS coding system for describing the diagnosis of the primary injury and the body part injured is not very specific, and because the system is unique, its data cannot be directly compared with similar data from other databases.
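The way national estimates are constructed from this stratified hospital sample can be shown with a minimal sketch. The strata, case counts, and statistical weights below are hypothetical and are not actual NEISS values; the point is only that each sampled case is multiplied by a weight reflecting how many cases nationwide it represents.

    # Illustrative sketch; the strata, counts, and weights are hypothetical, not NEISS data.
    sampled_cases = [
        # (hospital stratum, emergency-room cases involving the product, statistical weight)
        ("small hospitals",      12, 95.0),
        ("medium hospitals",      7, 60.0),
        ("large hospitals",       3, 25.0),
        ("children's hospitals",  5, 40.0),
    ]

    # National estimate: weighted sum of the cases reported by the sampled hospitals.
    national_estimate = sum(cases * weight for _, cases, weight in sampled_cases)
    print(round(national_estimate))

For a rare hazard, the unweighted case counts in the sample are small, so the weighted national estimate rests on very few observations; this is one reason the estimates can be statistically unstable for infrequent injuries and cannot support state or regional breakdowns, as discussed below.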
Despite the extent of valuable information provided by NEISS, this system also has significant limitations. One important consideration relates to the nature and size of the NEISS sample design. NEISS, throughout its history, has been designed so that only national—not state, local, or regional—estimates can be made. Thus, NEISS cannot detect interregional or interstate differences, and it may also be limited in its ability to identify emerging product hazard patterns that are concentrated in specific states or regions.

A limitation in the NEISS data for which CPSC has received criticism is the lack of information that would assist in assessing causality—that is, information that would establish (or provide a starting point for establishing) whether the product in question caused the accident or merely was involved in the accident. NEISS contains neither “N” codes—which describe the nature of the injury—nor “E” codes—which help explain how the injury happened. E codes briefly describe the circumstances of the accident that produced the injury. For example, E codes could help distinguish among falls that occurred on stairs, from a ladder, or off a roof. NEISS’ short narrative section sometimes contains this type of information, but NEISS does not generally provide much information on the circumstances surrounding the product’s involvement.

CPSC augments the NEISS injury reports with other sources of anecdotal information. For example, CPSC maintains an Injury or Potential Injury Incident (IPII) database of reports the agency receives about injuries or incidents involving consumer products. These reports come from a variety of sources, including news clips; consumer complaints, including calls to CPSC’s hot line; and public reports of product liability suits. Although the IPII can provide only anecdotal information, this database sometimes contains more detailed information than can be found in NEISS. For example, IPII records can contain specific information about the product involved—such as the manufacturer and date of purchase—that is generally not found in NEISS. The IPII also serves as a source for cases to investigate. This is particularly important for hazards for which NEISS provides relatively few cases.

CPSC has also purchased data on poisonings in the form of the American Association of Poison Control Centers (AAPCC) database. This database is composed of reports from approximately 65 poison control centers throughout the country. Reports from the AAPCC database contain such information as the number of phone calls received by participating centers concerning possible ingestion of a product and the number of individuals who reported experiencing symptoms related to the ingestion. But these reports contain little other information—for example, they do not show how much of the poison was ingested.

Although NEISS provides some information on fire-related injuries, CPSC obtains additional data on fires from the National Fire Protection Association (NFPA) and the U.S. Fire Administration. NFPA, a private organization, conducts an annual survey of fire departments that is designed to make statistically valid national estimates of the total number of fires experienced each year. However, the NFPA survey does not collect much detailed information about the characteristics of individual fires. To augment the NFPA information, CPSC relies on the more detailed information provided in the National Fire Incident Reporting System (NFIRS), which is compiled by the U.S. Fire Administration.
NFIRS data can provide information on the ages of the victim or victims and the type of dwelling (apartment versus single-family home, for example). However, these data are available only with a long lag; a CPSC official we interviewed in October 1996 told us that the most recent data he had available at that time were from 1993.

CPSC obtains the majority of its information on fatalities by purchasing death certificates from the states and culling information from them to determine which deaths were related to consumer products. (Like the NEISS emergency room reports, the death certificates establish only involvement—not causality—of a consumer product.) CPSC does not purchase the complete annual set of death certificates from each state; instead, the agency buys death certificates according to selected E codes. CPSC purchases about 8,000 death certificates annually; of these, approximately 50 percent are related to a consumer product. Over the years, in response to declining budgets, CPSC has reduced even further the number of E codes for which it purchases certificates.

Death certificates include the date, place, and cause of death and the age, gender, race, and residence of the deceased. Although death certificates cannot indicate whether the consumer product was “at fault” in a death, they do provide some information on the underlying circumstances. For example, the E code would specify that a person was killed in an accident caused by electric current. Of course, the quality of the data depends on the care with which causation is determined and reported. It is likely that this differs from locality to locality as well as among individual doctors. However, there are some objective indications that the quality of reporting causes of death has been improving; for example, the proportion of cases that are categorized under ill-defined conditions has been falling. Although death certificates do not constitute a sample of known probability or a complete count of fatalities, and thus statistically reliable estimates cannot be made, the geographic information contained in death certificates may help to identify state and regional patterns as well as those that are national in scope.

However, death certificate data also have substantial limitations. First, there is an extensive lag—usually 2 years—before data become available. Therefore, death certificates are not very useful for identifying emerging hazards in a timely manner. Furthermore, a number of factors contribute to the rather sketchy causal information they provide. For example, details are quite limited, which inhibits determining the degree to which a consumer product was involved in the death, let alone whether the product was defective or hazardous. Certificates are frequently completed without the benefit of autopsy information to establish the precise cause of death, either because an autopsy was not performed or because the certificate was filled out before the autopsy took place. There is substantial variation in coding practices and the level of detail available from state to state. The information available about the injured individual is very limited, and the number of cases for most categories is very small; such data therefore limit the kinds and amount of analyses CPSC can perform and the conclusions it can draw. CPSC augments its death certificate data with other sources where possible.
For example, CPSC has instituted a Medical Examiners’ and Coroners’ Alert Project (MECAP), to provide more timely fatality information that lends itself to follow-up investigations. CPSC has engaged some 100 coroners and medical examiners from across the country to report potential product-related hazards. This project produces about 2,000 reports annually, and CPSC staff credit the coroners’ reports with alerting them to a suffocation hazard concerning infant cushions, which eventually led CPSC to recall existing products and ban future production. In addition to the MECAP data, CPSC also records some reports of fatalities in the IPII, NEISS, and NFIRS databases. In order to effectively target resources, identify hazard patterns, and determine the appropriate remedy for particular product hazards, CPSC needs detailed analyses of the causes of reported incidents. CPSC’s data sets generally do not provide such information. The NEISS data do not include E codes (standardized, if brief and incomplete, descriptions of incident circumstances), nor do they include detailed information about how the incident happened. Some information may be provided in the short, free-text comment area of the NEISS report, but generally few such details are recorded. Death certificates do include E codes, but product involvement is often difficult to ascertain. As a result, CPSC staff perform follow-up investigations on selected cases to develop additional information about each incident. Some of these investigations are conducted entirely by telephone, while others are conducted at the accident site. These investigations may include detailed interviews with victims and/or witnesses, police or fire reports, photographs of the product and/or the accident site, laboratory testing of the product involved, or re-creations of the incidents. For example, in 1996 CPSC staff investigated an incident in which a baby’s leg was scratched as it was caught between the slats of her crib. As part of the on-site investigation, the CPSC investigator interviewed the child’s mother, examined the child, examined and photographed the crib, and interviewed staff at the store where the crib was purchased. The CPSC staff we interviewed told us that investigations, particularly on-site investigations, were an important source of information on established projects. The additional detail these investigations gathered helps determine causality and identify hazard patterns, leading analysts to the appropriate remedies. For example, investigations revealed that very few bicycle accidents were related to mechanical problems, and as a result, CPSC staff decided not to recommend any changes to existing bicycle standards. In addition, investigations may provide key evidence to help identify and correct compliance problems. For example, the investigator who reviewed the crib incident found that the crib in question appeared to violate several mandatory safety standards. In order to produce a numerical risk assessment, CPSC must have some information on the extent to which individuals are exposed to a particular product hazard. Exposure information can take many different forms, and the best measure of exposure will depend on the characteristics of the particular product hazard. For example, one measure of exposure might be the number of products in use, while another measure might be the number of hours a person spends using the product, and another measure might take into account the intensity with which a product is used. 
If a product is for one-time use only and is usually used soon after it is sold (such as fireworks), the number of products sold might be a reasonable proxy for exposure. However, when a product is more durable, is used over a longer period, and is often “handed down” to another user, such as a baby high chair, it would probably be more reasonable to base estimates of exposure on the number of products in use. In cases where the potential hazard is especially ubiquitous (like air pollution, for example), population measures (the number living near the source or the number of children under 5, for example) may be reasonable. For some hazards, it may be important to account for the intensity of use. For example, the probability of developing cancer from exposure to a wood stove may depend on how often the stove is used, how large a space it heats, and how many times the stove door is opened to add wood. Similarly, bicycles could be used very frequently and very intensely (every day in urban traffic) or infrequently and not intensely (once a season on the bike path in the park); thus, a good exposure measure would take such factors into account.

CPSC does not conduct a formal, numerical risk assessment for each project it undertakes. Of the 115 CPSC projects we reviewed, only 24 included a numerical assessment of risk. CPSC most frequently relied on estimates of products in use to provide the exposure information for its risk assessments. In 65 percent of the cases where an epidemiological risk assessment was performed, CPSC based its exposure measure on an estimate of the number of products in use. In an additional 5 percent, CPSC obtained information on the actual number of products in use. In 30 percent of cases, CPSC used a population-based measure, and in 10 percent of cases, CPSC used a sales measure. We did not evaluate whether the exposure measure CPSC chose was appropriate for each case.

CPSC conducted special surveys on bicycles and on cigarette lighters and matches, for instance, to develop exposure information. The survey questions were designed to obtain information on the number of products in the home, the intensity of use, the characteristics of the households using the product, and the usual patterns of use. For example, the bicycle survey included questions on the number of hours spent biking; the ages, education, and income of household members; whether riding was done most often on streets, sidewalks, or bike paths; and whether riders used helmets.

Where special surveys are not practical—because of time, resource, or other limitations—CPSC sometimes uses mathematical modeling techniques to estimate the number of products in use. These models combine sales information with information on the life of the product to estimate the number of units still in use. For example, CPSC used such a model to estimate how many of the portable heaters made before a 1991 revision to a voluntary standard were still in use.
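The logic of such a products-in-use model, and of the exposure-based risk rate it supports, can be shown with a minimal sketch. The sales figures, survival rate, and injury count below are hypothetical, not CPSC data, and the simple geometric attrition assumption stands in for whatever product-life information an actual analysis would use.

    # Illustrative sketch; sales, survival rate, and injury counts are hypothetical.
    annual_sales = {1988: 200_000, 1989: 250_000, 1990: 300_000, 1991: 280_000}

    def units_in_use(sales_by_year, current_year, annual_survival=0.80):
        """Discount each year's sales by the chance a unit of that age is still
        in service (a simple geometric attrition model of product life)."""
        return sum(units * annual_survival ** (current_year - year)
                   for year, units in sales_by_year.items())

    in_use = units_in_use(annual_sales, current_year=1992)
    estimated_injuries = 450                     # e.g., a weighted injury estimate
    injuries_per_10000_units = estimated_injuries / in_use * 10_000
    print(round(in_use), round(injuries_per_10000_units, 1))

A richer model would replace the single survival rate with product-specific life data, and, where intensity of use matters, the denominator would be hours or occasions of use rather than units.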
CPSC may also receive information about potential product hazards through industry reporting requirements. Although this information is used mostly for identifying and addressing compliance problems, it may also help identify new hazards. Companies are legally obligated to report to CPSC information they receive that indicates a consumer product they distribute is potentially hazardous. Under section 15 of the Consumer Product Safety Act, manufacturers (including importers), distributors, and retailers of consumer products must notify CPSC if they obtain information that a product (1) fails to comply with a consumer product safety regulation or a voluntary consumer product safety standard, (2) contains a defect that could create a substantial product hazard, or (3) creates an unreasonable risk of serious injury or death.

In addition to these reporting requirements, manufacturers and importers of a consumer product must report to CPSC if (1) a particular model of a consumer product is the subject of at least three civil actions that have been filed in federal or state court, (2) each suit alleges the involvement of that model in death or grievous bodily injury, and (3) within a specified 2-year period at least three of the actions resulted in a final settlement involving the manufacturer or importer or in a judgment for the plaintiff. Manufacturers must file a report within 30 days after the settlement or judgment in the third such civil action.

CPSC may receive requests for information that firms have reported to it under section 15 requirements. The law limits CPSC’s disclosure of any information identifying a manufacturer and further limits CPSC’s release of information that firms have provided under these requirements. Reports on civil lawsuits may not be publicly disclosed by CPSC or subpoenaed or otherwise obtained from CPSC through discovery in any civil action or administrative procedure.

The following are GAO’s comments on Chairman Brown and Commissioner Moore’s letter dated July 24, 1997.

1. As we discuss in the report, CPSC relies on a number of different sources for injury incident data. (Descriptive information on all of CPSC’s data sources is provided in app. V.) However, only one source—the NEISS system—is capable of providing statistically reliable, representative information, and the NEISS system covers only those injuries treated in hospital emergency rooms. Although CPSC obtains information from other sources, these data are anecdotal, and their usefulness for estimating injury prevalence is therefore very limited. As a result, we have recommended that CPSC consult with experts both inside and outside the agency to prioritize its additional data needs and to explore the feasibility of options for obtaining these data. Changes over the past decade in the health care market, including the growth of ambulatory care, changes in reimbursement procedures, and improved health services research, have the potential to make such additional data collection more feasible and less costly than it was a decade ago when some of these assessments were last made.

2. Figure 3 correctly states that CPSC does not obtain systematic surveillance data on injuries treated outside the emergency room. In the text of the report, we state that the number of injuries treated in each setting is unknown. We added a similar statement to the figure to emphasize this point.

3. In several places in their comments, Chairman Brown and Commissioner Moore refer to CPSC’s investigations of selected incidents to obtain more information. We discussed these investigations in our report. We believe that these investigations provide valuable information on causation, characteristics of accident victims, and hazard patterns, and we agree that some information gleaned from investigations is not obtainable from surveillance data.
However, according to CPSC staff, this information is available to the agency only after a project is well under way, not at the initial stage of project selection. As a result, CPSC has little information on these important factors to assist in project selection, and evaluation of these criteria at the project selection stage is thus unavoidably speculative.

4. Descriptive information on this data source is provided in appendix V.

5. For reasons we discussed in the report, relying on aggregate data (data not specific to a particular consumer product) to address the limitations of CPSC’s surveillance data is problematic. In addition, in the briefing packages we reviewed, the estimates of injury incidents usually referred only to NEISS cases and were not extended with modeling techniques; because these techniques were used so infrequently for this purpose, we did not assess their application in this report.

6. In referring to the agency’s monitoring of other aggregated sources of injury data, Chairman Brown and Commissioner Moore state that “Tracking of this type is sufficient to assure the adequacy of our data, as we use it.” We disagree. Examining trend information from other sources (without a rigorous application to CPSC’s own specific needs) is not sufficient to compensate for (or even measure the magnitude of) the limitations of CPSC’s injury surveillance data.

7. We believe this comment may reflect a misunderstanding of our point. We do not mean to suggest that CPSC incorrectly relies on NEISS to provide information on chronic illnesses. Rather, we are pointing out that CPSC has virtually no systematic data on chronic illnesses. As we stated in the report on page 20, we agree with Chairman Brown and Commissioner Moore (and with the CPSC staff we interviewed) that such data are often difficult to obtain. However, chronic illness is listed as a criterion for CPSC project selection, and CPSC has little information to assist in applying this criterion. Accordingly, we are pleased with the statement by Chairman Brown and Commissioner Moore that CPSC will consider additional surveillance methods to obtain more information on chronic illnesses.

8. We acknowledge in the report that CPSC uses other sources to supplement its death certificate data. However, our interviews with CPSC staff and our review of agency documents confirmed that death certificates are the most important source of death data for CPSC. In the briefing packages we reviewed, 80 percent of the calculations for numbers of deaths were based in whole or in large part on death certificate data, and death certificates were the sole source of death information in 36 percent of all CPSC briefing packages (a far greater percentage than for any other single source). Death data were usually reported in CPSC briefing packages with a lag of 2 or more years, especially when death certificates were the sole source of data.

9. We agree with Chairman Brown and Commissioner Moore that there is no single mold or perfect set of criteria for evaluating a cost-benefit analysis, and we have revised the report to further emphasize this point. We believe that Chairman Brown and Commissioner Moore may have misunderstood the purpose of our evaluation questions. We did not “derive a particular methodology” for cost-benefit analysis, nor do we mean to suggest that CPSC should follow some “formula” for conducting analyses that does not leave room for competent professional judgment.
However, to say that there is no perfect “formula” for cost-benefit analysis does not imply that all methodological choices are equally consistent with rigorous and comprehensive professional work. Although no litmus test exists for a “good” analysis, the professional literature offers some basic, minimum elements that are commonly used in evaluating cost-benefit analyses. These elements, which are based on the principles of transparency and completeness, are generally considered necessary, although not sufficient, for a good analysis. Although flexibility may sometimes be necessary in the assumptions or models underlying a cost-benefit analysis, the elements we used—including full disclosure of data limitations, sensitivity analysis, and incorporating all important costs and benefits—are appropriate to a wide range of situations.

10. Chairman Brown and Commissioner Moore are correct in stating that we evaluated 29 CPSC cost-benefit analyses that were completed between January 1, 1990, and September 30, 1996. We identified these 29 analyses as complete on the basis of CPSC’s statements—specifically, we considered a cost-benefit analysis to be complete only if an explicit comparison was made between aggregate costs and benefits. In addition, although the analyses may have been prepared at different stages of the project, we based our review on all available documentation on the project, and we reported results only for the evaluation questions that applied to the majority of cases and for which a clear determination could be made. We did not report results separately for the eight regulatory analyses that were required by law, because there were relatively few of these. However, we found no substantial differences between these 8 and the remaining 21 in how they performed against the commonly used elements for evaluating cost-benefit analyses. Therefore, we are confident that our results present an accurate assessment of CPSC’s cost-benefit analyses, and we recommend that the agency implement changes to ensure that its analyses are comprehensive and reported in sufficient detail.

11. We believe that, as now constructed, CPSC’s method for tracking projects operates at too high a level of generality and provides too little information to give a comprehensive, accurate picture of the agency’s activities either at a given point in time or over a longer period. CPSC staff told us, and our review of agency documentation confirmed, that the Management Information System (MIS) usually tracks most agency activities only at a very general level. For example, CPSC’s 1996 year-end MIS report lists some specific projects such as “upholstered furniture” and “range fires,” but most projects are accounted for under either broad umbrella codes such as “sports and recreation” or “children’s projects,” or under activity codes such as “investigations,” “product safety assessment,” or “emerging problems.” In addition, CPSC staff told us that reliable inferences on resources spent cannot be drawn from MIS data because of limitations in the computer system and because no consistent rule exists about how staff time in different directorates is recorded to project codes. As a result, CPSC staff were unable to generate a comprehensive list of projects or to provide accurate information about resources allocated to those projects. We recommend an improved tracking system that would provide enough information to monitor the projects selected and resources spent for each specific consumer product hazard.
We believe that as CPSC develops its planned accounting system, it should attempt to make it as compatible as practicable with the recommended tracking system. Nevertheless, we believe that whether or not it implements its planned accounting system, CPSC can and should improve its ability to track projects.

12. Our interviews with present and former commissioners revealed a pattern by which certain of CPSC’s regulatory criteria have historically been given greater emphasis in CPSC’s project selection process. Our objective was to describe the process as it was related to us; we have not taken a position on whether this process is appropriate. We have added a statement to our methodology section to emphasize this point.

The following are GAO’s comments on Commissioner Gall’s letter dated July 23, 1997.

1. Our interviews with present and former commissioners revealed a pattern by which certain of CPSC’s regulatory criteria have historically been given greater emphasis in CPSC’s project selection process. Our objective was to describe the process as it was related to us; we have not taken a position on whether this process is appropriate. We have added a statement to our methodology section to emphasize this point.

2. Commissioner Gall states that we did not analyze the relative importance of the deficiencies we found in CPSC’s data and methodology. However, as we stated in the report, available information does not permit us to determine the impact of better-quality data on the decisions CPSC made. The limitations we found in CPSC’s data have a variety of potentially conflicting impacts, precluding us from determining exactly how the results of the analysis might change if improved data were available. For example, because CPSC’s injury estimates are often confined to injuries treated in hospital emergency rooms, CPSC’s estimates will generally understate the actual number of injuries associated with a consumer product. However, CPSC’s systematic injury and death data can generally tell only whether a product was involved in an accident—not whether the product caused or contributed to the accident. As a result, this can make the risks assessed by CPSC appear larger than they might actually be. Similarly, we cannot determine how improved exposure data would change the relative importance of the risks assessed by CPSC. We agree with Commissioner Gall that not all projects will merit the same level of data or analysis. However, our review of CPSC raises questions about the agency’s ability to obtain and analyze data necessary to support rigorous analysis of important agency projects.

3. We agree with Commissioner Gall that resource considerations should enter into CPSC’s decisions to undertake new data collection. For this reason, we recommended an overall feasibility study for CPSC to prioritize among its data needs and investigate new options for obtaining additional information.

In addition to those named above, the following individuals made important contributions to this report: Sheila A. Nicholson, Analyst, gathered and analyzed data on CPSC’s information release procedures; Nancy K. Kintner-Meyer, Senior Evaluator, compiled and analyzed information on CPSC projects; George Bogart, Senior Attorney, provided legal assistance; Harold Wallach, Senior Analyst, assisted in the analysis of CPSC’s data systems; and Charles Jeszeck served as Assistant Director for the project in its early stages.
Cambridge, Mass.: Harvard University Press, 1987. Waller, Julian A. “Reflections on a Half Century of Injury Control.” American Journal of Public Health, 84 (4) (Apr. 1994), pp. 664-70. _____, Joan M. Skelly, and John H. Davis. “Treated Injuries in Northern Vermont.” Accident Analysis and Prevention, 27 (6) (1995), pp. 819-28. Warr, Peter G., and Brian D. Wright. “The Isolation Paradox and the Discount Rate for Benefit-Cost Analysis.” Quarterly Journal of Economics, 96 (1) (Feb. 1981), pp. 129-45. Weinstein, M.C., and others. “Recommendations of the Panel on Cost-Effectiveness in Health and Medicine.” Journal of the American Medical Association, 276 (15) (1996), pp. 1253-8. Weisbrod, Burton A. “Benefit-Cost Analysis of a Controlled Experiment: Treating the Mentally Ill.” Journal of Human Resources, 26 (4) (1981), pp. 523-48. Weiss, K.B., P.J. Gergen, and T.A. Hodgson. “An Economic Evaluation of Asthma in the United States.” New England Journal of Medicine, 326, pp. 862-6. Wheelwright, Jeff. “The Air of Ostrava: Pollution and Risk Assessment in the Czech Republic.” Discover, 17 (5) (May 1996), pp. 56-64. Whittington, Dale, and Duncan MacRae, Jr. “Comment: Judgments About Who Has Standing in Cost-Benefit Analysis.” Journal of Policy Analysis and Management, 9 (4) (1990), pp. 536-47. Wildasin, David E. “Indirect Distributional Effects in Benefit-Cost Analysis of Small Projects.” The Economic Journal, 98 (Sept. 1988), pp. 801-7. Williams, Alan. “Cost-Benefit Analysis: Applied Welfare Economics or General Decision Aid?” Alan Williams and Emilio Giardina. Efficiency in the Public Sector. Aldershot, Hants, England: Edward Elgar Publishing, 1993, pp. 65-82. Wood, William W. “Cost-Benefit Analysis of Small Business Assistance: Do Entrepreneurs Really Need ’Assisting’?” Journal of Private Enterprise, 10 (1) (Summer 1994), pp. 13-21. Zeckhauser, Richard J., and W. Kip Viscusi. “The Risk Management Dilemma.” The Annals of the American Academy of Political and Social Science. Eds. Howard Kunreuther and Paul Slovic. Vol. 545 (May 1996). Philadelphia: Sage Periodicals Press, pp. 144-55. Zerbe, Richard O. “Comment: Does Benefit Cost Analysis Stand Alone? Rights and Standing.” Journal of Policy Analysis and Management, 10 (1) (1991), pp. 96-105. The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are accepted, also. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office P.O. Box 37050 Washington, DC 20013 Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. General Accounting Office Washington, DC Orders may also be placed by calling (202) 512-6000 or by using fax number (202) 512-6061, or TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists. 
Pursuant to a congressional request, GAO reviewed the Consumer Product Safety Commission's (CPSC) project selection, use of cost-benefit analysis and risk assessment, and information release procedures, focusing on: (1) the criteria CPSC uses to select projects and the information it relies upon in making these choices; (2) the information CPSC draws on to perform risk assessment and cost-benefit analyses and CPSC's methodology for conducting cost-benefit analyses; and (3) CPSC's procedures for releasing manufacturer-specific information to the public and whether evidence exists that CPSC violated its statutory requirements concerning the release of such information. GAO noted that: (1) although CPSC has established criteria to help select new projects, with the agency's current data these criteria can be measured only imprecisely, if at all; (2) although CPSC has described itself as "data driven," its information on product-related injuries and deaths is often sketchy; (3) this makes it more difficult not only for agency management to monitor current projects but also for staff and commissioners to assess and prioritize the need for new projects in different hazard areas; (4) CPSC has insufficient data on both internal agency efforts and external product hazards to assess the impact and cost of each project; (5) to help evaluate alternative methods of addressing potential hazards, CPSC may perform a risk assessment to estimate the likelihood of injury associated with a hazard or conduct a cost-benefit analysis to assess the potential effects of a proposed regulation; (6) although CPSC does not complete either a risk assessment or cost-benefit analysis for every project, the agency conducts these analyses more often than it is required to by law; (7) nevertheless, CPSC's data are often insufficient to support a thorough application of these analytical techniques; (8) to evaluate relative risks, it is usually necessary to have information on how many consumers use the product--information that CPSC frequently does not have; (9) risk assessment of consumer products requires measurement of the number of harmful incidents; (10) CPSC's imprecise and incomplete death and injury data make risk assessment and cost-benefit analysis at best less reliable and at worst impossible to do; (11) the cost-benefit analyses conducted by CPSC between 1990 and 1996 were often not comprehensive; (12) CPSC has established procedures to implement statutory requirements concerning the release of manufacturer-specific information; (13) when releasing information to the public that identifies a specific manufacturer, CPSC is required to verify the information and allow the manufacturer an opportunity to comment; (14) evidence from the industry and from legal cases suggests that CPSC has met its statutory requirements in this area; (15) individuals within CPSC, as well as some industry representatives and consumer groups, expressed dissatisfaction with the requirements of this law; and (16) some of these individuals have proposed statutory changes that range from reducing to expanding the current requirements.
USAID has provided foreign aid since 1961 in an effort to improve countries’ economies, health, environment, and democratic processes. In times of crisis, the agency has also provided humanitarian assistance to those in need. USAID currently has 72 overseas missions and offices that manage projects associated with this foreign assistance. These projects are generally implemented by private voluntary organizations, nongovernmental organizations, international agencies, universities, and contractors. Since the end of the Cold War, Congress has reduced USAID’s appropriations, citing other funding priorities as well as basic questions about the effectiveness of foreign aid. USAID’s fiscal year 1996 obligations of $5.7 billion were 13 percent less than those in the previous year (see app. I for information on USAID obligations for fiscal years 1995 through 1997). To accommodate budget reductions, USAID cut its total workforce from 11,150 in September 1993 to 7,609 in June 1997 and will have closed 24 missions by the end of fiscal year 1997. Under the executive branch’s decision to consolidate the Department of State, the Arms Control and Disarmament Agency, and the U.S. Information Agency, the USAID Administrator will report directly to the Secretary of State rather than, as previously, to the President. Congress specifically directs how USAID spends much of its funding, earmarking substantial amounts of USAID’s direct appropriation for development assistance for specific purposes, such as child survival. Congress also earmarks funds for economic support and food assistance, which are administered by USAID. Congressional earmarks and executive branch directives (e.g., for the population program) accounted for 59.8 percent of development assistance funds in fiscal year 1995, 66.5 percent in fiscal year 1996, and 69.5 percent for fiscal year 1997 (see app. II for more details on directed and undirected assistance). In 1993, in response to criticisms about USAID’s inefficient processes and inability to demonstrate a significant impact on developing countries and some calls for the agency’s abolition, USAID’s Administrator volunteered the entire agency as a “reinvention laboratory.” USAID established five strategic goals to meet its agency mission of pursuing sustainable development in developing countries: (1) achieving broad-based economic growth, (2) building democracy, (3) stabilizing world population and protecting human health, (4) protecting the environment, and (5) providing humanitarian assistance. According to USAID, reengineering the agency’s overseas operations has revolved around (1) increasing its customer focus, (2) managing for results, (3) enhancing staff participation and teamwork, (4) empowering and increasing the accountability of staff, and (5) valuing diversity. The reengineering included reorganizing mission organizational structures, eliminating unnecessary administrative requirements, and including the participation of development partners and recipients in program planning. In 1994, USAID selected 10 missions to test these reengineering principles, concepts, and approaches prior to deployment worldwide. Based on observations of the experiments and feedback from the participating missions, USAID initiated full implementation of its reengineering approach on October 1, 1995. 
In fiscal year 1996, USAID deployed the New Management System, which was intended to provide an integrated database of financial and program information to facilitate the management of resources and monitoring of results. (See app. III for a chronology of USAID reforms.) USAID’s Administrator has noted that the agency’s reforms are consistent with the Government Performance and Results Act of 1993 (GPRA) and have positioned USAID well to meet the act’s requirements for strategic planning and performance monitoring. Overseas missions have changed their processes to streamline the way they provide assistance in support of USAID’s strategic goals. These changes include taking a team approach to management, accepting increased accountability through the authority and flexibility given them by USAID, and increasing the participation of development partners in the design and implementation of projects. USAID has supported the missions’ reform efforts with information and advice. Mission officials believe that these changes will result in more effective assistance programs. Moreover, our review indicated that the changes show promise that USAID is making progress in better managing the process of providing assistance. We could not fully assess the impact of the missions’ operational changes for the following reasons: (1) missions are at varying stages of implementing changes; (2) the impact of these changes may not be seen for several years, especially those that affect the design of new projects; and (3) the missions we visited had not established baseline information on the efficiency or effectiveness of their management operations before reengineering their processes and therefore could not demonstrate many measurable improvements. In some missions, officials indicated that some baseline data could be compiled from historical records but that doing so would demand an inordinate amount of scarce mission resources. Overseas missions we visited have reengineered their organizational structure to focus on achieving strategic objectives. Most missions have replaced their former office structure with a team approach to management, and each team is responsible for one of the mission’s strategic objectives. Team members at the missions we visited commented that the team approach has improved the management of their activities. USAID said that overseas missions have observed that teams are now more likely to (1) remain focused, (2) rethink tactics quickly when activities do not go as planned, and (3) terminate activities that only marginally contribute to results. Prior to reengineering, missions had several offices: technical offices, such as the education, health, and agriculture offices, which had day-to-day responsibility for implementing assistance projects, and support offices, such as the contracts, controller, and legal adviser’s offices, which assisted the technical offices. According to mission staff, in the past, each of these offices had its own management hierarchy and separately reviewed proposed actions such as expenditure requests or new project starts. This review was often done sequentially, and as each office raised problems or issues, action was delayed until they could be resolved; the proposal was then passed to the next office for review. The four missions that we visited had reorganized their offices into teams to manage assistance activities that support each mission’s strategic objectives. 
Staff members that previously worked for different technical and support offices now work together on a team toward a specific common objective. In most missions, the teams replaced much of the traditional office structure. For example, the mission in El Salvador combined the staff of its economic, education, and productive resources offices into a single economic growth strategic objective team, which also includes members from the contracts and controller offices. At the missions we visited, both technical and support staff said that issues that had previously delayed progress under the old process can now be resolved within the teams. For example, contract office staff are now involved in the early stages of project design and implementation, suggesting appropriate procurement instruments and helping draft scopes of work for proposed contracts. According to contract officers, this approach has reduced the number of sequential reviews and rewrites of proposals and the tensions that have historically existed between staff and technical offices. Under reengineering, USAID increased the authority and flexibility of missions in managing their activities by eliminating many administrative requirements previously imposed on the missions. For example, USAID eliminated the requirement for missions to prepare elaborate project papers and obtain bureau approval for new projects. Once a regional bureau approves a mission’s strategic objectives, the mission is authorized to develop activities to meet those objectives without the bureau’s further review and approval. USAID also eliminated the 10-year time limit on all projects, allowing the missions the flexibility to extend projects when it is advantageous to do so. For example, in Honduras, the mission extended an ongoing policy project in order to continue to provide high-level policy advice to the newly elected Honduran administration. Before reengineering, the mission would have had to design and authorize a completely new project or apply for a waiver from USAID to extend the project. Furthermore, USAID delegated to the missions a host of executive authorities previously held by officials at USAID headquarters. For example, missions may now independently issue implementation letters and negotiate and implement agreements with other U.S. government agencies. In accordance with the agency’s emphasis on staff empowerment and the assignment of responsibility and accountability under reengineering, mission directors delegated new authorities to their missions’ strategic objective teams. For example, the mission director in Honduras authorized team leaders to approve individual expenditures of up to $100,000 in program funds without her review. In the past, USAID was more concerned with the required inputs and expected outputs of a particular project than the results to be achieved. Projects had individual purposes that often related to an overall goal or goals in only a very general way, and the goals themselves were in broad categories rather than specific objectives against which progress could be measured. Also, missions reported on their activities in a variety of reports, including semiannual reviews and project implementation reviews, that did not necessarily focus on development results but rather focused on the status of project implementation. USAID is increasingly holding missions accountable for results. 
In fiscal year 1996, it began requiring the missions to report annually on the results of their programs in a report called the Results Review and Resource Request (R4). Missions must now report to high-level management on their progress in meeting agreed-upon strategic objectives and agency goals and quantify that progress through performance indicators developed by the missions, often with assistance from USAID’s Center for Development Information and Evaluation. Missions are to supplement the data in the annual R4 document with analyses of performance data and other evidence that the mission is making progress in meeting its strategic objectives. USAID has given its development partners a more active role in developing country strategies and designing and managing projects. Many of these partners were enthusiastic about this change. In some cases, the increased interaction has accelerated the design and management process. Also, with input from customers, the missions believe they have been better able to respond to a country’s needs and develop a strategy for meeting those needs. Missions previously presented completed project designs to host countries for acceptance. Today, however, the missions invite the host country, other development partners, and customers to participate in planning and designing activities. For example, according to several representatives from nongovernmental organizations in El Salvador, the mission has invited them to its planning and designing meetings, and their input was included in the mission’s development strategy. In Bangladesh, the mission brought together the customers and development partners to redesign a health and population program that for the first time combines services in both family planning and maternal child health care. Missions also include development partners as members of strategic objective teams. For example, Bangladesh government officials from the Rural Electrification Board told us that they actively participated with USAID team members, contractor staff, and representatives from local rural electric cooperatives in designing a rural power for the poor project and selecting the indicators by which results will be measured. Many nongovernment organizations and donors welcomed the missions’ new collaborative approach to designing and implementing activities and were very positive about the missions’ interaction with them. The benefit of this increased interaction can be seen in Bangladesh, for example, where the democracy strategic objective team, working with nongovernmental organizations and staff offices, designed a new project in 6 months compared to the 2 years that project designs had generally required before. Some donors have also observed increased cooperation with USAID missions. In Bangladesh, an official of the British Office of Development Assistance said that the mission was instrumental in influencing the host government’s development of a national health and population strategy. Increased cooperation with other donors was not always evident at the missions we visited, however. For example, in El Salvador, mission officials told us they have tried to coordinate with the Inter-American Development Bank but have received little response. Officials also said that some other donors had been reluctant to share information. Teams are increasingly relying on surveys of customers to determine their needs and interests when designing activities and developing strategies. 
For example, in Bangladesh, the mission has focused its democracy and governance activities on local government rather than the national government because customers indicated that local government was a more effective agent of change. In El Salvador, strategic objective teams, with the assistance of local contractors, held focus groups throughout the country and used the results to develop customer service plans and as input for the mission’s country strategy. USAID has established new systems and practices to support missions’ implementation of its reengineering efforts by providing advice and information on reengineering. However, it has not provided training needed to learn new skills in team dynamics and personnel matters. USAID provides information, advice, and answers to staff questions on reengineering through a monthly newsletter, best practices reports and other publications, and an electronic help desk. The missions can more easily access up-to-date agency guidance through an on-line computer system that has replaced 33 manuals on policies, procedures, and program operations guidance. Also, USAID’s Management Bureau sent a team to visit several missions in 1997 to identify “significant management, organizational, and personnel-related constraints being encountered by USAID missions as they embrace reengineering principles” and to provide assistance to resolve them. At the time of our visits to the four missions, USAID was not providing needed training in new job skills and team operations. For example, USAID has promoted the use of performance-based contracting without providing the requisite training to the missions. As a result, the mission contracts officer in Honduras was developing his own training course. Also, USAID support in personnel matters has lagged behind mission needs, according to mission managers. Thus, the missions have, on their own, developed foreign employees’ evaluations and position descriptions and classifications but are still waiting for USAID’s reengineered position descriptions for American staff. USAID acknowledged that it had not provided adequate guidance to the field in training and personnel matters. However, USAID noted that there is broad agreement that training requirements and Foreign Service position classification should be the next issues that the agency examines. Missions have begun to change their project portfolios in response to agency reengineering. However, three factors have constrained the restructuring of portfolios. First, reduced mission budgets have limited the availability of funds for new projects. Second, the missions do not have the authority, without headquarters approval, to retain deobligated funds and shift them from one project to another under a restructured portfolio, according to USAID officials. Third, during the first year of reengineering, the missions concentrated more on reorganization activities and the development of strategic frameworks, including defining performance indicators and obtaining baseline data, which limited the time available for portfolio restructuring and new project development. The missions we visited were still restructuring their portfolios to focus on projects that more directly support their strategic objectives. They were doing this largely by designing new projects specifically to help achieve each strategic objective and by shifting funds to activities that are more effective at achieving the objective. To date, only a few new projects have been started. 
In the Philippines, the mission is reshaping its portfolio by selecting one project for each strategic objective to serve as a funding mechanism for all activities under the objective. In Bangladesh, the mission initiated one new project for each of the mission’s strategic objectives, which were designed using reengineering principles. In 1996, USAID approved a new country strategy that refocused activities in El Salvador on the rural poor rather than war-to-peace transition activities and macroeconomic reform at the national level, which were the past focus. The mission in Honduras largely restructured its portfolio in 1994 and 1995, prior to the start of reengineering, and was awaiting approval of its new country strategy before it initiated any new projects. Reengineering efforts were undertaken during a period of severe budget reductions. Between 1994 and 1997, expenditures at the four missions we visited declined between 43 and 69 percent. A change in the number of projects can be used as a rough approximation of the change in mission activities and the restructuring of the portfolio. In general, the number of active projects declined at these missions, mostly between 1994 and 1995, before the formal start of reengineering agencywide. In three missions, a limited number of projects had been initiated since the start of reengineering. Because most current mission projects were designed and funded before the reengineering began and because USAID’s budget has been reduced, a limited amount of funding has been available for new projects. According to mission officials, the inability to automatically retain funds that are deobligated from a project limits a mission’s flexibility to use funds most effectively. It may also serve as a disincentive for terminating projects. Consequently, missions may allow projects to continue, even though they could identify other projects that may contribute more to their strategic objectives. During the first year of reengineering, the missions were busy reorganizing into strategic objective teams, learning how to function as teams, selecting measurable indicators, educating their partners about their new approach, and training staff in the use of the New Management System. When we visited the four missions in February 1997, the mission in El Salvador had just completed the development of a country strategy and was still refining its indicators for the economic growth strategic objective. The mission in Honduras was developing a new country strategy and had not implemented any new projects since the start of reengineering. Under reengineering, missions have made varying degrees of progress in developing performance indicators through which they can measure the short-term results of individual projects or groups of projects, but these indicators have not been in use long enough to have made a significant impact on program management. Missions have had difficulty in developing indicators that tally up the results of individual or groups of projects to demonstrate achievement of strategic objectives and overall, long-term development goals, such as stimulating broad-based economic growth. USAID officials acknowledge that in only a few cases have their programs been directly linked to changes in country-level results. 
USAID management indicated that the ability of a program to affect country-level indicators will depend on the size of the country, the budget available to support specific strategic objectives, and other factors such as the context in which programs are implemented. USAID is now developing common indicators through which it hopes to combine mission results into overall agency accomplishments worldwide. We recently reported that program evaluations can be valuable management tools for demonstrating the programs' effectiveness. Such evaluations are an option for the agency to consider to help in its efforts to meet GPRA requirements. USAID missions have been developing performance indicators on two levels. First, missions are developing project-level indicators intended to monitor the performance and intermediate results of individual projects or groups of projects in their portfolio. Second, missions are developing strategic objective indicators that are intended to gauge progress in achieving the missions' long-term or strategic objectives, such as increasing national rural household incomes or improving national systems of trade and investment. The four missions we visited were at various stages in developing and using performance measures for the projects intended to achieve one of their strategic objectives—economic growth. In most cases, the project-level indicators were too new for the missions to show impact on program management. Although one mission had demonstrated a thorough integration of its measures into its program management processes, another was still defining the indicators it would use. Some of the missions had fundamentally revised their project-level performance measures in 1996 to make them more closely correspond to their strategic objectives at the country level. In Honduras, the mission has been using indicators to measure project results since before reengineering. For example, to measure the results of its agricultural assistance activities, the mission has monitored the volume and value of exports for six crops: sweet onions, ginger, okra, snow peas, asparagus, and plantains. The mission has several activities aimed at promoting these exports, including providing technical assistance to farmers for production and marketing. Indicators the mission uses to measure intermediate results for other activities include the number of land titles issued by the government and the number of vocational center graduates employed. Before adopting these types of results indicators, the mission focused primarily on measuring the physical outputs supported by USAID assistance, such as the number of agricultural research projects conducted or the number of schools built. The mission has been using results indicators extensively for program management, including incorporating them in annual work plans for contractors and other development partners and in its annual performance review. In El Salvador, the mission recently established performance indicators that are intended to measure the short-term results of its assistance projects. In 1996, the mission relied almost exclusively on macroeconomic statistics to report on the results of its economic growth-related projects. The mission revised its indicators in 1996 to better reflect the direct outcomes of its program activities.
For example, new indicators include the number of people in rural areas that are active clients (that have an outstanding loan and/or a savings account) in participating financial institutions and the number of people in rural areas receiving assistance in management, agricultural technology, and marketing. However, because these indicators are new, they have not been used extensively by the mission for program management. In the Philippines, the mission had developed an extensive set of performance indicators, which it reported in 1996. However, mission officials acknowledged that these indicators measured development results that were beyond the scope of their activities in that country. Since then, the mission has improved many indicators. For example, indicators for projects in Mindanao have been revised to more accurately reflect the desired results of the individual activities. The indicators include the total value of USAID-facilitated or -assisted private investments and the number of government policies or practices modified to facilitate rapid and equitable economic growth. Although many of the mission’s indicators are new, the mission has collected historical data against which to measure future progress. In Bangladesh, the mission was restructuring its strategic objectives and revising its indicators. Many of the mission’s indicators for its economic growth objective had focused on results at the national level and not specifically on areas targeted in USAID’s activities. The mission is now more closely aligning its indicators with its activities to reflect what can feasibly be achieved, given its economic growth resources. Some of the mission’s more meaningful indicators include the amount of fertilizer and improved seed marketed nationally and the percentage of the population with access to disaster relief supplies within 72 hours. Some historical data were available for comparison with the current indicators, but the mission plans to use a new data collection methodology for one key indicator being developed, and baseline data are not yet available. Projects are expected to contribute to the achievement of missions’ strategic objectives and USAID’s overall, long-term goals. USAID is working to develop better ways to measure the extent of its contribution to a country’s development. According to USAID officials, internal and external factors, including political instability and the level of other donors’ assistance, affect the missions’ country-level indicators. According to USAID management, it is not possible to measure, in a quantifiable and precise way, the impact of USAID activities alone on country-level development indicators. Therefore, missions’ strategic objective indicators are intended to reflect the results not only of mission projects but also of the efforts of other development partners. The collective impact of activities at the four missions we visited was not significant enough to have a measurable impact on country-level indicators. USAID does not claim responsibility for the development results measured by its strategic objective indicators but rather claims a “plausible association” with the results. Missions usually measure progress toward their strategic objectives using country-level development indicators. 
In the countries we visited, indicators of progress toward economic growth objectives included the number of people employed nationally in the agricultural, industrial, and service sectors in Honduras; the percentage of the total population in El Salvador with access to potable water; national ratios of total exports and imports to gross domestic product in the Philippines; and growth in per capita gross domestic product in Bangladesh. At each of the four missions we visited, mission documents showed or USAID officials acknowledged that the activities did not significantly affect country-level indicators of economic growth. For example, for 1997, the Bangladesh mission reported that the per capita gross domestic product growth and agricultural and industrial employment had improved but that "given USAID's relatively small investments in this strategic objective, we cannot claim major achievements at this national level." In El Salvador, the mission reported that its projects were not sufficient to achieve its economic growth strategic objective. The mission's economic growth portfolio is composed of a limited number of projects that focus primarily on microfinance, small business, basic education, small-scale agriculture, and policy reform. The mission in El Salvador reported that achieving the mission's strategic objective depends on major contributions from other partners, especially international banks, which are expected to provide most of the funding required for activities relating to land, policy, and infrastructure. Internal and external factors can have a profound impact on macroeconomic conditions and thus on country-level indicators. Such factors include political instability, the commitment of political leaders to necessary reforms, the magnitude and effectiveness of assistance from other bilateral and multilateral donors, weather conditions that affect crop yields, and the stability of international markets. Recent mission reports for both Bangladesh and the Philippines cited external factors as explanations for the failure to achieve country-level development results. The Bangladesh mission reported that political turmoil and power shortages were the principal causes of shortfalls in expected economic growth. The Philippines mission reported that the value of direct exports from Mindanao was below targeted levels, primarily due to the decline in pineapple prices and the loss of European markets for bananas. Other observers have recently noted USAID's difficulty in demonstrating its impact on broad development indicators. In March 1997, USAID's Inspector General concluded that in most cases, USAID's goals and objectives exceeded the agency's span of influence and that it was therefore extremely difficult for USAID to take credit for improving country-level results. The Office of Management and Budget similarly noted in 1996, after reviewing USAID's strategic plan and indicators, that it was very difficult to credit USAID projects with progress in country-level indicators. USAID is developing common indicators that missions will use to consistently measure progress in key areas worldwide. These measures are intended to help the agency aggregate the results of its various missions' programs to show progress in achieving overall agency goals. However, the use of common indicators will not resolve USAID's difficulty in attributing gains to its programs at the country level.
According to a senior agency official, a key criterion for using these common indicators, as with the mission-specific strategic objective indicators currently used, will be that USAID can show a “plausible association” with the results, not that the results are attributable to USAID assistance. The extent of USAID’s plausible association with the country-level results is not reflected in the missions’ documents. In some cases, especially in countries where USAID is the largest donor and projects are showing some results, this association may be very strong, while in other cases, the association may be tenuous, based only on token USAID involvement. The agency has not clearly and consistently differentiated between levels of association with development results in its mission performance reports. USAID management acknowledged that it needs to do a better job of making the plausible associations clear in its strategic documents. Citing broad development indicators can result in misleading reports on USAID’s performance. The reports we reviewed showed that when missions reported that specific strategic objective performance targets were met, they rarely mentioned other factors outside of USAID’s control that contributed to the results, such as assistance from other donors, actions taken by the host governments independently, or favorable international market developments. Furthermore, the USAID Inspector General recently reported that USAID seemed to be taking credit in its 1996 Agency Performance Report for some high-level impact merely because it had projects in that program area. Although some of the missions we visited had done program evaluations that could be used to link project results with country-level development indicators, USAID’s current performance measurement system relies largely on indicators to assess the agency’s impact. However, in our recent report on GPRA implementation, we noted that demonstrating the impact of a government program on outcome indicators is difficult and a common problem among federal agencies that are implementing performance measurement systems. The most difficult aspect of analyzing and reporting performance data is separating the impact of a program from the impact of external factors to measure the program’s effect. We noted that in these cases “it may be important to supplement performance measurement data with impact evaluation studies to provide an accurate picture of program effectiveness.” Systematic evaluations of how a program was implemented can provide managers with important information about a program’s success or failure. For example, in Honduras, a USAID contractor’s evaluation of a basic education project confirmed that USAID activities in that country, implemented from 1986 to 1995, had a significant impact on key country-level indicators of improvement of the educational system in that country relative to other factors. Before October 1995, missions were required to evaluate each project upon completion and submit the results to USAID’s Center for Development Information and Evaluation. However, current USAID guidance allows missions to decide whether they will do performance evaluations of their activities. In one mission we visited, a senior USAID official said that the mission would rely on performance indicators, not on evaluations, to monitor its projects’ progress and results. He said that evaluations would be done primarily when implementation problems arise. 
At other missions, USAID officials also suggested that evaluations would probably be done less often than in the past. According to the Director of USAID’s Center for Development Information and Evaluation, the agency’s revised policy on evaluation was not necessarily intended to reduce the frequency of evaluations at the missions, but to better target them to the management needs of the mission. He said that missions are expected to do adequate evaluations to support statements about performance included in mission reports. However, documents we reviewed rarely cited evaluations to support their descriptions of performance in economic growth strategic objectives. Officials from the Center and a mission we visited told us that they believed the agency needed to reassess its policies on evaluation and mission practices to ensure that the agency is sufficiently and appropriately using evaluations for effective management. According to USAID management, as missions improve performance monitoring, there will be less need for routine impact evaluations at the activity level. Eventually, fully operational performance monitoring will result in an adequate assessment of the impact of USAID’s activities in a country. In addition, missions could supplement performance measurement data by obtaining reviews of other donor programs and host-country efforts. Nevertheless, periodic, independent evaluations of USAID’s funded activities can be a useful tool to validate program accomplishments. USAID has begun using program performance in its resource allocation process in a more systematic manner. However, the process continues to evolve, and resource allocation decisions are affected primarily by foreign policy considerations of the executive branch and congressional priorities. Each of USAID’s regional bureaus used different formulas and assigned different weights for performance when constructing their position on USAID’s fiscal year 1998 budget request. However, guidance for the fiscal year 1999 budget cycle attempts to standardize the bureaus’ assessment processes, including the weight accorded to performance in budget deliberations. A basic principle underlying USAID’s reform efforts is to divide its assistance program funding among its strategic goals in line with carefully crafted strategic plans and systems for performance measurement. USAID now includes more explicit and systematic consideration of program performance in its resource allocation process. Mission officials told us USAID’s emphasis on performance and results was good, but expressed mixed views on USAID’s allocation process. Officials generally said political and foreign policy considerations continue to dominate the budget process. While mission officials said the process is more transparent than in the past, some said that USAID should be more consultative with the missions regarding the allocation of earmarked and/or directed funding. One mission director said there is little linkage between the mission’s performance, as measured against the mission’s “results contract” with USAID, and the allocation of resources to the mission. USAID’s process for assessing performance and allocating resources continues to evolve, and modifications continue to be made in reporting requirements and procedures for weighing program results. Performance has been a factor in allocating resources. 
For example, when we observed headquarters technical sessions for rating performance, we learned that USAID was eliminating an agricultural loan program in Guinea due to low rates of loan disbursements. We also learned that USAID was cutting assistance to El Salvador's electoral commission due to its poor performance in implementing needed reforms. Performance assessments may become more meaningful in the process as missions provide more complete performance information. In the Asia and Near East Bureau, the percentage of activities that reported complete performance information rose from 31 percent in 1995 to 59 percent in 1996. USAID allowed regional bureaus latitude in how they implemented the reporting process for the 1998 fiscal year budget request, as well as for identifying priorities for fiscal year 1997 resource allocations. The Asia and Near East Bureau scored and ranked, within each of the agency's four sustainable development goals, each strategic objective by performance, contribution to agency and bureau priorities, contribution to foreign policy goals, and the extent to which the host country had been a good development partner for that objective. The ranking became a starting point for further discussions on what activities should be funded. The Bureau for Africa used a similar approach, scoring and ranking strategic objectives within each sector on performance, pipeline, host-country performance in the sector, sectoral need and magnitude of the problem, and contribution to regional initiatives and priorities for that sector. The Bureau for Latin America and the Caribbean scored and ranked each strategic objective by performance, then scored and ranked countries by performance, foreign policy interest (such as funding Haitian democratization efforts or the Guatemalan peace process), severity of need, and commitment to free market policies and democratic governance. These scores were used to place countries in four funding categories; those in the highest category received funding priority, after allowing for funding needs based on analysis of the pipeline and for meeting targets for earmarks and directives. The USAID Administrator issued guidance in January 1997 to standardize the bureaus' resource requests, performance assessments, and resource allocation processes for the fiscal year 1999 budget request. Among other things, resource requests and allocations are expected to be based on strategic objective performance and its significance to the strategic plan. Bureaus are to use three common factors and common weights to measure performance and allocate resources: performance (35 percent), contribution to agency goals (30 percent), and contribution to development initiatives (35 percent). While this guidance provides a more standard ranking and scoring format, these broad groupings still allow wide latitude in the factors considered in the allocation process. For example, contribution to agency goals includes foreign policy objectives, and contribution to development initiatives includes bureau initiatives, country and/or sectoral need, and the quality of the development partnership in general and within specific goal areas.
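The January 1997 guidance does not specify how individual factor ratings are scaled or arithmetically combined beyond these weights. The following minimal sketch, using hypothetical factor ratings on an assumed 1-to-5 scale, illustrates one way a composite score for a strategic objective might be computed under the 35/30/35 weighting; the factor names, the scale, and the example ratings are illustrative assumptions, not USAID's actual scoring mechanics.

    # Illustrative only: factor names, the 1-5 rating scale, and the example
    # ratings are hypothetical; only the 35/30/35 weights come from the guidance.
    WEIGHTS = {
        "performance": 0.35,
        "agency_goals": 0.30,
        "development_initiatives": 0.35,
    }

    def composite_score(ratings):
        """Return the weighted composite rating for one strategic objective."""
        return sum(WEIGHTS[factor] * ratings[factor] for factor in WEIGHTS)

    # A hypothetical strategic objective rated 4, 3, and 5 on the three factors
    # receives 0.35*4 + 0.30*3 + 0.35*5 = 4.05.
    example = {"performance": 4, "agency_goals": 3, "development_initiatives": 5}
    print(round(composite_score(example), 2))  # prints 4.05

Objectives ranked by such a composite would, as described above, still be subject to pipeline analysis and to targets for earmarks and directives before allocations are final.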
We attended several USAID program review meetings during this year's deliberations on the fiscal year 1999 budget at the Africa, Asia/Near East, and Latin America bureaus. Generally, the initial discussions on strategic objective performance were chaired by technical staff from the regional bureaus with varying representation from the Global, Management, and Policy and Program Coordination bureaus. The Asia/Near East and Latin America bureaus held several levels of reviews in Washington for each country, often with senior mission management and mission program staff. Due to the large number of missions in Africa, the Washington teams conducted the initial assessment based on the submissions and, in some cases, additional information sent from the field. The Africa Bureau held subsequent 1-day program reviews with staff from full sustainable development missions and with missions and operating units with upcoming strategy reviews. Following the rounds of technical and country meetings, bureaus conducted regional wrap-up sessions to reach resource allocation decisions and prepare for budget reviews with agency management. At the technical meetings we attended, we noted that the teams had reviewed the documents and, in some cases, received additional information from the missions. Team members who had been given guidance on the criteria for scoring performance appeared well prepared, usually supporting their scores with specific examples of how strategic objectives were or were not meeting targets or making an impact. In some cases, team members discussed the quality and relevance of performance indicators and targets and the extent to which impact could be attributed to USAID assistance. We also heard some limited discussion on the quality of the data used to measure performance. This is only the second year of USAID's revised budget process, and USAID officials acknowledged that modifications will continue to be made as best practices are identified. We noted at several meetings that the team members and mission staff were confused about some of the mission reporting. For example, the guidance asked missions to report on the intermediate results that were most central to achieving the goals of the strategic objectives. However, there was discussion about whether the reported information was the best reflection of performance. Some team members cabled missions for additional information and used this in their scoring; others did not. The quality of the reports and the extent to which they provide an accurate and complete story of each strategic objective are key factors in performance assessment. Although USAID will have spent about $100 million by the end of fiscal year 1998 to develop its New Management System (NMS), the system does not work as intended and has created serious problems in mission operations and morale. The system is key to the successful information sharing required under reengineering for accountability and control. The agency deployed the system worldwide, knowing that it was not fully operational or adequately tested. Indeed, USAID's Inspector General reported 6 months after implementation of the system that it was not complete, had not been demonstrated to work effectively, and had not been adequately tested. Because of problems with the system, USAID suspended use of part of the system in the missions in April 1997 and is now taking corrective steps recommended by the Inspector General. Since 1994, USAID has been developing the NMS to support its organizational reforms.
The system is designed to consolidate USAID's accounting, budget, personnel, procurement, program operations, and property management into a single, integrated network that the agency's missions and offices worldwide can access and to aid in the effective management and monitoring of its programs. The purpose of the system is to (1) make financial management more efficient by streamlining business processes, eliminating paper forms, ensuring compliance with federal accounting and financial management requirements, and providing managers with the information needed to make appropriate decisions and reliably report the status of USAID projects; (2) facilitate missions' program delivery by providing the means for missions and offices to share information on-line about program management; and (3) empower missions and provide USAID management with a means for maintaining accountability. The system was activated at USAID headquarters in July 1996 and at the missions in October 1996. As of October 1996, four of the six system modules—the accounting, budget, operations, and procurement modules—were operational, with some limitations, at USAID headquarters. The personnel and property management modules are still in development. According to USAID's Office of the Inspector General, the agency will have invested about $89 million by the end of fiscal year 1997 and about $100 million by the end of fiscal year 1998 to develop the system.

The four missions we visited could not routinely use NMS to successfully execute financial management functions such as obligating funds and recording procurement actions. According to mission officials, having a dysfunctional system limits the benefits of the missions' reengineered operations. The Honduras mission director told us that "the reform process has now reached a point where some reforms as contemplated simply cannot go much further without a functioning NMS." For several months, the mission in Honduras could not obligate allocated funds for disaster relief on the country's north coast due to problems in the system's accounting and procurement modules. Also, because of the system's long response times, entering information into the system was often very time-consuming and frustrating for mission staff. Officials in Honduras attempted to run some procurement actions while we observed: the actions took from 2 to 6 hours to process, and none could be successfully completed. In fact, because of these problems, some mission officials we visited had two personal computers on their desks: one for using NMS and one to do other work. In El Salvador, some mission personnel told us that all other work frequently stopped while NMS was in use and that the system was having a negative impact on implementation of mission projects.

In addition to citing excessive system response times as a major problem, the agency's Chief of Staff noted potential problems with the system development effort. He said that because each module was independently developed, the database used for NMS had some inconsistencies. For example, in one module, for some data fields the system will accept eight characters, while another module will accept only six characters for the same information. Such inconsistencies can cause problems when attempting to transfer data from one module to another. In March 1997, USAID's Inspector General reported that the system's premature deployment had increased the risk of fraud and abuse, had not met users' needs, and did not meet basic federal financial system requirements.
These findings were consistent with our observations. The Inspector General made a number of recommendations to USAID management, including suspension of part of the system until many of its problems were resolved. In April 1997, following the Inspector General's report and comments from system users and us, USAID suspended the NMS accounting and procurement modules and left the budget and operations modules operational in the missions. These modules are expected to remain shut down until at least fiscal year 1998. All four modules will still be used by headquarters offices, which have not had as much trouble in using NMS as have the missions. USAID said it is taking the following steps to address problems with the NMS:

- A new Director of Information Resources Management, who will focus exclusively on managing information resource activities, has been hired.
- The Administrator has announced that he will designate a senior official as the agency's Chief Information Officer, who will be certified by the Office of Management and Budget as qualified for the position and will report directly to the Administrator.
- A full-time NMS project manager has been selected to supervise the development efforts until the system is operational.
- Efforts have been undertaken to analyze the technical and implementation problems that currently prevent NMS from achieving its full potential.
- A goal to achieve level 2 of the Software Engineering Institute's Capability Maturity Model (CMM) for software development has been established.
- A statement of work has been developed to assess the quality of the accounting and procurement components of NMS and to identify the risks and opportunities in the application code that the components use.
- A transition to performance-based contracting for NMS and a reduction in both the number of contract entities and contractors are underway.
- Development activities are limited to the portions of NMS that are still operational and needed to establish the core functionality in the accounting module.

It is too early to discern the full extent of the impact of reengineering efforts at USAID missions, as they have only recently made the investment and operational changes necessary to bring about long-term change. However, USAID's reengineering efforts have produced some tangible benefits in the areas of planning, implementing, and monitoring assistance projects. Missions are adapting to the strategic approach to planning, but missions' progress in changing their portfolios of projects has been constrained by factors such as funding limitations and obligation authority. The extent to which USAID can capitalize on operational changes will depend on continued vigilance by agency management to ensure that these changes take hold and continue. The ultimate test will be whether USAID can achieve sustained efficiencies in project planning and implementation and demonstrate the impact of its projects. With limited resources available for foreign assistance, USAID's ability to target its assistance effectively and deliver it efficiently is especially important. Since information sharing is critical to USAID's new operating system, diligence in properly deploying NMS is key to the reform process. While NMS promises to aid in more efficient management of USAID resources, many of the efficiencies expected to result from this system have been delayed or offset by unsuccessful implementation.
USAID has taken positive steps to address the concerns discussed in this report and the recommendations in its Inspector General’s report. However, whether these actions will correct the NMS’ problems will depend on the successful implementation of these efforts. In addressing the problems with NMS, it is USAID management’s responsibility to ensure that NMS facilitates, rather than impedes, the reform process. In commenting on a draft of this report, USAID stated that the report was “a fair, balanced, and thoughtful assessment of the state of overseas missions’ reengineering efforts.” USAID comments provided some suggested clarifications to the draft report and additional information on agency efforts to address problems associated with the New Management System, which we have incorporated in the text of our report, as appropriate. USAID’s letter, without attachments, is reprinted in appendix IV. As requested, we examined how USAID missions have reformed their operations and developed a results-oriented program. In making our selections for overseas fieldwork, we asked USAID to recommend five countries in each of its regions where reengineering has progressed to the point that we could see results and the economic growth objective was a priority. USAID provided us program budget data on all countries in which it operates and five country recommendations per region, along with explanations for their selections. Following discussions with USAID officials and congressional Committee staff, we selected El Salvador, Honduras, Bangladesh, and the Philippines for fieldwork. We based our country selections on the size of the USAID program, importance of the economic growth strategic objective in the overall program, and pipeline size. We had done fieldwork in the Dominican Republic for the first part of the assignment. We focused on Latin America and Asia because of the greater emphasis on economic growth in these regions; we excluded Central Europe and the former Soviet Union region from consideration since these countries’ programs are relatively new, transitional, and expected to end soon. To assess (1) how USAID missions have reengineered their operations and (2) how USAID monitors and evaluates the results of its projects, we interviewed USAID officials in the Management, Global, and regional bureaus and the Bureau for Policy and Program Coordination, including the Center for Development Information and Evaluation in Washington, D.C.; reviewed agency policies, practices, and procedures; and analyzed agency documentation, including evaluations, reporting documents, and Inspector General reports. At the missions visited, we interviewed USAID management and staff, held discussions with foreign service nationals on the impact of reforms and reengineering, and collected and analyzed relevant documentation. We also interviewed representatives of nongovernmental and private voluntary organizations and USAID contractors that implement USAID programs and officials of international financial institutions, other donor countries, and the host governments. To assess how reengineering has affected the delivery of development assistance, we interviewed USAID officials, partners, and customers about reengineered program design and delivery and reviewed documentation on mission strategies, plans, and projects. In reviewing portfolio changes at the missions we visited, we assessed the active, completed, terminated, or initiated projects for each year between fiscal year 1994 and 1997. 
To determine how USAID allocates funds for its projects, we interviewed USAID officials, attended regional bureau budget sessions, examined the budget process, and reviewed recent and current budget documents and agency guidance. We assessed how the budget process incorporates the various factors that the agency uses to measure and rank performance, to gain some insight into whether the budget process is meeting expectations. To assess how USAID's NMS supports mission operations, we met with officials of USAID's Office of the Inspector General and reviewed their reports on NMS. We reviewed agency documentation, including agency cable traffic and e-mails in Washington, D.C., and the field missions visited; USAID's general notice system and policy directives, guidance, and statements; and congressional testimony by the Administrator. We interviewed USAID officials and agency staff who use NMS in Washington, D.C., and in the field missions and observed mission officials' and staff members' use and/or tests of NMS. We reviewed our prior reports: a December 1992 report identifying weaknesses in federal information resources management that frequently led to costly projects that showed disappointing results, numerous reports and testimonies since 1993 on information management overall, and a February 1997 report noting that failure to follow disciplined management and system development practices could lead to costly failures. We performed our work from October 1996 to May 1997 in accordance with generally accepted government auditing standards.

We are providing copies of this report to the Chairmen and Ranking Minority Members of the House and Senate Committees on Appropriations, the Senate Committee on Foreign Relations, the House Committee on Government Reform and Oversight, and the Senate Committee on Governmental Affairs, and to the Ranking Minority Member of the House Committee on International Relations. We are also sending copies to the Administrator, U.S. Agency for International Development, and to the Director, Office of Management and Budget. Copies will be made available to others upon request. Please contact me at (202) 512-4128 if you or your staff have any questions about this report. Major contributors to this report are listed in appendix V.

[Appendix material: USAID program budget tables and notes, a timeline of USAID reengineering and NMS milestones (through the April 1997 suspension of two NMS modules at the missions), and agency staffing and funding figures.]
| Pursuant to a congressional request, GAO reviewed the effect of the U.S. Agency for International Development's (USAID) reforms on its overseas missions' operations and delivery of assistance, focusing on how: (1) USAID missions have reengineered their operations; (2) reengineering has affected the content of USAID's assistance program; (3) USAID monitors and evaluates the results of its projects; (4) USAID allocates funds for its projects; and (5) USAID's New Management System supports mission operations. GAO noted that: (1) in reengineering the way assistance is delivered, USAID missions have only recently made the investment and operational changes necessary to bring about long-term change; (2) to date, USAID's reengineering efforts to improve its delivery of assistance have shown some benefits in the areas of planning, implementing, and monitoring projects; (3) notwithstanding this progress, USAID still has major obstacles to overcome in deploying its New Management System and establishing valid and reliable performance measures; (4) overseas missions have changed the way they manage assistance in support of USAID's strategic goals; (5) however, because missions had no baseline data on operations management prior to reengineering their processes, neither they nor GAO can identify measurable increases in efficiency or effectiveness of the delivery of assistance; (6) USAID missions have, to varying degrees, begun to establish more results-oriented indicators and report the results of their projects annually; (7) however, the missions still have difficulty linking their activities to the broad indicators of development--such as a country's rate of economic growth; (8) one way to provide a more complete picture of program performance could be to supplement performance measurement data with impact evaluation studies; (9) although the relative performance of mission programs is clearly a factor in USAID's resource allocation decisions, these decisions are still largely driven by other considerations, such as contributions to foreign policy and agency priorities, country need and commitment, and funding priorities of the executive branch and Congress; (10) USAID's New Management System, one of the agency's key tools in reforming its operations, is not working as intended; (11) this computer system, which is expected to cost at least $100 million by the end of fiscal year 1998, was designed to enable the agency to manage its resources and monitor results more effectively by consolidating accounting, budgeting, personnel, procurement, and program operations into a single, integrated network that can be accessed worldwide; (12) despite warnings that the system had not been tested and did not meet basic federal requirements, USAID activated the system in July 1996 in Washington and deployed it in October 1996
in its missions; (13) USAID suspended much of the system's operation in April 1997, after it failed to work properly; and (14) correcting system deficiencies will be critical to continued progress of the agency's reform effort. |
The department is facing near- and long-term internal fiscal pressures as it attempts to balance competing demands to support ongoing operations, rebuild readiness following extended military operations, and manage increasing personnel and health care costs as well as significant cost growth in its weapon systems programs. For more than a decade, DOD has dominated GAO's list of federal programs and operations at high risk of fraud, waste, and abuse. In fact, all of the DOD programs on GAO's High-Risk List relate to business operations, including systems and processes related to management of contracts, finances, supply chain, and support infrastructure, as well as weapon systems acquisition. Long-standing and pervasive weaknesses in DOD's financial management and related business processes and systems have (1) resulted in a lack of reliable information needed to make sound decisions and report on the financial status and cost of DOD activities to Congress and DOD decision makers; (2) adversely impacted its operational efficiency and mission performance in areas of major weapon system support and logistics; and (3) left the department vulnerable to fraud, waste, and abuse. Because of the complexity and long-term nature of DOD's transformation efforts, GAO has reported the need for a chief management officer (CMO) position and a comprehensive, enterprisewide business transformation plan. In May 2007, DOD designated the Deputy Secretary of Defense as the CMO. In addition, the National Defense Authorization Acts for Fiscal Years 2008 and 2009 contained provisions that codified the CMO and Deputy CMO (DCMO) positions, required DOD to develop a strategic management plan, and required the Secretaries of the military departments to designate their Undersecretaries as CMOs and to develop business transformation plans.

DOD financial managers are responsible for the functions of budgeting, financing, accounting for transactions and events, and reporting of financial and budgetary information. To maintain accountability over the use of public funds, DOD must carry out financial management functions such as recording, tracking, and reporting its budgeted spending, actual spending, and the value of its assets and liabilities. DOD relies on a complex network of organizations and personnel to execute these functions. Also, its financial managers must work closely with other departmental personnel to ensure that transactions and events with financial consequences, such as awarding and administering contracts, managing military and civilian personnel, and authorizing employee travel, are properly monitored, controlled, and reported, in part, to ensure that DOD does not violate spending limitations established in legislation or other legal provisions regarding the use of funds.

Before fiscal year 1991, the military services and defense agencies independently managed their finance and accounting operations. According to DOD, these decentralized operations were highly inefficient and failed to produce reliable information. On November 26, 1990, DOD created the Defense Finance and Accounting Service (DFAS) as its accounting agency to consolidate, standardize, and integrate finance and accounting requirements, functions, procedures, operations, and systems. The military services and defense agencies pay for finance and accounting services provided by DFAS using their operations and maintenance appropriations.
The military services continue to perform certain finance and accounting activities at each military installation. These activities vary by military service depending on what the services wanted to maintain in-house and the number of personnel they were willing to transfer to DFAS. As DOD's accounting agency, DFAS records these transactions in the accounting records, prepares thousands of reports used by managers throughout DOD and by the Congress, and prepares DOD-wide and service-specific financial statements. The military services play a vital role in that they authorize the expenditure of funds and are the source of most of the financial information that allows DFAS to make payroll and contractor payments. The military services also have responsibility for most of DOD's assets and the related information needed by DFAS to prepare annual financial statements required under the Chief Financial Officers Act. DOD accounting personnel are responsible for accounting for funds received through congressional appropriations, the sale of goods and services by working capital fund businesses, revenue generated through nonappropriated fund activities, and the sales of military systems and equipment to foreign governments or international organizations. DOD's finance activities generally involve paying the salaries of its employees, paying retirees and annuitants, reimbursing its employees for travel-related expenses, paying contractors and vendors for goods and services, and collecting debts owed to DOD. DOD defines its accounting activities to include accumulating and recording operating and capital expenses as well as appropriations, revenues, and other receipts. According to DOD's fiscal year 2012 budget request, in fiscal year 2010 DFAS processed approximately 198 million payment-related transactions and disbursed over $578 billion; accounted for 1,129 active DOD appropriation accounts; and processed more than 11 million commercial invoices.

DOD financial management was designated as a high-risk area by GAO in 1995. Pervasive deficiencies in financial management processes, systems, and controls, and the resulting lack of data reliability, continue to impair management's ability to assess the resources needed for DOD operations; track and control costs; ensure basic accountability; anticipate future costs; measure performance; maintain funds control; and reduce the risk of loss from fraud, waste, and abuse. Other business operations, including the high-risk areas of contract management, supply chain management, support infrastructure management, and weapon systems acquisition, are directly impacted by the problems in financial management. We have reported that continuing weaknesses in these business operations result in billions of dollars of wasted resources, reduced efficiency, ineffective performance, and inadequate accountability. Examples of the pervasive weaknesses in the department's business operations are highlighted below.

DOD invests billions of dollars to acquire weapon systems, but it lacks the financial management processes and capabilities it needs to track and report on the cost of weapon systems in a reliable manner. We reported on this issue over 20 years ago, but the problems persist.
In July 2010, we reported that although DOD and the military departments have efforts underway to begin addressing these financial management weaknesses, problems continue to exist, and remediation and improvement efforts would require the support of other business areas beyond the financial community before they could be fully addressed. DOD also requests billions of dollars each year to maintain its weapon systems, but it has limited ability to identify, aggregate, and use financial management information for managing and controlling operating and support costs. Operating and support costs can account for a significant portion of a weapon system's total life-cycle costs, including costs for repair parts, maintenance, and contract services. In July 2010, we reported that the department lacked key information needed to manage and reduce operating and support costs for most of the weapon systems we reviewed—including cost estimates and historical data on actual operating and support costs. For acquiring and maintaining weapon systems, the lack of complete and reliable financial information hampers DOD officials in analyzing the rate of cost growth, identifying cost drivers, and developing plans for managing and controlling these costs. Without timely, reliable, and useful financial information on cost, DOD management lacks information needed to accurately report on acquisition costs, allocate resources to programs, or evaluate program performance.

In June 2010, we reported that the Army Budget Office lacked an adequate funds control process to provide it with ongoing assurance that obligations and expenditures do not exceed funds available in the Military Personnel–Army (MPA) appropriation. We found that an obligation of $200 million in excess of available funds in the Army's military personnel account violated the Antideficiency Act. The overobligation likely stemmed, in part, from a lack of communication between Army Budget and program managers: Army Budget's accounting records reflected estimates instead of actual amounts until it was too late to prevent obligations in excess of available funds, in violation of the act. Thus, at any given time in the fiscal year, Army Budget did not know the actual obligation and expenditure levels of the account. Army Budget explained that it relies on estimated obligations—despite the availability of actual data from program managers—because of inadequate financial management systems. The lack of adequate process and system controls to maintain effective funds control impacted the Army's ability to prevent, identify, correct, and report potential violations of the Antideficiency Act.

In our February 2011 report on the Defense Centers of Excellence (DCOE), we found that DOD's TRICARE Management Activity (TMA) had misclassified $102.7 million of the nearly $112 million in DCOE advisory and assistance contract obligations. The proper classification and recording of costs are basic financial management functions that are also key in analyzing areas for potential future savings. Without adequate financial management processes, systems, and controls, DOD components are at risk of reporting inaccurate, inconsistent, and unreliable data for financial reporting and management decision making and potentially exceeding authorized spending limits. The lack of effective internal controls hinders management's ability to have reasonable assurance that allocated resources are used effectively, properly, and in compliance with budget and appropriations law.
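To illustrate the kind of funds control described above, the following is a minimal sketch of a check that compares cumulative actual obligations against the amount available in an appropriation account before a new obligation is recorded. It is a hypothetical illustration only; the account name, amounts, and logic are not drawn from Army Budget's systems or from DOD policy.

```python
# Hypothetical funds control check: a new obligation is rejected if it would
# push cumulative obligations above the funds available in the account, which
# is the condition that can otherwise lead to an Antideficiency Act violation.
class AppropriationAccount:
    def __init__(self, name, funds_available):
        self.name = name
        self.funds_available = funds_available
        self.obligated = 0.0

    def record_obligation(self, amount):
        """Record an obligation only if remaining funds cover it."""
        remaining = self.funds_available - self.obligated
        if amount > remaining:
            raise ValueError(
                f"{self.name}: obligation of {amount:,.0f} exceeds the "
                f"{remaining:,.0f} remaining; transaction rejected"
            )
        self.obligated += amount

# Illustrative figures only.
account = AppropriationAccount("Military personnel (illustrative)", 1_000_000_000)
account.record_obligation(900_000_000)      # accepted: funds remain
try:
    account.record_obligation(200_000_000)  # rejected: would overobligate the account
except ValueError as error:
    print(error)
```

A check of this kind depends on the accounting records reflecting actual obligations; as the report notes, relying on estimates defeats the control because the comparison is made against numbers that may understate what has already been obligated.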
Over the years, DOD has initiated several broad-based reform efforts to address its long-standing financial management weaknesses. However, as we have reported, those efforts did not achieve their intended purpose of improving the department's financial management operations. In 2005, the DOD Comptroller established the DOD FIAR Directorate to develop, manage, and implement a strategic approach for addressing the department's financial management weaknesses, achieving auditability, and integrating these efforts with other improvement activities, such as the department's business system modernization efforts. In May 2009, we identified several concerns with the adequacy of the FIAR Plan as a strategic and management tool to resolve DOD's financial management difficulties and thereby position the department to be able to produce auditable financial statements. Overall, since the issuance of the first FIAR Plan in December 2005, improvement efforts have not resulted in the fundamental transformation of operations necessary to resolve the department's long-standing financial management deficiencies. However, DOD has made significant improvements to the FIAR Plan that, if implemented effectively, could result in meaningful improvement in DOD's financial management and progress toward auditability, but progress in taking corrective actions and resolving deficiencies remains slow. While none of the military services has obtained an unqualified (clean) audit opinion, some DOD organizations, such as the Army Corps of Engineers, DFAS, the Defense Contract Audit Agency, and the DOD Inspector General, have achieved this goal. Moreover, some DOD components that have not yet received clean audit opinions are beginning to reap the benefits of strengthened controls and processes gained through ongoing efforts to improve their financial management operations and reporting capabilities. Lessons learned from the Marine Corps' Statement of Budgetary Resources audit can provide a roadmap to help other components better stage their audit readiness efforts by strengthening their financial management processes to increase data reliability as they develop action plans to become audit ready.

In August 2009, the DOD Comptroller sought to further focus efforts of the department and components, in order to achieve certain short- and long-term results, by giving priority to improving processes and controls that support the financial information most often used to manage the department. Accordingly, DOD revised its FIAR strategy and methodology to focus on the DOD Comptroller's two priorities—budgetary information and asset accountability. The first priority is to strengthen processes, controls, and systems that produce DOD's budgetary information and the department's Statements of Budgetary Resources. The second priority is to improve the accuracy and reliability of management information pertaining to the department's mission-critical assets, including military equipment, real property, and general equipment, and to validate improvement through existence and completeness testing. The DOD Comptroller directed the DOD components participating in the FIAR Plan—the departments of the Army, Navy, and Air Force and the Defense Logistics Agency—to use a standard process and aggressively modify their activities to support and emphasize achievement of the priorities.
GAO supports DOD’s current approach of focusing and prioritizing efforts in order to achieve incremental progress in addressing weaknesses and making progress toward audit readiness. Budgetary and asset information is widely used by DOD managers at all levels, so its reliability is vital to daily operations and management. DOD needs to provide accountability over the existence and completeness of its assets. Problems with asset accountability can further complicate critical functions, such as planning for the current troop withdrawals. In May 2010, DOD introduced a new phased approach that divides progress toward achieving financial statement auditability into five waves (or phases) of concerted improvement activities (see appendix I). According to DOD, the components’ implementation of the methodology described in the 2010 FIAR Plan is essential to the success of the department’s efforts to ultimately achieve full financial statement auditability. To assist the components in their efforts, the FIAR guidance, issued along with the revised plan, details the implementation of the methodology with an emphasis on internal controls and supporting documentation that recognizes both the challenge of resolving the many internal control weaknesses and the fundamental importance of establishing effective and efficient financial management. The FIAR Guidance provides the process for the components to follow, through their individual Financial Improvement Plans, in assessing processes, controls, and systems; identifying and correcting weaknesses; assessing, validating, and sustaining corrective actions; and achieving full auditability. The guidance directs the components to identify responsible organizations and personnel and resource requirements for improvement work. In developing their plans, components use a standard template that comprises data fields aligned to the methodology. The consistent application of a standard methodology for assessing the components’ current financial management capabilities can help establish valid baselines against which to measure, sustain, and report progress. Improving the department’s financial management operations and thereby providing DOD management and the Congress more accurate and reliable information on the results of its business operations will not be an easy task. It is critical that the current initiatives being led by the DOD Deputy Chief Management Officer and the DOD Comptroller be continued and provided with sufficient resources and ongoing monitoring in the future. Absent continued momentum and necessary future investments, the current initiatives may falter, similar to previous efforts. Below are some of the key challenges that the department must address in order for the financial management operations of the department to improve to the point where DOD may be able to produce auditable financial statements. Committed and sustained leadership. The FIAR Plan is in its sixth year and continues to evolve based on lessons learned, corrective actions, and policy changes that refine and build on the plan. The DOD Comptroller has expressed commitment to the FIAR goals, and established a focused approach that is intended to help DOD achieve successes in the near term. But the financial transformation needed at DOD, and its removal from GAO’s high-risk list, is a long-term endeavor. Improving financial management will need to be a cross-functional endeavor. 
It requires the involvement of DOD operations performing other business functions that interact with financial management—including those in the high-risk areas of contract management, supply chain management, support infrastructure management, and weapon systems acquisition. As acknowledged by DOD officials, sustained and active involvement of the department's Chief Management Officer, the Deputy Chief Management Officer, the military departments' Chief Management Officers, the DOD Comptroller, and other senior leaders is critical. Within every administration, there are changes in senior leadership; therefore, it is paramount that the current initiative be institutionalized throughout the department—at all working levels—in order for success to be achieved.

Effective plan to correct internal control weaknesses. In May 2009, we reported that the FIAR Plan did not establish a baseline of the department's state of internal control and financial management weaknesses as its starting point. Such a baseline could be used to assess and plan for the necessary improvements and remediation and to measure incremental progress toward estimated milestones for each DOD component and the department. DOD currently has efforts underway to address known internal control weaknesses through three interrelated programs: (1) the Internal Controls over Financial Reporting (ICOFR) program, (2) ERP implementation, and (3) the FIAR Plan. However, the effectiveness of these three interrelated efforts at establishing a baseline remains to be seen. Furthermore, DOD has yet to identify the specific control actions that need to be taken in Waves 4 and 5 of the FIAR Plan, which deal with asset accountability and other financial reporting matters. Because of the department's complexity and magnitude, developing and implementing a comprehensive plan that identifies DOD's internal control weaknesses will not be an easy task. But it is a task that is critical to resolving the long-standing weaknesses, and it will require consistent management oversight and monitoring to be successful.

Competent financial management workforce. Effective financial management in DOD will require a knowledgeable and skilled workforce that includes individuals who are trained and certified in accounting, well versed in government accounting practices and standards, and experienced in information technology. Hiring and retaining such a skilled workforce is a challenge DOD must meet to succeed in its transformation to efficient, effective, and accountable business operations. The National Defense Authorization Act for Fiscal Year 2006 directed DOD to develop a strategic plan to shape and improve the department's civilian workforce. The plan was to include, among other things, assessments of (1) existing critical skills and competencies in DOD's civilian workforce, (2) future critical skills and competencies needed over the next decade, and (3) any gaps in the existing or future critical skills and competencies identified. In addition, DOD was to submit a plan of action for developing and reshaping the civilian employee workforce to address any identified gaps, as well as specific recruiting and retention goals and strategies on how to train, compensate, and motivate civilian employees. In developing the plan, the department identified financial management as one of its enterprisewide mission-critical occupations.
In July 2011, we reported that DOD's 2009 overall civilian workforce plan had addressed some legislative requirements, including assessing the critical skills of its existing civilian workforce. Although some aspects of the legislative requirements were addressed, DOD still has significant work to do. For example, while the plan included gap analyses related to the number of personnel needed for some of the mission-critical occupations, the department discussed competency gap analyses for only 3 mission-critical occupations—language, logistics management, and information technology management. A competency gap analysis for financial management was not included. Until DOD analyzes personnel needs and gaps in the financial management area, it will not be in a position to develop an effective financial management recruitment, retention, and investment strategy to successfully address its financial management challenges.

Accountability and effective oversight. The department established a governance structure for the FIAR Plan, which includes review bodies for governance and oversight. The governance structure is intended to provide the vision and oversight necessary to align financial improvement and audit readiness efforts across the department. To monitor progress and hold individuals accountable, DOD managers and oversight bodies need reliable, valid, meaningful metrics to measure performance and the results of corrective actions. In May 2009, we reported that the FIAR Plan did not have clear results-oriented metrics. To its credit, DOD has taken action to begin defining results-oriented FIAR metrics it intends to use to provide visibility of component-level progress in assessment, testing, and remediation activities, including progress in identifying and addressing supporting documentation issues. We have not yet had an opportunity to assess implementation of these metrics—including the components' control over the accuracy of supporting data—or their usefulness in monitoring and redirecting actions. Ensuring effective monitoring and oversight of progress—especially by the leadership in the components—will be key to bringing about effective implementation, through the components' Financial Improvement Plans, of the department's financial management and related business process reform. If the department's future FIAR Plan updates provide a comprehensive strategy for completing Waves 4 and 5, the plan can serve as an effective tool to help guide and direct the department's financial management reform efforts. Effective oversight holds individuals accountable for carrying out their responsibilities. DOD has introduced incentives such as including FIAR goals in Senior Executive Service Performance Plans, increased reprogramming thresholds granted to components that receive a positive audit opinion on their Statement of Budgetary Resources, audit costs funded by the Office of the Secretary of Defense after a successful audit, and publicizing and rewarding components for successful audits. The challenge now is to evaluate and validate these and other incentives to determine their effectiveness and whether the right mix of incentives has been established.

Well-defined enterprise architecture. For decades, DOD has been challenged in modernizing its timeworn business systems. Since 1995, we have designated DOD's business systems modernization program as high risk.
Between 2001 and 2005, we reported that the modernization program had spent hundreds of millions of dollars on an enterprise architecture and investment management structures that had limited value. Accordingly, we made explicit architecture and investment management-related recommendations. Congress included provisions in the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005 that were consistent with our recommendations. In response, DOD continues to take steps to comply with the act's provisions and to satisfy relevant system modernization management guidance. Collectively, these steps address best practices in implementing the statutory provisions concerning the business enterprise architecture and review of systems costing in excess of $1 million. However, long-standing challenges that we previously identified remain to be addressed. Specifically, while DOD continues to release updates to its corporate enterprise architecture, the architecture has yet to be federated through development of aligned subordinate architectures for each of the military departments. In this regard, each of the military departments has made progress in managing its respective architecture program, but there are still limitations in the scope, completeness, and maturity of the military departments' architecture programs. For example, while each department has established or is in the process of establishing an executive committee with responsibility and accountability for the enterprise architecture, none has fully developed an enterprise architecture methodology or a well-defined business enterprise architecture and transition plan to guide and constrain business transformation initiatives. In addition, while DOD continues to establish investment management processes, the DOD enterprise and the military departments' approaches to business systems investment management still lack the defined policies and procedures to be considered effective investment selection, control, and evaluation mechanisms. Until DOD fully implements these long-standing institutional modernization management controls, its business systems modernization will likely remain a high-risk program.

Successful implementation of the ERPs. The department has invested billions of dollars and will invest billions more to implement the ERPs. DOD officials have said that successful implementation of ERPs is key to transforming the department's business operations, including financial management, and to improving the department's capability to provide DOD management and Congress with accurate and reliable information on the results of DOD's operations. DOD has stated that the ERPs will replace over 500 legacy systems. The successful implementation of the ERPs is not only critical for addressing long-standing weaknesses in financial management, but equally important for helping to resolve weaknesses in other high-risk areas such as business transformation, business system modernization, and supply chain management. Over the years, we have reported that the department has not effectively employed acquisition management controls to help ensure the ERPs deliver the promised capabilities on time and within budget. Delays in the successful implementation of ERPs have extended the use of duplicative, stovepiped systems and required continued funding of the existing legacy systems longer than anticipated.
Additionally, the continued implementation problems can erode savings that were estimated to accrue to DOD as a result of modernizing its business systems and thereby reduce funds that could be used for other DOD priorities. To help improve the department's management oversight of its ERPs, we have recommended that DOD define success for ERP implementation in the context of business operations and in a way that is measurable. Accepted practices in system development include testing the system in terms of the organization's mission and operations—whether the system performs as envisioned at expected levels of cost and risk when implemented within the organization's business operations. Developing and using specific performance measures to evaluate a system effort should help management understand whether the expected benefits are being realized. Without performance measures to evaluate how well these systems are accomplishing their desired goals, DOD decision makers, including program managers, do not have all the information they need to evaluate their investments to determine whether the individual programs are helping DOD achieve business transformation and thereby improve upon its primary mission of supporting the warfighter. Another key element in DOD's efforts to modernize its business systems is investment management policies and procedures. We reported in June 2011 that DOD's oversight process does not provide sufficient visibility into the military departments' investment management activities, including their reviews of systems that are in operations and maintenance mode and of smaller investments. As discussed in our information technology investment management framework and previous reports on DOD's investment management of its business systems, adequately documenting both policies and associated procedures that govern how an organization manages its information technology projects and investment portfolios is important because doing so provides the basis for rigor, discipline, and repeatability in how investments are selected and controlled across the entire organization. Until DOD fully defines missing policies and procedures, it is unlikely that the department's over 2,200 business systems will be managed in a consistent, repeatable, and effective manner that, among other things, maximizes mission performance while minimizing or eliminating system overlap and duplication. Indeed, there is evidence that DOD is not managing its systems in this manner. For example, DOD reported that of its 79 major business and other IT investments, about a third are encountering cost, schedule, and performance shortfalls requiring immediate and sustained management attention. In addition, we have previously reported that DOD's business system environment has been characterized by (1) little standardization, (2) multiple systems performing the same tasks, (3) the same data stored in multiple systems, and (4) manual data entry into multiple systems. Because DOD spends billions of dollars annually on its business systems and related IT infrastructure, the potential for identifying and avoiding the costs associated with duplicative functionality across its business system investments is significant.

In closing, I am encouraged by the recent efforts and commitment DOD's leaders have shown toward improving the department's financial management.
Progress we have seen includes recently issued guidance to aid DOD components in their efforts to address their financial management weaknesses and achieve audit readiness; standardized component financial improvement plans to facilitate oversight and monitoring; and the sharing of lessons learned. In addition, the DCMO and the DOD Comptroller have shown commitment and leadership in moving DOD's financial management improvement efforts forward. The revised FIAR strategy is still in the early stages of implementation, and DOD has a long way to go and many long-standing challenges to overcome, particularly with regard to sustained commitment, leadership, and oversight, before the department and its military components are fully auditable and DOD financial management is no longer considered high risk. However, the department is heading in the right direction and making progress. Some of the most difficult challenges ahead lie in the effective implementation of the department's strategy by the Army, Navy, Air Force, and DLA, including successful implementation of ERP systems and integration of financial management improvement efforts with other DOD initiatives. GAO will continue to monitor and provide feedback on the status of DOD's financial management improvement efforts. We currently have work in progress to assess implementation of the department's FIAR strategy and efforts toward auditability. As a final point, I want to emphasize the value of sustained congressional interest in the department's financial management improvement efforts, as demonstrated by this Panel's leadership. Mr. Chairman and Members of the Panel, this concludes my prepared statement. I would be pleased to respond to any questions that you or other members of the Panel may have at this time. For further information regarding this testimony, please contact Asif A. Khan, (202) 512-9095 or khana@gao.gov. Key contributors to this testimony include J. Christopher Martin, Senior-Level Technologist; F. Abe Dymond, Assistant Director; Gayle Fischer, Assistant Director; Greg Pugnetti, Assistant Director; Darby Smith, Assistant Director; Steve Donahue; Keith McDaniel; Maxine Hattery; Hal Santarelli; and Sandy Silzer.

The first three waves focus on achieving the DOD Comptroller's interim budgetary and asset accountability priorities, while the remaining two waves are intended to complete actions needed to achieve full financial statement auditability. However, the department has not yet fully defined its strategy for completing waves 4 and 5. Each wave focuses on assessing and strengthening internal controls and business systems related to the stage of auditability addressed in the wave. Wave 1—Appropriations Received Audit focuses on the appropriations receipt and distribution process, including funding appropriated by Congress for the current fiscal year and related apportionment/reapportionment activity by OMB, as well as allotment and sub-allotment activity within the department. Wave 2—Statement of Budgetary Resources Audit focuses on supporting the budget-related data (e.g., status of funds received, obligated, and expended) used for management decision making and reporting, including the Statement of Budgetary Resources. In addition to fund balance with Treasury reporting and reconciliation, other significant end-to-end business processes in this wave include procure-to-pay, hire-to-retire, order-to-cash, and budget-to-report.
Wave 3—Mission Critical Assets Existence and Completeness Audit focuses on ensuring that all assets (including military equipment, general equipment, real property, inventory, and operating materials and supplies) that are recorded in the department’s accountable property systems of record exist; all of the reporting entities’ assets are recorded in those systems of record; reporting entities have the right (ownership) to report these assets; and the assets are consistently categorized, summarized, and reported. Wave 4—Full Audit Except for Legacy Asset Valuation includes the valuation assertion over new asset acquisitions and validation of management’s assertion regarding new asset acquisitions, and it depends on remediation of the existence and completeness assertions in Wave 3. Also, proper contract structure for cost accumulation and cost accounting data must be in place prior to completion of the valuation assertion for new acquisitions. It involves the budgetary transactions covered by the Statement of Budgetary Resources effort in Wave 2, including accounts receivable, revenue, accounts payable, expenses, environmental liabilities, and other liabilities. Wave 5—Full Financial Statement Audit focuses efforts on assessing and strengthening, as necessary, internal controls, processes, and business systems involved in supporting the valuations reported for legacy assets once efforts to ensure control over the valuation of new assets acquired and the existence and completeness of all mission assets are deemed effective on a go-forward basis. Given the lack of documentation to support the values of the department’s legacy assets, federal accounting standards allow for the use of alternative methods to provide reasonable estimates for the cost of these assets. In the context of this phased approach, DOD’s dual focus on budgetary and asset information offers the potential to obtain preliminary assessments regarding the effectiveness of current processes and controls and identify potential issues that may adversely impact subsequent waves. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | As one of the largest and most complex organizations in the world, the Department of Defense (DOD) faces many challenges in resolving serious problems in its financial management and related business operations and systems. DOD is required by various statutes to (1) improve its financial management processes, controls, and systems to ensure that complete, reliable, consistent, and timely information is prepared and responsive to the financial information needs of agency management and oversight bodies, and (2) produce audited financial statements. Over the years, DOD has initiated numerous efforts to improve the department's financial management operations and achieve an unqualified (clean) opinion on the reliability of its reported financial information. These efforts have fallen short of sustained improvement in financial management or financial statement auditability. 
The Panel requested that GAO provide its perspective on the status of DOD's financial management weaknesses and its efforts to resolve them; the challenges DOD continues to face in improving its financial management and operations; and the status of its efforts to implement automated business systems as a critical element of DOD's Financial Improvement and Audit Readiness strategy. DOD financial management has been on GAO's high-risk list since 1995 and, despite several reform initiatives, remains on the list today. Pervasive deficiencies in financial management processes, systems, and controls, and the resulting lack of data reliability, continue to impair management's ability to assess the resources needed for DOD operations; track and control costs; ensure basic accountability; anticipate future costs; measure performance; maintain funds control; and reduce the risk of loss from fraud, waste, and abuse. DOD spends billions of dollars each year to maintain key business operations intended to support the warfighter, including systems and processes related to the management of contracts, finances, supply chain, support infrastructure, and weapon systems acquisition. These operations are directly impacted by the problems in financial management. In addition, the long-standing financial management weaknesses have precluded DOD from being able to undergo the scrutiny of a financial statement audit. DOD's past strategies for improving financial management were ineffective, but recent initiatives are encouraging. In 2005, DOD issued its Financial Improvement and Audit Readiness (FIAR) Plan for improving financial management and reporting. In 2009, the DOD Comptroller directed that FIAR efforts focus on financial information in two priority areas: budget and mission-critical assets. The FIAR Plan also has a new phased approach that comprises five waves of concerted improvement activities. The first three waves focus on the two priority areas, and the last two on working toward full auditability. The plan is being implemented largely through the Army, Navy, and Air Force military departments and the Defense Logistics Agency, lending increased importance to the committed leadership in these components. Improving the department's financial management operations and thereby providing DOD management and Congress more accurate and reliable information on the results of its business operations will not be an easy task. It is critical that current initiatives related to improving the efficiency and effectiveness of financial management that have the support of the DOD's Deputy Chief Management Officer and Comptroller continue with sustained leadership and monitoring. Absent continued momentum and necessary future investments, current initiatives may falter. Below are some of the key challenges that DOD must address for its financial management to improve to the point where DOD is able to produce auditable financial statements: (1) committed and sustained leadership, (2) effective plan to correct internal control weaknesses, (3) competent financial management workforce, (4) accountability and effective oversight, (5) well-defined enterprise architecture, and (6) successful implementation of the enterprise resource planning systems. |
On May 17, 1954, in its Brown v. Board of Education of Topeka decision, the United States Supreme Court unanimously held that state laws establishing “separate but equal” public schools for Blacks and Whites were unconstitutional. Ten years after this decision, a relatively small percentage of Black children in the Deep South attended integrated schools. The Civil Rights Act of 1964 prohibited discrimination in schools, employment, and places of public accommodation, and created a new role for federal agencies. Both the Department of Education’s (Education) Office for Civil Rights and the Department of Justice’s (Justice) Civil Rights Division’s Educational Opportunities Section have some responsibility for enforcing Title VI of the Civil Rights Act of 1964, which prohibits discrimination on the basis of race, color, or national origin in programs or activities that receive federal funding, including educational institutions. In addition, Title IV of the Act authorizes Education to provide technical assistance to states or school districts in preparing, adopting, and implementing desegregation plans, to arrange for training for school personnel on dealing with educational problems caused by desegregation, and to provide grants to school boards for staff training or hiring specialists to address desegregation. Title IV of the Act also authorizes Justice to file suit in federal court to enforce the civil rights of students in public education, and Title IX of the Act authorizes Justice to intervene—that is, become a party—in federal discrimination lawsuits alleging constitutional violations. Further, Justice has responsibility for enforcing the Equal Educational Opportunities Act of 1974, which, among other things, prohibits states from denying equal educational opportunity to individuals, including deliberate segregation of students on the basis of race, color, or national origin. To aid it in its enforcement and oversight of federal civil rights laws, Education also collects data from school districts about student characteristics and academic offerings, among other things, and compiles these data into a dataset referred to as the Civil Rights Data Collection (or Civil Rights Data). In school year 2011-12, for the first time in about a decade, Education collected these data from all K-12 public schools in the United States. It makes its Civil Rights Data available to the public so that researchers, states, and districts can conduct their own analyses. Beyond its enforcement of federal civil rights laws, Education funds several programs to support diversity in schools. Through its Magnet Schools Assistance Program, Education provides grants to local educational agencies to establish and operate magnet schools under an eligible desegregation plan. These grants are intended to assist in the desegregation of public schools by supporting the elimination, reduction, and prevention of minority group isolation in elementary and secondary schools with substantial proportions of minority group students. Additionally, through its Excellent Educators for All Initiative, Education launched a 50-state strategy to enforce a statutory provision that required states to take steps to ensure that poor and minority students are not taught by inexperienced, unqualified, or out-of-field teachers at higher rates than other students. Justice also monitors and enforces the implementation of any open school desegregation court order to which Justice is a party.
In court cases where school districts were found to have engaged in segregation or discrimination, courts may issue orders requiring the districts to take specific steps to desegregate their schools or otherwise comply with the law. These “desegregation orders” may include various requirements, such as creating special schools and redrawing attendance zones in such a way as to foster more racial diversity. A federal desegregation order may be lifted when the court determines that the school district has complied in good faith with the order since it was entered and has eliminated all vestiges of past unlawful discrimination to the extent practicable, which is commonly referred to as achieving unitary status. According to Justice officials, the onus is on the school district, not Justice, to seek unitary status because Justice cannot compel a district to ask the court to lift its order. In general, if a district seeks to have a desegregation order lifted, it must file a motion for unitary status with the court. According to information we reviewed, some districts may choose to keep their order in place, even though they have successfully desegregated. Among other things, these orders, according to experts, can help to ensure that schools will not resegregate. Some of the cases that originally ordered districts to desegregate their schools back in the 1960s and 1970s are still open today. School districts that are not subject to a desegregation order may voluntarily take actions to increase the racial diversity of their schools. Court decisions have also shaped such efforts. For example, in 2007, in Parents Involved in Community Schools v. Seattle School District No. 1, the U.S. Supreme Court struck down several school districts’ student assignment plans that relied on racial classification. The Court held that the districts failed to show that the use of race in their student assignment plans was necessary to achieve their goal of racial diversity, noting among other things that the racial classifications used had minimal effect on student assignments and that the districts had failed to consider race- neutral alternatives to increase diversity. The composition of the student population in U.S. K-12 public schools has changed significantly over time. In 1975, approximately a decade after enactment of the Civil Rights Act of 1964, Black students were the largest minority group in schools, comprising 14 percent of students and with a poverty rate of about 40 percent. In school year 2013-14, Hispanic students were the largest minority group in schools (25 percent Hispanic students compared to 16 percent Black students), and both groups continue to have poverty rates two to three times higher than the rates of White students. The link between racial and ethnic minorities and poverty is long-standing, as reflected in these data. According to several studies, there is concern about this segment of the population that falls at the intersection of poverty and minority status in schools and how this affects their access to quality education. Of the approximately 93,400 K-12 public schools in the United States, in school year 2013-2014 90 percent of them were traditional schools (which are often located within a neighborhood or community to serve students residing there), 7 percent were charter schools, and 3 percent were magnet schools. An extensive body of research over the past 10 years shows a clear link between schools’ socioeconomic (or income) composition and student academic outcomes. 
That is, the nationally representative studies we reviewed (published from 2004 to 2014) showed that schools with higher concentrations of students from low-income families were generally associated with worse outcomes, and schools with higher concentrations of students from middle- and high-income families were generally associated with better outcomes. For example, one study we reviewed showed that as the average family income of a school increased, the academic achievement and attainment of students of all racial backgrounds increased. The converse was also true. For example, another study found that students attending schools with lower average family income learned at a slower pace than students attending schools where income was higher. The studies, however, paint a more nuanced picture of the effects of schools’ racial composition on student academic outcomes. Specifically, while some of the studies found that having higher percentages of Black or Hispanic students resulted in weaker student outcomes, those effects were often confounded by other factors, including family income, and sometimes the racial composition of schools affected students differently. For example, one study concluded that the average family income of a school had a stronger and more negative effect on academic outcomes than its racial composition, but it also found that, after controlling for other factors, as the percentage of minority students increased in a school, Hispanic students were more likely to graduate from high school, and Asian students were less likely to graduate compared to White students. In another example, a 2010 study found that, after controlling for characteristics such as average family income in the neighborhood, the percentage of Black students in a school had no effect on the likelihood of high school graduation for students of all racial groups and had a small positive effect for all students’ chances of earning a bachelor’s degree. See appendix III for the list of studies we reviewed.

Over time, there has been a large increase in schools that are the most isolated by poverty and race. From school years 2000-01 to 2013-14 (most recent data available), both the percentage of K-12 public schools that were high poverty and comprised of mostly Black or Hispanic students (H/PBH) and the students attending these schools grew significantly. In these schools 75 to 100 percent of the students were eligible for free or reduced-price lunch, and 75 to 100 percent of the students were Black or Hispanic. As shown in figure 1, the percentage of H/PBH schools out of all K-12 public schools increased steadily from 9 percent in 2000-01 (7,009 schools) to 16 percent in 2013-14 (15,089 schools). See table 3 in appendix II for data separately breaking out these schools by the percent that are majority Black students and the percent that are majority Hispanic students. While H/PBH schools represented 16 percent of all K-12 public schools, they represented 61 percent of all high-poverty schools in 2013-14. See table 4 in appendix II for additional information on high-poverty schools. Further, at the other end of the spectrum, the percentage of schools that were low poverty and had low percentages of Black or Hispanic students (L/PBH) decreased by almost half over this same time period. In L/PBH schools, 0 to 25 percent of the students were eligible for free or reduced-price lunch, and 0 to 25 percent were Black or Hispanic. In addition, more students are attending H/PBH schools than in the past.
As shown in figure 2, the number of students attending H/PBH schools more than doubled, increasing by about 4.3 million students, from about 4.1 million to 8.4 million students (or from 10 percent to 17 percent of all K-12 public school students). Also, the percentage of Hispanic students is higher than that of Black students in these schools. Hispanic students tend to be “triply segregated” by race, income, and language, according to subject matter specialists we interviewed and, according to Education data, are the largest minority group in K-12 public schools. The U.S. Census Bureau projects that by 2044, minorities will be the majority in the United States. Further, among H/PBH schools, there is a subset of schools with even higher percentages of poverty and Black or Hispanic students, and growth in these schools has been dramatic. Specifically, according to our analysis of Education’s data, the number of schools where 90 to 100 percent of the students were eligible for free or reduced-price lunch and 90 to 100 percent of the students were Black or Hispanic grew by 143 percent from school years 2000-01 to 2013-14. In school year 2013-14, these schools represented 6 percent of all K-12 public schools, and 6 percent of students attended them (see appendix II for additional information on this subset of schools). H/PBH schools are largely traditional schools; however, the percentage of H/PBH schools that are traditional schools decreased from 94 percent to 81 percent from school years 2000-01 to 2013-14. In contrast, the percentage of such schools that were charter schools and magnet schools increased over that time period from 3 percent to 13 percent and from 3 percent to 5 percent, respectively (see fig. 3). In addition, with respect to the socioeconomic and racial composition of charter schools and magnet schools, both are disproportionately H/PBH schools. For example, in 2013-14, 13 percent of H/PBH schools were charter schools, while 5 percent of L/PBH schools were charter schools. To comply with federal law, some districts may have converted low-performing public schools to charter schools, which may have contributed, in part, to the growth among high-poverty and minority populations in charter schools. Further, 5 percent of H/PBH schools were magnet schools, while 2 percent of L/PBH schools were magnet schools. In terms of school type, the percentage of students who attended H/PBH schools decreased for traditional schools but increased among charter and magnet schools. For traditional schools the percentage of students dropped from 95 percent to 83 percent, even though there was an absolute increase in the number of students at H/PBH traditional schools (from 3.9 million to 6.9 million students, according to our analysis of Education’s data). The percentage of students who attended H/PBH charter schools increased from 1 percent to 9 percent (55,477 to 795,679 students), and those who attended H/PBH magnet schools increased from 4 percent to 8 percent (152,592 to 667,834) (see fig. 4). Research shows that lower levels of income were generally associated with worse student educational outcomes (see app. III). Our analysis of Education data also showed that schools that were highly isolated by poverty and race generally had fewer resources and disproportionately more disciplinary actions than other schools. 
As shown in figures 5 through 9, when comparing H/PBH schools to L/PBH schools and all other schools (i.e., schools that fall outside of these two categories), disparities existed across a range of areas in school year 2011-12, the most recent year for which these data were available. Further, disparities were even greater for the subset of H/PBH schools in which 90 to 100 percent of the students were eligible for free or reduced-price lunch and 90 to 100 percent of the students were Black or Hispanic, across most areas analyzed. In addition, comparing just the H/PBH traditional, charter, and magnet schools, we also found differences. (See app. II for additional data, including data comparing schools in which 90 to 100 percent of the students were eligible for free or reduced-price lunch and 90 to 100 percent of the students were Black or Hispanic to other schools.) As previously mentioned, although our analyses of Education’s data showed disparities across a range of different areas, these analyses, taken alone, should not be used to draw conclusions about the presence or absence of unlawful discrimination.

The Importance of Middle School Algebra, STEM Courses, and AP and GATE Programs
Several academic courses and programs are especially beneficial in preparing students for college and successful careers. Among these are middle school algebra; courses in Science, Technology, Engineering, and Mathematics (STEM) fields; Advanced Placement (AP) courses; and Gifted and Talented Education (GATE) programs. According to the Department of Education, access to algebra in middle school—that is, in 7th or 8th grade—positions students to complete higher-level courses in math and science in high school, which is critical to preparing students for college and careers. Similarly, access to a full range of STEM courses in high school, such as calculus, chemistry, and physics, is important in preparing students for college and careers in high-demand fields. In addition, rigorous academic programs, such as AP and GATE, can improve student achievement and build skills that help students move toward college- and career-readiness. AP courses help prepare high school students for college-level courses and, upon passing the AP exam, may enable students to receive college credit.

According to our analysis of Education’s data, lower percentages of H/PBH schools offered a range of math courses, with differences greatest for 7th or 8th grade algebra and calculus, and differences less evident for algebra II and geometry compared to L/PBH schools and all other schools (see fig. 5). According to Education, access to algebra in 7th or 8th grade positions students to complete higher-level courses in math and science in high school, which is critical to preparing students for college and careers. Among just the H/PBH schools, a higher percentage of magnet schools offered these four math courses. Between just H/PBH traditional schools and charter schools, a higher percentage of traditional schools offered 7th or 8th grade algebra and calculus, while a higher percentage of charter schools offered algebra II and geometry (see app. II for additional data). Similarly, with respect to science courses—biology, chemistry, and physics—our analyses of Education data show disparities, with a lower percentage of H/PBH schools offering these courses compared to L/PBH schools and all other schools, with differences most evident for physics.
Among just the H/PBH schools, a higher percentage of magnet schools offered all three science courses. Between just H/PBH traditional schools and charter schools, a higher percentage of charter schools offered biology and chemistry (see fig. 6). With respect to AP courses, there were also disparities, as a lower percentage of H/PBH schools offered these courses compared to L/PBH schools and all other schools. Differences were greatest between H/PBH schools (48 percent of these schools offered AP courses) and L/PBH schools (72 percent of these schools offered these courses). Among just the H/PBH schools, a higher percentage of magnet schools (83 percent) offered AP courses than did the traditional schools (50 percent) or charter schools (32 percent) (see fig. 7). In addition, among schools that offered AP courses, a lower percentage of students of all racial groups (Black, Hispanic, White, Asian, and Other) attending H/PBH schools took AP courses compared to students of all racial groups in L/PBH schools and all other schools. Specifically, among schools that offered AP courses, 12 percent of all students attending H/PBH schools took an AP course compared to 24 percent of all students in L/PBH schools and 17 percent of all students in all other schools. In addition, with respect to Gifted and Talented Education programs, or GATE, a lower percentage of H/PBH schools offered these programs compared to all other schools; however, a higher percentage of H/PBH schools offered GATE programs compared to L/PBH schools. Looking at just H/PBH schools, almost three-quarters of magnet schools and almost two-thirds of traditional schools offered this program, while less than one-fifth of charter schools offered it (see fig. 7). Students in H/PBH schools were held back in 9th grade, suspended (out-of-school), and expelled at disproportionately higher rates than students in L/PBH schools and all other schools. Specifically, although students in H/PBH schools were 7 percent of all 9th grade students, they were 17 percent of all students retained in 9th grade, according to our analysis of Education’s data (see fig. 8). Further, with respect to suspensions and expulsions, there was a similar pattern. Specifically, although students in H/PBH schools accounted for 12 percent of all students, they represented 22 percent of all students with one or more out-of-school suspensions and 16 percent of all students expelled (see fig. 9 and fig. 10). For additional information comparing students in schools with different levels of Black, Hispanic, and poor students, and by school type (traditional, charter, and magnet schools), see tables 20 and 21 in appendix II. H/PBH schools have large percentages of Hispanic students and, as expected, have a disproportionately greater percentage of students who were English Learners (EL). With respect to students with disabilities, our analysis of Education’s data showed small differences across two of the school groupings we analyzed. Specifically, L/PBH schools had 19 percent of all students and 17 percent of the students with disabilities, and all other schools had 69 percent of all students and 71 percent of the students with disabilities, according to our analysis of Education’s data. Further, while these comparisons show some slight differences by school grouping in the percent of students with disabilities, Education’s own analysis of these data by race showed there are differences among racial groups, with Black students overall being overrepresented among students with disabilities.
Because their schools were largely isolated by race and poverty or had experienced large demographic shifts, the three school districts we reviewed—located in the Northeast, South, and West—reported implementing a variety of actions in an effort to increase racial and socioeconomic diversity in their schools. However, in implementing these efforts aimed at increasing diversity, school districts struggled with providing transportation to students and obtaining support from parents and the community, among other things. School District in the Northeast. The district in the Northeast, an urban, predominantly low-income, Black and Hispanic district surrounded by primarily White suburban districts, had tried for over two decades to diversify its schools, according to state officials. Despite these efforts, continued racial isolation and poverty among schools in the district prompted a group of families to file a lawsuit against the state in state court, alleging that the education students received in the urban district was inferior to that received in the more affluent, largely White suburban schools. The plaintiffs argued that the state’s system of separate city and suburban school districts, which had been in place almost a century, led to racially segregated schools. The state supreme court ruled that the conditions in the district violated the state constitution, requiring the state to take action to diversify the urban district and its surrounding suburban schools. In response, the state and district took a variety of actions. In particular the state provided funding to build several new or completely renovated state-of-the-art magnet schools within the region to attract suburban students. To attract students from the city and suburbs, the magnet schools used highly specialized curriculum. For example, one newly renovated environmental sciences magnet school we visited offered theme-based instruction that allowed students to work side-by-side with resident scientists to conduct investigations and studies using a variety of technologies and tools. Other magnet schools in this area offered different themes, such as aerospace and engineering or the performing arts. To further facilitate its efforts at diversity, the state provided funding for transportation to magnet schools, enabling suburban and urban students to more easily attend these schools. In addition, according to officials, consistent with the court order, the state required the district’s magnet schools to maintain a student enrollment of no more than 75 percent minority students. However, the district faced several challenges with respect to its magnet schools. For example, officials said maintaining a certain ratio of non- minority students posed challenges. According to the district superintendent, even if there were openings, many minority students in the district were unable to attend certain magnet schools because doing so would interfere with the ratio of minorities to non-minorities the state was attempting to achieve. In addition, because assignment to magnet schools was done through a lottery, students were not guaranteed a slot in a magnet school. Officials told us that in those cases where there was not enough space in a magnet school or where admitting more minority students would disrupt the ratio of minorities to non-minorities, these students would attend their traditional neighborhood school. 
Because the lottery did not guarantee all students in the urban district a magnet school slot, a student also had to designate four other school options. However, without a similar infusion of funds that was available for the magnet schools, officials we spoke to said that the neighborhood schools in the urban district declined. As a result, families that did not gain access to well-supported magnet schools resented resources spent on these schools, according to officials. Also, because the neighborhood schools were not required to maintain a specified percentage of minority students like the magnets, they, as well as the charter schools in the urban district, continued not to be very diverse, according to officials. The state also enabled students from the urban district to enroll in traditional schools (non-magnet) in the suburbs by drawing four attendance zones around the urban district. Creation of these zones reduced bus travel times for students and facilitated relationships between parents in the community whose children were attending the same suburban school, according to officials. Parents could apply for these traditional, suburban schools through the lottery, selecting up to five participating suburban school districts that are designated within their zone. If a student was not placed in one of these schools, they would attend a school in their urban district. In addition to providing transportation so that students could attend suburban schools, the state offered suburban schools grants of up to $8,000 per student, an academic and social support grant of up to $115,000 per school district, and a capital funds grant of up to $750,000 per school district. Despite these incentives, according to officials we interviewed, some families chose not to enroll their children in the suburban schools and instead opted to stay in close-by neighborhood schools, dampening the effects of the efforts to diversify. School District in the South. The district in the South had previously been under a federal desegregation order and experienced major demographic changes going from a district serving primarily Black and White students to one serving many other races and ethnicities as well as immigrant populations. Students in the district represented about 120 different nationalities and languages, and according to officials, this included students from Somalia and Coptic Christians and Kurds from Egypt. To address the major demographic changes and help achieve diversity across more schools in the district, the district did away with its previous school attendance zones, which had generally assigned students to schools located in their geographic area or neighborhood. In its place, the district created new student assignment zones for its schools, and also hired an outside expert to help implement a new diversity plan. Specifically, under the new student assignment plan, the new zones were intended to provide greater socioeconomic and racial diversity nearer to where students lived, according to school district officials we interviewed. Under the new plan, parents were allowed to choose among schools within their attendance zones, which allow greater choice of schools for children closer to their neighborhoods. The plan also supported students who chose to attend schools outside of these zones by providing public transit passes, while school bus transportation was provided to students who attended schools within their attendance zones. 
According to documents we reviewed, this district experienced challenges implementing its revised student assignment plan. Parents’ choices of schools resulted in resegregation of students, prompting a complaint leading to a Department of Education investigation, as well as a federal lawsuit. According to Education officials, their investigation of the complaint found that after the school choice period was completed and students were enrolled for the school year, there was a significant increase in racial isolation in some of the schools in particular urban and suburban areas. In addition, several families and a nonprofit organization filed a federal lawsuit alleging that the implementation of the school district’s revised student assignment plan was causing unconstitutional racial segregation in the district. The court upheld the plan, finding that although the plan had caused a “segregative effect” in the district, there was no discriminatory intent by the officials in adopting and implementing the plan. To address the concerns raised in the lawsuit, the district hired an expert to refine and develop a school diversity plan. Under this diversity plan, student diversity was defined broadly, to include language and disability, as well as race/ethnicity and income (see text box). However, even after implementing the new diversity plan, officials told us that some families in their district sent their children to private schools, rather than attend the district’s public schools. These officials also said that, in their opinion, some White families in their district were less eager to have their children attend diverse schools.

Diversity Plan in a School District in the South
According to district documents, a school in the district is “diverse” if it meets at least one of the following measures:
- enrolls multiple racial/ethnic groups, and no single group represents more than 50 percent of the school’s total enrollment;
- enrolls at least three racial/ethnic groups, and each represents at least 15 percent of the school’s total enrollment; or
- enrolls at least two racial/ethnic groups, and each represents at least 30 percent of the school’s total enrollment;
and at least two of the following measures:
- percentage of students eligible for free or reduced meals is at least two-thirds the average of other schools,
- percentage of English Learners is at least two-thirds the average of other schools, or
- percentage of students with a disability is at least two-thirds the average of other schools.
The district measures schools within their grade tier level. The typical grade tier levels are elementary school (Pre-K–4th grade), middle school (5th-8th grade), and high school (9th-12th grade). (A brief sketch below illustrates how these criteria combine.)

As part of the new diversity plan, the district is also hiring staff who reflect, to the extent possible, the diversity of the student body. Further, when making decisions about a range of matters, such as drawing school boundary lines, placement of new schools, providing student transportation, and recruiting and training school staff, the plan calls for the district to consider the impact of those decisions on diversity. In addition, the district is in the process of allocating school resources with a goal of better reflecting the different needs of students in the schools (e.g., English Learners).
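To make the compound logic of this diversity definition easier to follow, the short sketch below encodes the text box criteria in Python. It is an illustration only, not the district's actual implementation: the field names (group_shares, frl_pct, el_pct, disability_pct), the data layout, and the use of district-wide averages to stand in for "the average of other schools" are assumptions made for this example.

```python
# Illustrative sketch of the diversity criteria described in the text box above.
# All field names and the data layout are hypothetical.

def meets_enrollment_measure(group_shares):
    """group_shares: each racial/ethnic group's share of total enrollment (0-1)."""
    present = [s for s in group_shares if s > 0]
    no_majority = len(present) >= 2 and max(present) <= 0.50
    three_groups_at_15 = sum(s >= 0.15 for s in present) >= 3
    two_groups_at_30 = sum(s >= 0.30 for s in present) >= 2
    # At least one of the three enrollment-composition measures must be met.
    return no_majority or three_groups_at_15 or two_groups_at_30

def meets_additional_measures(school, avg_of_other_schools):
    """Free or reduced-price meal, English Learner, and disability percentages,
    each compared with two-thirds of the average of other schools (in practice,
    the district compares schools within the same grade tier)."""
    checks = [
        school["frl_pct"] >= (2 / 3) * avg_of_other_schools["frl_pct"],
        school["el_pct"] >= (2 / 3) * avg_of_other_schools["el_pct"],
        school["disability_pct"] >= (2 / 3) * avg_of_other_schools["disability_pct"],
    ]
    # At least two of the three additional measures must be met.
    return sum(checks) >= 2

def is_diverse(school, avg_of_other_schools):
    return (meets_enrollment_measure(school["group_shares"])
            and meets_additional_measures(school, avg_of_other_schools))

# Hypothetical example: four enrollment groups, no group above 50 percent, and
# above-threshold shares of students eligible for free or reduced-price meals,
# English Learners, and students with a disability.
school = {"group_shares": [0.45, 0.30, 0.20, 0.05],
          "frl_pct": 55.0, "el_pct": 12.0, "disability_pct": 9.0}
averages = {"frl_pct": 60.0, "el_pct": 15.0, "disability_pct": 12.0}
print(is_diverse(school, averages))  # True for this hypothetical school
```

In practice, the district applies its definition within each grade tier and with its own data definitions, which this sketch does not attempt to reproduce.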
School District in the West. The district we visited in the West is located in a state with an “open-enrollment” law, which gives parents a significant degree of choice in determining the schools their children attend, including schools outside of their neighborhoods. District officials told us that, in their opinion, as a result of the state law, White students often choose not to attend certain schools in the district, leaving a largely Hispanic and low-income student population in those schools and prompting the district to implement several actions in an attempt to diversify. Specifically, the district, led by the school board, converted some of its existing public schools into magnet schools. Further, to meet diverse student needs, the state provided additional funds for high-needs students, such as those eligible for free or reduced-price lunch, English Learners, or foster care youth. According to officials, this district struggled to diversify because parents have a significant degree of choice in where to enroll their children, magnet schools give priority to children in their neighborhood, and funding was limited for some schools. After the district implemented its diversity efforts, district officials told us that, in their opinion, some White families continued to choose schools outside the district and many other families chose to keep their children in neighborhood schools where diversity was low. In addition, the magnet schools gave priority to neighborhood children, which further hampered attempts at diversity. Further, although the district converted some of its schools to magnet schools to attract students, it provided no transportation for students, and some of the schools were converted without any upgrades to the facilities, as state funding for education declined due to an economic recession. One principal we interviewed at a converted magnet school expressed frustration that his school did not have the proper signage or visual appeal to attract families. Further, principals and other school district officials we interviewed said that they struggled to reach capacity in some of their schools. In contrast, one of the magnet schools we visited had a waiting list at the time of our review and was a state-of-the-art facility, with Wi-Fi, computers for every student, and 3D printers. Unlike the other magnet schools, this school had been operating as a magnet for nearly 20 years. Also in contrast to the other magnet schools, this school received most of its funding from private donations at a level significant enough to fund its technology focus.

Education has taken a range of actions to address racial discrimination in schools. For example, Education has conducted investigations on its own initiative as well as investigations in response to complaints or reports of possible discrimination. Depending on the outcome of these investigations, Education may enter into agreements, called resolution agreements, which establish the actions a school or school district agrees to take to address issues found during an investigation. Education also may withhold federal funds if a recipient is in violation of the civil rights laws and Education is unable to reach agreement with the parties, although officials told us that this rarely happens. Education’s agency-initiated investigations, which are called compliance reviews, target problems that appear particularly acute.
Education’s Office for Civil Rights launched 32 compliance reviews in fiscal years 2013 and 2014 across a range of issues related to racial discrimination. For example, in 2014 Education completed a compliance review of an entire district’s disciplinary practices. As a result of that review, Education found that Black students were disproportionately represented among students subject to suspensions, other disciplinary actions, and referrals to law enforcement and that Black students were disciplined differently from White students for similar offenses. In one instance, Education cited an example of an 8th-grade White student who was given detention for leaving class without permission while an 8th-grade Black student was suspended 3 days for skipping a class even though this student had no such prior incidents. Education entered into a resolution agreement with the district to resolve the issues it identified, which, among other things, required the district to collect data to monitor its disciplinary practices for potential discrimination. The agreement also required the district to assign a staff person responsible for ensuring that disciplinary practices are equitable and to provide training for teachers and staff. In 2013, another compliance review initiated by Education of a district found that Black and Hispanic students were under-represented in high school honors and AP courses, as well as elementary and middle school advanced courses and gifted and talented programs. To resolve these issues, Education entered into a resolution agreement with the district which, among other things, required the district to identify potential barriers to student participation in these courses, such as eligibility and selection criteria, hire a consultant to help address this issue, and provide training for district and school staff on how to encourage and retain student participation in these courses. The agreement also required the district to collect and evaluate data on an ongoing annual basis of its enrollment policies, practices, and procedures to determine whether they are being implemented in a non- discriminatory manner. Further, Education has also conducted more narrowly-focused investigations in response to complaints of discrimination, which can be filed by anyone who believes that an educational institution that receives federal funds has discriminated against someone on the basis of race, color, or national origin. According to Education, it received about 2,400 such complaints in fiscal year 2014. For example, in response to a 2011 complaint alleging that a high school’s football coach subjected Black players to racial harassment and that the district failed to address it, Education launched an investigation of the district. Education found that the football coach directed racial slurs at Black players, and players who complained were harassed by their fellow students and staff, who supported the coach. Education also found that the coach did not assist Black players with obtaining athletic scholarships, even stating that athletic scholarships are for White players and financial aid is for Black players. To resolve these findings, Education negotiated a resolution agreement with the district that required the district to review and revise its harassment and discrimination policies and take appropriate steps to remedy the harassment by the coach, including appointing a new coach and offering counseling for the students. 
Education has also issued guidance to schools on their obligations under the federal civil rights laws, and its decision to issue such guidance may be prompted by factors such as its findings from investigations or developments in case law. For example, Education issued guidance jointly with Justice in 2014 on school discipline to assist states, districts, and schools in developing practices and strategies to enhance the atmosphere in the school and ensure those policies and practices comply with federal law. The guidance included a letter on applicable federal civil rights laws and discipline that describes how schools can meet their obligations under federal law to administer student discipline without discriminating against students on the basis of race, color, or national origin. Also in that year, Education issued guidance addressing the issue of equitable access to educational resources. Specifically, in its guidance, Education states that chronic and widespread racial disparities in access to rigorous courses, academic programs, and extracurricular activities and in other areas “hinder the education of students of color today” and strongly recommends that school districts proactively assess their policies and practices to ensure that students are receiving educational resources without regard to their race, color, or national origin. In addition, Education issued guidance jointly with Justice in 2011 following the 2007 U.S. Supreme Court decision in Parents Involved that addressed districts’ voluntary use of race to diversify their schools. This guidance sets forth examples of the types of actions school districts could take to diversify their schools or avoid racial isolation, consistent with this decision and the federal civil rights laws. It states that districts should first consider approaches that do not rely on the race of individual students, for example, by using race-neutral criteria such as students’ socioeconomic status, before adopting approaches that rely on individual racial classifications. For approaches that do consider a students’ race as a factor, districts should ensure their approach closely fits their goals and considers race only as one factor among other non-racial considerations. Further, Education also offers technical assistance, through various means, such as conducting webinars, sponsoring and presenting at conferences, and disseminating resource guides to schools and school districts. For example, at a 2015 magnet school workshop, Education officials discussed the benefits to improving diversity in the schools and the ramifications of relevant court decisions related to diversifying schools. They also offered examples of actions schools can take consistent with these court decisions to promote greater school diversity. Education uses its Civil Rights Data to identify patterns, trends, disparities, and potential discrimination by performing analysis of particular groups of students, such as by race and ethnicity, and could further enhance its current efforts by also more routinely analyzing data by school types and groupings. Analyzing data by schools may help discern patterns and trends occurring in different types of schools, such as the disparities our analysis revealed in high-poverty schools comprised of mostly Black or Hispanic students. 
For example, through its analysis of its Civil Rights Data, Education identified an issue nationwide with disproportionately high suspension and expulsion rates of certain groups of students by race, among other characteristics. Education uses these analyses to inform its investigations and guidance. For example, its analysis of its Civil Rights Data, which showed disparities across groups of students by race and other factors in students’ access to academic courses (such as algebra and AP courses), helped inform an investigation and resulted in guidance. According to Education, it typically analyzes its data by student groups to help it identify disparities or potential discrimination against students on the basis of race, color, or national origin, consistent with the civil rights laws it enforces. While these analyses, by specific groups of students, are important to its enforcement responsibilities, by also more routinely analyzing data by different types and groupings of schools, other patterns might be revealed, as our own analyses show. In addition, although socioeconomic status is not a protected class under the U.S. Constitution or federal civil rights laws, research has shown that poverty (socioeconomic status) and race overlap (see app. III). By examining these two phenomena in tandem, Education has another lens for examining any possible issues at the school level. Education has used its Civil Rights Data to publish a 2014 “data snapshot” on school discipline that highlighted disparities by race, ethnicity, and English Learner status, among other characteristics. To illustrate where Education might enhance such an analysis, our analysis of the same data also found disparities and differences between groups of schools—with disparities most evident for H/PBH schools. Further, Education’s data snapshot on college and career readiness, also based on its analysis of Civil Rights Data, showed disparities in access to core subjects, such as algebra I and II, geometry, biology, chemistry, and AP courses by various student groups. Again, analyzing the same data, we also found these disparities, but we found them among schools grouped by level of poverty and among Black and Hispanic students, with disparities most acute among H/PBH schools. In addition, our analyses showed further disparities when we grouped schools by types— traditional, charter, and magnet schools. For example, one of our analyses of Education’s school year 2011-12 data showed that, among H/PBH schools, a higher percentage of magnet schools (83 percent) offered AP courses than did the traditional schools (50 percent) or charter schools (32 percent). While Education’s analyses of its Civil Rights Data provide critical information to aid its enforcement of civil rights laws, also analyzing these data by different groupings and types of schools could provide Education with an additional layer of information that, as we found, further illuminates disparities and could enhance their efforts. Federal internal control standards state that agencies should use operational data to ensure effective and efficient use of agency resources. By analyzing its data by groupings and types of schools, Education has an opportunity to enhance its efforts and better inform guidance and technical assistance to the groups and types of schools that need it most. The Department of Justice’s Educational Opportunities Section of the Civil Rights Division has taken several actions to address racial discrimination against students. 
Similar to Education, Justice conducts investigations in response to complaints or reports of possible violations. Depending on the outcome of its investigation and the circumstances of the case, Justice may take a number of actions, which could include entering into a settlement agreement with the district or initiating litigation to enforce the civil rights laws. For example, Justice investigated complaints in 2011 alleging that a student had been subject to racial harassment at a high school, which included receiving race-based death threats and retaliation for reporting the harassment. The investigation found that the district failed to adequately investigate, address, and prevent recurrence of the harassment, which resulted in the student leaving the district out of fear for her safety, and that other Black students had experienced racial harassment and retaliation. Justice entered into a settlement agreement with the district that included making revisions to the policies and procedures for handling racial harassment complaints. Justice has also intervened, that is, joined in and become a party, in discrimination lawsuits. For example, in 2000 Justice intervened in a civil rights lawsuit against a district, alleging the district failed to appropriately address harassment of a pair of students by other students. The alleged harassment included racial slurs, including some within earshot of teachers, and racial graffiti on walls and desks. Further, one of the students was the victim of a racially motivated assault. The parties negotiated an agreement, which was adopted by the court as an order, that required the district to, among other things, maintain written records of each harassment allegation received, investigation conducted, and corrective action taken by the district to ensure a consistent and effective review of allegations. Further, as previously mentioned, Justice has issued guidance jointly with Education to ensure states and school districts understand their responsibilities to prevent and address racial discrimination in schools. Justice also monitors and enforces open federal school desegregation cases where Justice is a party to the litigation. According to Justice officials, as of November 2015 there were 178 of these cases. Justice officials told us they routinely work with districts (and other parties to the desegregation case) to close out those cases where the school district has met its statutory and constitutional duty to desegregate. For example, in January 2015, Justice completed its compliance monitoring visits for a school district that had been operating under a series of consent orders since 1970, most recently one from 2012. Justice determined that the district had complied with the terms of the desegregation order. The parties agreed, and in May 2015 the court declared the district unitary, thus allowing the desegregation order to be lifted. Justice has also recently engaged in active litigation in several open desegregation cases. For example, in 2011, as a party to another long-standing desegregation case, Justice filed a motion asking the court to find that the district had violated its obligations under several prior desegregation orders. In 2012, the court determined, among other things, that although the district had made significant progress, two predominantly Black schools had never been desegregated, and the court ordered the district to draft a plan to improve integration at those schools.
Justice officials said that they initiate action on an open desegregation case in response to various factors, including requirements from the court, complaints or inquiries they receive, or issues raised in media reports. According to Justice officials, the agency also conducts agency-initiated “affirmative reviews” of districts under open desegregation orders, which could include requests for additional supplemental data, site visits, and initiation of negotiations if compliance issues are identified, among other things. As noted above, Justice is responsible for monitoring and enforcing the 178 open federal desegregation orders to which it is a party—many of which originated 30 or 40 years ago. However, it does not systematically track important summary information on these orders. As a consequence, the potential exists that some cases could unintentionally languish for long periods of time. For example, in a 2014 opinion in a long-standing desegregation case, the court described a long period of dormancy in the case and stated that lack of activity had taken its toll, noting, among other things, that the district had not submitted the annual reports required under the consent order to the court for the past 20 years. Although the court found certain disparities in educational programs and student test results, based on the record at the time it was unable to determine when the disparities arose or whether they were a result of discrimination. The court noted that had Justice “been keeping an eye” on relevant information, such as disparities in test scores, it could have brought it to the court’s attention more quickly, allowing the court and district to address the issue in a timely fashion. While Justice officials told us that they maintain a system to track certain identifying information about each case, which includes the case name, the court docket number, the identification number generated by Justice, and the jurisdiction where the case originated, officials were unable to provide more detailed summary information across all of the open cases, such as the date of the last action, or the nature of the last action taken. Justice officials said that to obtain such information they would have to review each individual case file, some of which are voluminous and many of which are not stored electronically. Thus, Justice officials were unable to respond with specificity as to when or the nature of the last action taken on the open orders within broad time frames of 5 years, 10 years, or 20 years ago. According to Justice’s Strategic Plan, the agency has a goal to protect the rights of the American people and enforce federal law. This Plan includes an objective for implementing this goal—to promote and protect American civil rights by preventing and prosecuting discriminatory practices. According to this Plan, Justice seeks to address and prevent discrimination and segregation in elementary and secondary schools. The Plan states that the extent to which societal attitudes and practices reflect a continuing commitment to tolerance, diversity, and equality affect the scope and nature of Justice’s work. In addition, federal internal control standards state that routine monitoring should be a part of normal operations to allow an agency to assess how the entity being monitored is performing over time. These standards also state that agencies should use information to help identify specific actions that need to be taken and to allow for effective monitoring of activities. 
Specifically, the standards state that information should be available on a timely basis to allow effective monitoring of events and activities and to allow prompt reaction. Also, the standards state that information should be summarized and presented appropriately and provide pertinent information while permitting a closer inspection of details as needed. In addition, the standards state that agencies should obtain any relevant external information that may affect achievement of missions, goals, and objectives. Without a systematic way to track key information about all of the open desegregation cases, such as the date of the last action or receipt of required reports, Justice may lack the summary information needed to monitor the status of its orders. This may affect the agency’s ability to effectively manage its caseload and to promote and protect civil rights.

More than 60 years after the Brown decision, our work shows that disparities in education persist and are particularly acute among schools with the highest concentrations of minority and poor students. Further, Black and Hispanic students are increasingly attending high-poverty schools where they face multiple disparities, including less access to academic offerings. Research has shown a clear link between a school’s poverty level and student academic outcomes, with higher poverty associated with worse educational outcomes. While the districts we contacted in different areas across the nation have efforts under way to help improve the quality of education for students, the Departments of Education and Justice have roles that are critical because they are responsible for enforcing federal laws that protect students from racial discrimination and ensuring schools and districts provide all students with equitable access. In doing so, both agencies can better leverage data available to them to aid their guidance, enforcement, and oversight efforts. Education has ongoing efforts to collect data that it uses to identify potential discrimination and disparities across key groups of students, but it has not routinely analyzed its data in a way that may reveal larger patterns among different types and groups of schools. As a result, the agency may miss key patterns and trends among schools that could enhance its efforts. In addition, Justice is a party to 178 federal desegregation orders that remain open, but Justice does not track key summary information about the orders that would allow it to effectively monitor their status. Without systematically tracking such information, the agency may lack information that could help in its enforcement efforts.

We recommend that the Secretary of Education direct Education’s Office for Civil Rights to more routinely analyze its Civil Rights Data Collection by school groupings and types of schools across key elements to further explore and understand issues and patterns of disparities. For example, Education could use this more detailed information to help identify issues and patterns among school types and groups in conjunction with its analyses of student groups.

We recommend that the Attorney General of the United States direct the Department of Justice’s Civil Rights Division to systematically track key summary information across its portfolio of open desegregation cases and use these data to inform its monitoring of these cases. Such information could include, for example, dates significant actions were taken or reports received.
We provided a draft of this report to the Departments of Education and Justice for their review and comment. Education’s written comments are reproduced in appendix IV, and Justice’s written comments are reproduced in appendix V. Education also provided technical comments, which we incorporated into the report, as appropriate. In its written comments, Education stated that its Office for Civil Rights already analyzes its Civil Rights Data Collection (Civil Rights Data) in some of the ways we recommend, and in light of our recommendation, it will consider whether additional analysis could augment the Office for Civil Rights’ core civil rights enforcement mission. Specifically, Education said it is planning to conduct some of the analysis suggested in our recommendation for future published data analysis based on the 2013- 2014 Civil Rights Data and will consider whether additional analysis would be helpful. Education also stated it is committed to using every tool at its disposal to ensure all students have access to an excellent education. In addition, Education stated that when appropriate, the Office for Civil Rights often uses the types of analyses recommended by GAO in its investigations. It also noted that racial disparities are only one potential element for investigations of potential discrimination. Education also said that it publishes reports based on the Civil Rights Data, referring to the Office for Civil Rights’ published data snapshots on College and Career Readiness and Teacher Equity, which we reviewed as part of this study. We found they do provide some important information about schools with high and low levels of minority populations. Further, Education stated that the disaggregations of the data that we presented in our report were the type of specialized analysis that the Office for Civil Rights encourages users outside the agency to explore. While we recognize the important ways Education is currently using its data and the additional analyses it is considering and planning in the future, it was our intent in making the recommendation that Education more routinely examine the data for any disparities and patterns across a key set of data elements by the school groupings we recommended. Further, while we support the engagement of researchers and other interested stakeholders outside the agency, we also believe that Education should conduct these analyses as part of its mission to provide oversight. We believe that by doing so, Education will be better positioned to more fully understand and discern the nature of disparities and patterns among schools. In light of Education’s response about its data analysis efforts, which we agree are consistent with good practices to use agency resources effectively and efficiently, we modified the recommendation and report accordingly. We now specify in the recommendation that Education should “more routinely” analyze its Civil Rights Data across key elements in the ways recommended by our report to help it identify disparities among schools. We believe that such analysis will enhance current efforts by identifying and addressing disparities among groups and types of schools—helping, ultimately, to improve Education’s ability to target oversight and technical assistance to the schools that need it most. In its written comments, Justice stated it believes its procedures for tracking case-related data are adequate. 
Nevertheless, consistent with our recommendation, Justice said it is currently developing an electronic document management system that may allow more case-related information to be stored in electronic format. Justice agreed that tracking information concerning its litigation docket is important and useful and that it shares our goal of ensuring it accurately and adequately tracks case-related information. However, Justice also stated that our report fails to appreciate the extensive amount of data the agency maintains on its desegregation cases, which it maintains primarily for the purpose of litigation. Justice stated that it tracks and preserves information received from school districts and all case-related correspondence and pleadings, and because the data it collects are used to litigate each individual case, it does not track such data across cases. We understand Justice’s need to maintain voluminous case-specific evidentiary files, some of which are maintained in hard copy. It was out of recognition for the extensive nature of these files that we recommended Justice also have a way to track key, summary information across its cases. Such summary information would allow for timely and effective monitoring and for prompt reaction, in accordance with federal standards for internal control. Further, Justice said various terms in our recommendation, such as “systematically” or “key” were not clear or well defined. In deference to the agency’s expertise, in making the recommendation, we intentionally used broad language that would allow Justice to make its own judgments about what would best serve its mission. Justice also said it is concerned that the report could be read to suggest that racial disparities within a public school district constitute per se evidence of racial discrimination. Although our report does not make this statement, we have added additional language to further clarify that data on disparities alone are not sufficient to establish unlawful discrimination. With respect to the report’s description of a selected desegregation case, Justice stated it was concerned with the emphasis we placed on one comment in the lengthy court opinion (“…if Justice had ‘been keeping an eye’ on relevant information…”), which it said was based solely on the absence of entries on the court’s docket sheet. Justice said in this case and in many others, it is engaged in a range of related activities, such as site visits and settlement agreements, which are not recorded on the courts’ docket sheets. We appreciate that courts may not be aware of all of Justice’s activities in any one case; however, we believe this case illustrates how important it is for Justice to have timely information about its cases and how better information tracking could help the agency better manage and oversee its caseload. Also, with respect to this case, Justice commented that the existence of disparities in test scores alone is not sufficient to trigger a remedy under Justice’s legal authority, and Justice must consider multiple factors before taking action in a case. We have clarified in the report that data on disparities taken alone are insufficient to establish unlawful discrimination. While we understand that tracking such information may not necessarily trigger action by Justice in any particular case, the case described was selected to serve as an example of the potential benefits of more proactive tracking of information in these cases. 
Further, Justice said it was concerned the report could be read to suggest that some cases have remained dormant or languished for long periods of time as a result of Justice’s tracking system, without sufficient appreciation for the responsibilities of the school districts and courts in advancing and resolving the cases (such as by achieving unitary status). In the draft report on which Justice commented, we stated that the onus is on the district, not Justice, to seek unitary status. We have amended the final report to state this more prominently. However, while we acknowledge the key roles of the districts and the courts in resolving and advancing a desegregation case, the focus of our report is on the federal role, and Justice, too, plays an important role in litigating these cases—a role we believe would be enhanced by improving its tracking of information about the cases. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Education and the Attorney General, and other interested parties. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (617) 788-0580 or nowickij@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. The objectives of this study were to examine: (1) how the percentage of schools with high percentages of poor and Black or Hispanic students has changed over time and the characteristics of these schools, (2) why and how selected school districts have implemented actions to increase student diversity, and (3) the extent to which the Departments of Education (Education) and Justice (Justice) have taken actions to identify and address issues related to racial discrimination in schools. To answer our objectives, we analyzed the (1) poverty level of schools and (2) Black and Hispanic student composition of schools, as a basis for grouping and comparing schools. We measured poverty level at the school level using the percentage of students eligible for free or reduced-price lunch. A student is generally eligible for free or reduced-price lunch based on federal income eligibility guidelines that are tied to the federal poverty level and the size of the family. We focused on Black and Hispanic students because they are the two largest minority groups in U.S. K-12 public schools, and existing research has suggested that these groups experience disparities in school. The thresholds and measure of poverty discussed here and below were commonly used in the literature and also align with how Education analyzes its data. We categorized schools for our analysis based on both the percent of students in a school eligible for free or reduced-price lunch and the percent of Black or Hispanic students collectively in a school (see table 1). We divided our data into three school groups as follows: 1. Schools whose student populations were comprised of 0 to 25 percent students eligible for free or reduced-price lunch (i.e., low-poverty) and 0 to 25 percent Black or Hispanic students (referred to as “L/PBH schools”), 2. 
Schools whose student populations were comprised of 75 to 100 percent students eligible for free or reduced-price lunch (i.e., high-poverty) and 75 to 100 percent Black or Hispanic students (referred to as “H/PBH schools”), and 3. Schools that fall outside of these two categories (referred to as “all other schools”). Because the literature also suggests that schools with even higher levels of Blacks and Hispanics and poverty face disparities that are even more acute, we also analyzed the group of schools in which 90 to 100 percent of the students were eligible for free or reduced-price lunch and 90 to 100 percent of the students were Black or Hispanic. These schools represent 6 percent of all K-12 public schools and are included in appendix II for further comparison. Our analyses of Education’s data in this report are intended to describe selected characteristics of these schools; they should not be used to make conclusions about the presence or absence of unlawful discrimination. To describe how the percentage and characteristics of schools with different levels of poverty among students and Black or Hispanic students have changed over time, we analyzed schools with both the highest and lowest percentages of poverty and Blacks or Hispanics and schools with all other percentages of these groups (see table 1). We used Education’s Common Core of Data (CCD) from school years 2000-01, 2005-06, 2010-11, and 2013-14, the most recent year of data available for these analyses. CCD is administered by Education’s National Center for Education Statistics, which annually collects non-fiscal data about all public schools, as well as fiscal and non-fiscal data on public school districts and state education agencies in the United States. The data are supplied by state education agency officials describing their schools and school districts. Data elements include name, address, and phone number of the school or school district; demographic information about students and staff; and fiscal data, such as revenues and current expenditures. To assess the reliability of these data, we reviewed technical documentation and interviewed relevant officials from Education. Based on these efforts, we determined that these data were sufficiently reliable for our purposes. The data in the CCD represent the full universe of all U.S. K-12 public schools. To further understand the trends underlying the growth or decline of these categories of schools, we examined whether any variation in growth existed by region (Northeastern, Midwestern, Southern, and Western areas of the United States) and school type (traditional neighborhood schools, charter schools, and magnet schools). For our analysis of the CCD, we excluded schools that did not report information on (1) free or reduced-price lunch, which we used as a proxy to categorize the poverty level of the school, or (2) the number of Black or Hispanic students, which we used to categorize the level of Black or Hispanic students in the school. For school year 2000-01, we included 78,194 schools and excluded 16,520 schools; for school year 2005-06, we included 91,910 schools and excluded 8,717 schools; for school year 2010-11, we included 94,612 schools and excluded 7,413 schools; and for school year 2013-14, we included 93,458 schools and excluded 7,633 schools. Because CCD collects information on the universe of schools, these exclusions would not affect our overall findings. 
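The grouping and exclusion rules described above are straightforward to apply to a school-level file. The sketch below illustrates the logic in Python; the column names (frl_pct, black_hispanic_pct) are placeholders we have made up for illustration, and the cutoffs simply restate the thresholds in table 1 rather than reflecting any code Education or GAO actually used.

```python
import pandas as pd

# Illustrative school-level extract; the column names are assumptions, not CCD field names.
schools = pd.DataFrame({
    "school_id": [1, 2, 3, 4],
    "frl_pct": [10.0, 82.0, None, 40.0],             # % of students eligible for free or reduced-price lunch
    "black_hispanic_pct": [15.0, 91.0, 50.0, 60.0],  # % of students who are Black or Hispanic
})

# Exclude schools that did not report either measure, mirroring the exclusions described above.
schools = schools.dropna(subset=["frl_pct", "black_hispanic_pct"])

def school_group(row):
    """Assign each school to one of the three analysis groups."""
    if row.frl_pct <= 25 and row.black_hispanic_pct <= 25:
        return "L/PBH (low-poverty, 0-25% Black or Hispanic)"
    if row.frl_pct >= 75 and row.black_hispanic_pct >= 75:
        return "H/PBH (high-poverty, 75-100% Black or Hispanic)"
    return "All other schools"

schools["group"] = schools.apply(school_group, axis=1)
print(schools[["school_id", "group"]])
```

The narrower 90 to 100 percent grouping used for appendix II could be added the same way with a fourth condition.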
There are several sources of non-sampling error associated with the CCD, which is self-reported and collected from the universe of schools and school districts. Non-sampling errors can be introduced in many ways. For example, they can result from errors in data processing or data entry, or can occur when respondents misinterpret survey questions, do not follow survey instructions, or do not follow the item definitions correctly. Further, while CCD’s coverage of traditional public schools and school districts is very complete, its coverage of publicly funded education outside of traditional school districts varies across states and jurisdictions. Some states do not report schools that are administered by state organizations other than state educational agencies. Examples include charter schools authorized by an organization that is not a school district, schools sponsored by health and human services agencies within a state, and juvenile justice facilities. In recent years, Education has increased efforts to identify schools that may be underreported by state educational agencies. Further, because this information is self-reported, there is also the potential for misreporting of information. Education attempts to minimize these errors in several ways, including through training, extensive quality reviews, and data editing. To examine additional characteristics about schools the students attended, we analyzed data from the public use file of Education’s Civil Rights Data Collection (referred to as the Civil Rights Data in this report) for school year 2011-12, which was the most recent year of data available. The Civil Rights Data—collected on a biennial basis—consists of data on the nation’s public schools, including student characteristics and enrollment; educational and course offerings; disciplinary actions; and school environment, such as incidents of bullying. To assess the reliability of these data, we reviewed technical documentation and interviewed relevant officials from Education. Based on these efforts, we determined that these data were sufficiently reliable for our purposes. The Civil Rights Data is part of Education’s Office for Civil Rights’ overall strategy for administering and enforcing the federal civil rights statutes for which it is responsible. While this information was collected from a sample of schools in previous years, it was collected from the full universe of all U.S. K-12 public schools in 2011-12. By analyzing these data across the school categories in table 1, we were able to present data on the differences in the availability of courses offered among schools with different levels of poverty among students and Black or Hispanic students. For example, we were able to analyze differences among schools with respect to school offerings, such as advanced math and science courses—as well as advanced academic programs, Advanced Placement courses, and Gifted and Talented Education programs. We were also able to examine differences in the level of disciplinary incidents—such as more than one out-of-school suspension, arrests related to school activity, and bullying—and the percentage of English Learners and students with disabilities. We also examined the numbers of full-time teachers with more than one year of experience, licensed and certified teachers, and teacher absences. The data also allowed us to analyze differences by type of school—traditional neighborhood schools, charter schools, and magnet schools (see app. II). 
For this analysis we matched schools in the Civil Rights Data for school year 2011-12 (the most recent year for which Civil Rights Data are available) to schools in the CCD for school year 2011-12 and excluded schools for which there was not a match. Further, from the Civil Rights Data, we also excluded schools that did not report (1) free or reduced-price school lunch, which we used as a proxy to categorize the poverty level of the school, or (2) the number of Black or Hispanic students, which we used to categorize the level of Black or Hispanic students in the school. As a result, our analysis of the Civil Rights Data for school year 2011-12 included 95,635 schools and excluded 5,675 schools. In the report, we present different years for the Civil Rights Data and CCD and, as a result, the numbers and percentages of schools and students derived from these two sets of data will not match. As with the CCD, the school year 2011-12 Civil Rights Data were collected from the full universe of schools and districts, with response rates of 99.2 and 98.4 percent, respectively. These data are also subject to non-sampling error, and because these data are self-reported, there is also the potential for misreporting of information. For these data, Education put in place quality control and editing procedures to reduce errors. Further, for the school year 2011-12 Civil Rights Data, respondents were to answer each question on the Civil Rights Data survey prior to certification. Null or missing data prevented a school district from completing its Civil Rights Data submission to Education’s Office for Civil Rights. Therefore, in cases where a school district may not have complete data, some schools or districts may have reported a zero value in place of a null value. It is not possible to determine all possible situations where this may have occurred. As such, the item response rates may be positively biased. Further, within this dataset there are outliers that likely represented misreported values. These outliers had the potential to heavily influence state or national totals. To ensure the integrity of the state and national totals, the Office for Civil Rights suppressed outliers identified by data quality rules. These rules flagged inconsistent and implausible values for suppression. To mitigate the potential for suppressions that distort aggregate totals, suppressed data were replaced with imputed data where possible. For example, where the number of students disciplined exceeded the number in membership, the number was set to the number of students in membership. We selected a school district in each of three states (one each in the Northeast, South, and West) and interviewed officials to describe why and how selected school districts have taken actions to address the diversity of their schools. We selected states to include different regions of the country, and we selected school districts within these states that had taken action to increase diversity. Within these districts, the schools we visited were selected to include a mix of grade level (elementary, middle, and high school), school type (traditional public and magnet), and location (urban and suburban). To select districts, we relied on recommendations from subject matter specialists and a review of available information. For example, we reviewed the school districts that had participated in Education’s Voluntary Public School Choice grant program. 
Information from the districts we contacted is illustrative and not meant to reflect the situation in other districts with similar efforts. In the districts we selected, we interviewed different stakeholders, such as school district superintendents, school board members, state education officials, community leaders, and school officials. We conducted these interviews in person (in two locations) or by phone. During our interviews, we collected information about issues related to racial and socioeconomic diversity in public schools, including types of actions implemented to increase diversity, reasons for implementing the actions, challenges faced in implementing the actions, and comments about federal actions in this area. In addition to interviewing officials, in some locations we toured schools to learn more about how and why various actions were implemented at those schools. We provided the relevant sections of a draft of this report to the appropriate officials from each district for their review. We did not assess the extent to which the selected districts have achieved any diversity goals or complied with any applicable court orders. Because we selected the school districts judgmentally, we cannot generalize the findings about the actions officials took to address diversity to all school districts and schools nationwide. To assess the actions taken by the Departments of Education and Justice to address issues related to racial discrimination in schools, we interviewed agency officials and reviewed relevant federal laws, regulations, and agency documents. With both agencies, we interviewed officials about each agency’s responsibilities with respect to federal civil rights laws and regulations, as well as the actions the agencies took to enforce them. With Education officials, we discussed the agency’s investigations, guidance, and data collection, and we reviewed agency procedures, selected documents from recently concluded investigations, and guidance documents. With Justice officials, we discussed the agency’s litigation activities, investigations, and guidance and reviewed agency procedures and guidance documents, as well as certain documents from selected court cases, including selected desegregation orders. We assessed agencies’ actions using guidance on internal controls in the federal government related to oversight and monitoring as well as agency guidance and strategic plans. We also interviewed representatives of civil rights organizations and academic experts to discuss issues related to racial and socioeconomic diversity in public schools, including actions taken by school districts to increase diversity and federal actions to enforce federal civil rights laws with respect to race in public schools. We identified studies about the effect that the racial and socioeconomic composition of K-12 public schools has on various student outcomes, using specific terms to search several bibliographic databases. From these searches, we used studies published between 2004 and 2014 on U.S. students, as these studies are more reflective of current students and their outcomes. We looked at studies concerned primarily with the effect of socioeconomic composition of schools, or racial composition of schools, or both factors together. 
The studies selected were based on nationally representative samples of students that allowed us to examine the socioeconomic or racial composition of the schools, and the studies analyzed the effect these school-level characteristics had on student academic outcomes, such as test scores, grade point average, high school graduation or dropout rates, and/or college enrollment using research methodologies that controlled for potentially confounding factors. We excluded from consideration some studies based on factors including outdated data, limited scope, or research methods that failed to control for multiple factors when assessing outcomes. Although the findings of the studies we identified are not representative of the findings of all studies looking at whether a school’s racial or socioeconomic composition affects student outcomes, they provide examples of published and peer-reviewed research that used strong research designs to assess these effects. See appendix III for the list of studies we reviewed. We conducted this performance audit from November 2014 through April 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This appendix contains the results of our additional analyses to examine trends and disparities among schools with different levels of poverty among students and Black or Hispanic students. For these analyses, we used school- and student-level data from both the Common Core of Data (CCD) for selected school years from 2000-01 to 2013-14 and the Civil Rights Data Collection (Civil Rights Data) for school year 2011-12. This information is presented as a supplement to the findings presented in this report; however, we noted in the report when the information in these tables helped inform our findings. These tables present the results of our additional analyses that used school- and student-level data from the Common Core of Data for students attending K-12 public schools. The tables include data on schools by different poverty levels and different concentrations of Black or Hispanic students, and data on students who attend these schools. For both schools and students, we present additional data by school type (traditional, charter, and magnet schools) and by region of country. These tables present the results of our additional analyses that used school- and student-level data from the Civil Rights Data Collection. The tables provide data on academic courses and programs offered, including advanced math and science courses and Advanced Placement and Gifted and Talented Education Programs. We also present school- and student-level data on retention and disciplinary incidents, including out-of-school suspensions, expulsions, reports of bullying, and school-related arrests, as well as data on special populations, such as English Learners and students with disabilities. We also present data on teaching-related variables, including teacher experience, certification and licensing, and absences. We present these data by different levels of poverty, Black or Hispanic students, and school type (traditional, charter, and magnet schools). 
The following studies examined the effects of poverty and/or racial composition of schools on student outcomes:
Aikens, Nikki L. and Oscar Barbarin. “Socioeconomic Differences in Reading Trajectories: The Contribution of Family, Neighborhood, and School Contexts.” Journal of Educational Psychology, vol. 100, no. 2 (2008): 235-251.
Berends, Mark and Roberto Peñaloza. “Increasing Racial Isolation and Test Score Gaps in Mathematics: A 30-Year Perspective.” Teachers College Record, vol. 112, no. 4 (2010): 978-1007.
Borman, Geoffrey D. and Maritza Dowling. “Schools and Inequality: A Multilevel Analysis of Coleman’s Equality of Educational Opportunity Data.” Teachers College Record, vol. 112, no. 5 (2010): 1201-1246.
Condron, Dennis J. “Social Class, School and Non-School Environments, and Black/White Inequalities in Children’s Learning.” American Sociological Review, vol. 74, no. 5 (2009): 683-708.
Crosnoe, Robert. “Low-Income Students and the Socioeconomic Composition of Public High Schools.” American Sociological Review, vol. 74, no. 5 (2009): 709-730.
Goldsmith, Pat Rubio. “Schools or Neighborhoods or Both? Race and Ethnic Segregation and Educational Attainment.” Social Forces, vol. 87, no. 4 (2009): 1913-1941.
Harris, Douglas N. “Lost Learning, Forgotten Promises: A National Analysis of School Racial Segregation, Student Achievement, and ‘Controlled Choice’ Plans.” Center for American Progress. Washington, D.C.; 2006.
Logan, John R., Elisabeta Minca, and Sinem Adar. “The Geography of Inequality: Why Separate Means Unequal in American Public Schools.” Sociology of Education, vol. 85, no. 3 (2012): 287-301.
McCall, Martha S., Carl Hauser, John Cronin, G. Gage Kingsbury, and Ronald Houser. “Achievement Gaps: An Examination of Differences in Student Achievement and Growth.” Northwest Evaluation Association. Portland, OR; 2006.
Mickelson, Roslyn Arlin, Martha Cecilia Bottia, and Richard Lambert. “Effects of School Racial Composition on K–12 Mathematics Outcomes: A Metaregression Analysis.” Review of Educational Research, vol. 83, no. 1 (2013): 121-158.
Owens, Ann. “Neighborhoods and Schools as Competing and Reinforcing Contexts for Educational Attainment.” Sociology of Education, vol. 83, no. 4 (2010): 287-311.
Palardy, Gregory J. “High School Socioeconomic Segregation and Student Attainment.” American Educational Research Journal, vol. 50, no. 4 (2013): 714-754.
Palardy, Gregory J. “Differential School Effects Among Low, Middle, and High Social Class Composition Schools: A Multiple Group, Multilevel Latent Growth Curve Analysis.” School Effectiveness and School Improvement: An International Journal of Research, Policy and Practice, vol. 19, no. 1 (2008): 21-49.
Riegle-Crumb, Catherine and Eric Grodsky. “Racial-Ethnic Differences at the Intersection of Math Course-Taking and Achievement.” Sociology of Education, vol. 83, no. 3 (2010): 248-270.
Rumberger, Russell W. “Parsing the Data on Student Achievement in High-Poverty Schools.” North Carolina Law Review, vol. 85 (2007): 1293-1314.
Rumberger, Russell W. and Gregory J. Palardy. “Does Segregation Still Matter? The Impact of Student Composition on Academic Achievement in High School.” Teachers College Record, vol. 107, no. 9 (2005): 1999-2045.
Ryabov, Igor. “Adolescent Academic Outcomes in School Context: Network Effects Reexamined.” Journal of Adolescence, vol. 34 (2011): 915-927.
Ryabov, Igor and Jennifer Van Hook. “School Segregation and Academic Achievement Among Hispanic Children.” Social Science Research, vol. 36 (2007): 767-788. 
van Ewijk, Reyn and Peter Sleegers. “Peer Ethnicity and Achievement: A Meta-Analysis Into the Compositional Effect.” Tier Working Paper Series (2010).
In addition to the contact named above, Sherri Doughty (Assistant Director), Linda Siegel (Analyst-in-Charge), Rachel Beers, Lisa Brown, Grace Cho, Sarah Cornetto, Camille Henley, John Mingus, Anna Maria Ortiz, and David Reed made key contributions to this report. Also contributing to this report were Deborah Bland, Holly Dye, Farrah Graham, Kirsten Lauber, Mimi Nguyen, and Cady Panetta.
The Departments of Education and Justice have taken a range of actions to identify and address racial discrimination against students. Education has investigated schools, analyzed its data by student groups protected under federal civil rights laws, and found discrimination and disparities in some cases. GAO analyzed Education’s data among types of schools (charters, magnets, and traditional public schools) by percentage of racial minorities and a proxy for poverty level and found multiple disparities, including in access to academic courses. Education does not routinely analyze its data in this way. Conducting this type of analysis would enhance Education’s ability to target technical assistance and identify other disparities by school types and groups. The Department of Justice (Justice) has also investigated discrimination claims, and it monitors and enforces 178 open federal desegregation court cases to which it is a party, many of which originated 30 or 40 years ago to remedy segregation. However, GAO found that Justice does not track key summary case information, such as the last action taken in a case. As a result, some cases may unintentionally remain dormant for long periods. For example, in one case the court noted there had been a lack of activity and that if Justice had “been keeping an eye” on relevant information, such as test score disparities, the issue could have been addressed in a more timely way. Federal internal control standards state that agencies should use information to help identify specific actions that need to be taken to allow for effective monitoring. Without tracking key information about open cases, Justice’s ability to effectively monitor such cases is hampered. GAO recommends that Education more routinely analyze its civil rights data to identify disparities among types and groups of schools and that Justice systematically track key information on open federal school desegregation cases to which it is a party to better inform its monitoring. In response, both agencies are considering actions in line with GAO’s recommendations. 
Before commencing operations, new airlines must obtain two separate authorizations from DOT—“economic” authority from the Office of the Secretary of Transportation (OST) and “safety” authority from FAA. Within OST, the Air Carrier Fitness Division is responsible for assessing whether applicants have the managerial competence, disposition to comply with regulations, and financial resources necessary to operate a new airline. FAA’s Flight Standards Service uses a multiphase process to determine whether an applicant’s manuals, aircraft, facilities, and personnel meet federal safety standards. Once airlines begin actual operations, FAA is responsible for monitoring the operations, primarily by conducting safety inspections. FAA conducts two types of inspections: routine and special. Routine inspections are generally spot checks performed by individual inspectors on a periodic basis. FAA’s special inspections complement routine inspections by providing more comprehensive evaluations of airlines’ operations. To analyze the safety performance of new airlines, we used three sets of data—data on accidents from the National Transportation Safety Board (NTSB), FAA’s data on incidents, and FAA’s data on enforcement actions initiated against airlines. We discussed the selection of these data sets with officials from FAA, DOT, and NTSB, who agreed that they were appropriate for our analysis. However, it should be noted that all three have limitations. Specifically, some of NTSB’s files on accidents did not definitively specify the airline that was operating the aircraft; FAA’s data on incidents may be subject to some underreporting; and the data on the number of enforcement actions initiated, while complete, may reflect differences among FAA field offices in the emphasis they placed on initiating enforcement actions. We reviewed and made refinements to these data, where appropriate, to address these concerns. NTSB, the official source of information on airline accidents, defines accidents as events in which individuals are killed or suffer serious injury, or the aircraft is substantially damaged. By NTSB’s definition, accidents can range from fatal crashes in which the aircraft is destroyed and all crew and passengers aboard are killed, to events in which only one person suffers a broken bone and the aircraft is not damaged, to still others in which there is substantial aircraft damage, but no fatalities or serious injuries. FAA generally defines incidents as occurrences other than accidents associated with the operation of an aircraft that affect or could affect the safety of operations. Among the commonly recorded types of incidents are engine malfunctions, system failures, landing gear collapses, and losses of directional control. Other types of incidents include collisions with various structures, such as runway lights, fences, wires, or poles; fires; and in-flight turbulence resulting in damage to the aircraft or less serious personal injury. FAA may initiate enforcement actions in response to apparent or alleged violations of the Federal Aviation Act or federal aviation regulations. The actions that can be taken under FAA’s compliance and enforcement program include administrative actions, such as warning notices and letters of correction, and legal enforcement remedies, such as revoking, suspending, or amending an airline’s operating authority. 
Examples of violations that can lead to enforcement actions range from an airline’s failure to perform proper aircraft maintenance to a pilot’s failure to maintain the altitude directed by air traffic control. Another example is a pilot who possesses a valid pilot certificate but inadvertently pilots an aircraft without the certificate in his or her possession. The available data show that both new and established airlines experience accidents infrequently. Nevertheless, from 1990 through 1994, new airlines had an average accident rate of 0.60 per 100,000 departures compared with the established airlines’ average rate of 0.36 per 100,000 departures. However, NTSB’s definition of accident can range from fatal crashes in which the aircraft is destroyed and all crew and passengers aboard are killed, to events where there is substantial damage to the aircraft but no fatalities or serious injuries, and to still others where only one person may suffer a broken bone, but the aircraft suffers no substantial damage. As a result, the use and interpretation of accident data require caution. Of the 201 accidents that occurred in 1990 through 1994, 45 involved fatalities, of which 5 involved new airlines. Both new and established airlines experienced far more incidents and enforcement actions than accidents from 1990 through 1994, thus providing much more information for analyzing safety trends. During 1990 through 1994, there were a total of 2,879 incidents and 3,982 enforcement actions. Both new large and commuter airlines experienced higher average rates of incidents and enforcement actions, as a group, than established large and commuter airlines. In particular, for new airlines, the rates of incidents and enforcement actions peaked during their early years of operations. However, there was some clustering of these events among the new airlines. More than half of the new airlines had no incidents during the period of our analysis, and 42 percent of the new airlines had no enforcement actions initiated against them. Thus, while these rates provide useful information for analysis, it would not be appropriate to conclude that new airlines provide unsafe service. (Detailed information on new airlines’ and established airlines’ departures, accidents, incidents, FAA-initiated enforcement actions, and their respective rates is contained in app. II.) In 1990 through 1994, NTSB reported 201 accidents by commercial airlines that provide scheduled service. Most airlines—both new and established—had no accidents during 1990-94. For example, among the 29 new large airlines in our review, 3 had accidents; the other 26 had no accidents during the 5-year period. Similarly, of the 50 new commuters, 7 had accidents. Of the 203 established airlines, 69 had accidents. The remaining 134 had no accidents. Of the 201 accidents, 45 involved fatalities. These accidents ranged from 1 accident in which 132 people on board the aircraft were killed to 12 separate accidents in which 1 person was killed; in 8 of those 12 accidents, the person killed was not on board the aircraft. In one case, for example, an airline employee was killed after walking into a rotating propeller blade. The remaining 156 accidents involved serious injury and/or substantial aircraft damage. New airlines experienced 13 of the 201 total accidents and 5 of the 45 fatal accidents. The new airlines’ accidents resulted in a rate of 0.60 per 100,000 departures, while the established airlines’ accidents resulted in a rate of 0.36 per 100,000 departures. 
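The accident, incident, and enforcement figures in this report are normalized as events per 100,000 departures so that groups with very different levels of flight activity can be compared. The brief sketch below illustrates the arithmetic; the departure total is not reported in this section and is simply the value implied by the new airlines' 13 accidents and reported rate of 0.60 per 100,000 departures (13 / 0.60 x 100,000, or roughly 2.2 million departures).

```python
def rate_per_100k(events: int, departures: int) -> float:
    """Events per 100,000 departures, the normalization used for the rates in this report."""
    return events / departures * 100_000

# New airlines, 1990-94: 13 accidents; roughly 2.2 million departures is implied
# by the reported rate of 0.60 per 100,000 departures (13 / 0.60 * 100,000).
implied_departures = 2_166_667
print(f"{rate_per_100k(13, implied_departures):.2f} accidents per 100,000 departures")
```

Because new airlines fly far fewer departures than established airlines, a single additional accident can move these rates noticeably, a caution DOT also raised in commenting on a draft of this report.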
More specifically, new large airlines had an accident rate of 1.35 per 100,000 departures, while large established airlines had a rate of 0.30 per 100,000 departures. In contrast, new commuters had an accident rate of 0.48 per 100,000 departures, while established commuters had a rate of 0.46 per 100,000 departures. Aware that the current definition of accident does not distinguish among the varying degrees of accidents’ severity, NTSB and FAA have undertaken an effort to develop new subclassifications of aviation accidents. One option that has been explored is to define accidents according to the significance of damage, recording and grouping data accordingly. However, according to officials in FAA’s Office of Accident Investigations and NTSB’s Office of Research and Engineering, the joint effort has not yet been completed, and no completion date has been set. During 1990 through 1994, new large and commuter airlines had incident rates that were, on average, 52 percent higher than those of established airlines (overall, a rate of 8.1 incidents per 100,000 departures compared with a rate of 5.4 incidents per 100,000 departures for established airlines). For new large airlines, the incident rate was over twice that of large established airlines (a rate of 11.5 incidents per 100,000 departures compared with a rate of 5.1 incidents per 100,000 departures for large established airlines). The average incident rate for new commuters during 1990 through 1994 was also higher than that of established commuters, although the difference was not as great. (See table 1.) As with our analysis of accidents, these rates represent the combined experiences of the airlines in each of the different categories over the entire 5-year period. Of the new airlines, 38 (48.1 percent) experienced at least one incident sometime during 1990-94, while the other 41 experienced no incidents. Of the new airlines that experienced incidents, the incident rates ranged from 2.8 to 666.7 incidents per 100,000 departures. Of the 203 established airlines, 162 (79.8 percent) had one or more incidents during the same period, while the other 41 experienced no incidents. At certain times during their first 5 years of operations, new airlines that experienced incidents had rates that greatly exceeded the average rates for established airlines. For new large airlines, these times were during their second, fourth, and fifth years of operations. For example, the rate for new large airlines more than tripled between their first and second years of operations. Of the 18 new large airlines that had their second year of operations sometime during 1990 through 1994, 7 (38.9 percent) had incidents. The other 11 second-year new airlines had no incidents. In commenting on a draft of this report, DOT noted that one adverse event for a new airline with a limited number of departures can significantly affect accident, incident, or enforcement rates. We agree that because new airlines have fewer departures, the rates at which they experience problems must be viewed with caution. Nevertheless, our review included the entire data sets of departures, accidents, incidents, and enforcement actions for new and established airlines for a 5-year period, and thus these data are important pieces of information in FAA’s efforts to oversee the airline industry. 
The purpose of our analysis of these data was to assess analytically whether there were differences between new and established airlines overall that might warrant FAA’s increased oversight of new airlines. Figure 1 shows the change in the incident rates for new large airlines over their first 5 years of operations and compares them with the average rate for large established airlines. For new commuters, the average incident rate during their first year of operations was about the same as for established commuters. But by their third year of operations, new commuters had an incident rate that was twice as great as the rate for established commuters (11.6 versus 5.8 per 100,000 departures) and more than twice the rate they experienced in their first year of operations. (Of the 23 new commuters that operated for at least 3 years during 1990 through 1994, 10 experienced incidents in their third year.) During the new commuters’ fourth and fifth years of operations, the incident rate declined. Figure 2 shows the change in the incident rates for new commuters over their first 5 years of operations and compares them with the average rate for established commuters. Our analysis did not specifically identify the reasons why new airlines experienced higher levels of incidents during certain periods of their first 5 years of operations. We discussed the results of our analysis with FAA officials. They said that they were unaware of these trends—they had not done an analysis similar to ours for new airlines—nor were they aware of any other studies addressing this issue. Nevertheless, they theorized that new airlines may encounter more incidents because their fleets expanded faster than their organizational ability to absorb the growth, train their staff, and maintain their fleets. Other factors can also be a cause for concern and may warrant closer scrutiny. These include precarious financial conditions (which some new airlines encountered) or the level at which major functions, such as maintenance, are contracted out, which can lead to a loss of control or oversight—a concern that FAA recently acknowledged in its review of ValuJet Airlines. FAA’s compliance and enforcement program is designed to promote compliance with both statutory and regulatory requirements. Under this program, the agency may initiate enforcement actions in response to apparent or alleged violations of the laws governing federal aviation or of federal aviation regulations. Enforcement actions may be initiated on the basis of FAA’s inspection results or on information provided by other sources such as air traffic controllers or employees in the airline industry. Enforcement actions include administrative actions, such as warning notices and letters of correction; legal enforcement remedies, such as amending, suspending, or revoking airlines’ operating certificates; and punitive actions, such as imposing civil (financial) penalties and temporarily suspending certificates. For example, FAA may pursue civil penalties against an airline that operates aircraft that are not airworthy, repairs equipment using unacceptable methods, or violates regulations on the transportation of hazardous materials. When an immediate safety need exists, FAA inspectors can also issue an emergency revocation order—the most severe action that can be taken against a domestic airline—to prevent an airline from conducting flight operations. 
In 1990 through 1994, FAA initiated enforcement actions against new airlines, as a group, at twice the rate it initiated such actions against established airlines. FAA initiated 14.8 enforcement actions per 100,000 departures against new airlines and 7.3 per 100,000 departures against established airlines. In addition, just as both new large and commuter airlines experienced elevated rates of incidents during their early years of operations, they also experienced higher rates of enforcement actions during those years. FAA initiated considerably higher rates of enforcement actions against new large airlines, as a group, than it did against large established airlines. In 1990 through 1994, new large airlines had about 8 times the rate of enforcement actions of their established counterparts—an average of 64.3 actions initiated against them per 100,000 departures compared with 7.8 actions per 100,000 departures for large established airlines. Figure 3 shows the change in the rate of enforcement actions initiated against new large airlines during their first 5 years of operations. Most of the enforcement actions that FAA initiated against new large airlines were concentrated among relatively few airlines. Of the 190 total enforcement actions initiated against new large airlines during the period, FAA initiated 141 (74.2 percent) against 10 airlines and 49 against 11 other airlines. FAA initiated no enforcement actions against eight airlines that were new airlines during the period. FAA initiated relatively fewer enforcement actions against both new and established commuters, and the difference in the average number of enforcement actions initiated was smaller. In 1990 through 1994, FAA initiated an average of 7.0 enforcement actions against new commuters per 100,000 departures compared with 6.2 against established commuters per 100,000 departures. As with incident rates, new commuters tended to experience rising rates of enforcement actions until after their third year of operations. Figure 4 shows the incidence of FAA-initiated enforcement actions during the new commuters’ first 5 years of operations. FAA initiated an average of 10.7 enforcement actions per 100,000 departures against new commuters during their third year of operations—more than 70 percent higher than the average rate for established commuters. During the new commuters’ fourth and fifth years of operations, the rate of enforcement actions initiated declined markedly. Similar to the pattern observed for new large airlines, most of the enforcement actions were initiated against relatively few new commuters. Of the 130 total actions initiated against new commuters in 1990 through 1994, FAA initiated 106 (81.5 percent) against 10 airlines; the other 24 enforcement actions were divided among another 15 airlines. FAA initiated no enforcement actions against the remaining 25 new commuters. FAA’s data reveal that most enforcement cases initiated against scheduled airlines resulted in administrative actions, rather than other actions. Of the total 2,286 enforcement cases that had been initiated in 1993 for which data on final action are available, 1,538 (67.3 percent) concluded with an administrative action, 84 (3.7 percent) concluded with a civil penalty, 79 (3.5 percent) concluded with a certificate suspension, and 18 (0.8 percent) concluded with a revocation. In another 567 cases (24.8 percent), FAA took no action. 
FAA is responsible for promoting safety in air transportation, and the airlines are responsible for operating their aircraft safely in compliance with the requirements in title 14 of the Code of Federal Regulations that cover the aircraft and its systems, maintenance, and personnel and training. FAA oversees the airlines’ programs by monitoring the safety of all operating airlines and conducting periodic inspections. FAA’s national inspection guidelines in effect during the period of our review, which set priorities and established a minimum standard for the number and type of inspections, did not call for new airlines to be inspected any differently from established airlines. However, the guidelines grant latitude to FAA’s regional and district offices to identify the areas that they determine to be important in the interest of safety. This discretionary surveillance allows inspectors and their supervisors at FAA’s field offices to develop work programs that can be tailored to their particular environments and be balanced against such competing priorities as accident investigations. Over the years, FAA has targeted specific airlines and areas of commercial airline operations for increased surveillance on the basis of a variety of factors. For example, FAA has used an increased frequency of noncompliance with federal aviation regulations, an increased frequency of incidents by individual airlines, the deteriorating financial conditions of individual airlines, and non-airline-specific attributes (such as aging aircraft) to target its surveillance activities. However, FAA has not compared the performance characteristics of new airlines, as a group, with those of established airlines to determine whether new airlines should be targeted for increased surveillance. In general, we found that in 1990 through 1994, FAA’s field offices inspected new large airlines, as a group, more frequently than large established airlines. On average, for large airlines, FAA conducted one inspection for every 20.3 new airline departures and one for every 65.5 established airline departures. For new commuters FAA conducted, on average, one inspection for every 113.1 departures and for established commuters, one inspection for every 107.8 departures. However, there was considerable variation in the relative frequency with which FAA inspected individual airlines. At the extremes, the data showed that a few airlines received more than one inspection for every departure, while a few others made hundreds of flights between inspections. FAA’s inspection effort also varied widely among the new airlines that had the greatest average annual number of departures. Of the 10 new large airlines with the highest average number of departures, inspection rates ranged from once every 8 departures to once every 92 departures. Similarly, of the 10 new commuters with the highest average number of departures, the data indicate that FAA’s inspection rates ranged from once every 38 departures to once every 340 departures. We also found no clear pattern between inspection rates and the airlines’ rate of incidents or FAA-initiated enforcement actions. For example, among the 17 new large airlines responsible for 85 percent of the incidents and enforcement actions in 1990 through 1994, the frequency of inspections varied from one inspection for every two departures to one inspection for every 66 departures. 
Similarly, among the 13 new commuters that accounted for approximately 80 percent of the incidents and enforcement actions initiated against that group, the frequency of inspections varied from one inspection for every 21 departures to one inspection for every 188 departures. On the other hand, some airlines that had had no accidents, incidents, or enforcement actions initiated against them were inspected by FAA once every several hundred departures. One other, however, was inspected every two departures. More specifically, of the seven new large airlines that were inspected less frequently than the average for all new large airlines, one—ValuJet—had an incident rate that was 40 percent higher than average, but it was inspected only about one-third as frequently as all new large airlines through calendar year 1994. For new commuters, 8 of the 17 that were inspected less frequently than average had incident or enforcement action rates that were higher than average. FAA officials told us that the low inspection rates for new airlines with relatively high problem rates may be due to the fact that some new airlines, particularly new commuters, may serve airports that are not closely located to the field office where their inspectors are assigned. The recent disclosures about safety problems at ValuJet Airlines and FAA’s oversight of ValuJet illustrate the need for FAA to closely monitor new airlines. ValuJet began operations in October 1993 with 2 aircraft and expanded its fleet to 47 aircraft about 2 years later. In October 1994, FAA conducted a detailed inspection of ValuJet and found 35 violations of FAA’s air safety regulations. The two most serious violations—flying an aircraft with broken forward and aft cargo door locks and flying an aircraft on more than 140 flights with a leaking hydraulic line—resulted in a fine of $8,500. In September 1995, FAA conducted another detailed inspection of ValuJet and found 58 violations, including the absence of a continued analysis and surveillance program, conflicts between the airline’s general maintenance manual and the federal aviation regulations, and the conduct of maintenance with unapproved procedures. In February 1996, FAA initiated a “special emphasis program” for ValuJet. The May 6, 1996, preliminary report on this effort identified 130 findings on several aspects of ValuJet’s operations, including flight operations training, crew qualifications, manuals and procedures, and maintenance. After the May 11, 1996, crash, which killed all 110 passengers and crew, FAA expanded its special emphasis review into an intensive 30-day review of ValuJet and its fleet. That review led to a June 1996 consent order, under which ValuJet agreed to suspend its operations. FAA’s announcement of ValuJet’s agreement cited multiple quality assurance shortcomings, systemwide maintenance deficiencies, the inability to establish the airworthiness of aircraft, and a lack of engineering capability. On August 29, 1996, FAA returned ValuJet’s operating certificate, permitting it to resume operations if the airline was found managerially and financially fit by DOT. On the same day, DOT issued an order tentatively finding ValuJet fit, willing, and able to provide domestic scheduled air service. Under agreement with FAA, upon returning to service, ValuJet would operate a substantially smaller fleet, starting with up to nine aircraft and adding up to six more within the following days. ValuJet resumed limited flight operations on September 30, 1996. 
FAA’s 90 Day Safety Review recognized that FAA’s surveillance system does not differentiate between established airlines and newly certificated airlines and stated that additional surveillance during the first several years of operations is warranted. The safety review recommended a heightened level of surveillance of newly certificated airlines for at least the first 5 years of the companies’ operations. To do its job effectively, and because its resources are limited, FAA must target its inspectors to the areas of greatest risk. To do so, FAA needs to have performance-based criteria to gauge various aspects of aviation safety, and the criteria or measures of safety must be underpinned by reliable data. Even if FAA inspectors are targeted to the areas of greatest risk, they must be adequately trained to effectively carry out their responsibilities. For nearly a decade, we have reported on long-standing shortcomings in these two areas. Although FAA has agreed with most of our recommendations and taken actions to implement them, until all of these problems are effectively resolved, the effectiveness of FAA’s inspection program will be limited. In 1987, we reported on the need for FAA to develop criteria for targeting safety inspections to airlines with characteristics that may indicate safety problems. In 1991, FAA began designing a resource-targeting system called the Safety Performance Analysis System (SPAS), but it is not yet fully operational. As of August 1996, SPAS was in place and undergoing operational tests at 47 field offices. FAA expects the next version of SPAS to be available to inspectors in late 1997 and the system to be fully operational in 1999. When fully operational, SPAS could rely on over 25 databases within FAA, other government agencies, and the aviation industry, including, potentially, the Improved Accident/Incident Data Subsystem and the Enforcement Information Subsystem. The current SPAS version uses four: the Program Tracking and Reporting Subsystem (in which inspection results are entered), the Vital Information Subsystem (which contains key data on such items as airlines, pilot and mechanic schools, and repair stations), the Service Difficulty Reporting Subsystem (which contains data on instances of abnormal and potentially unsafe mechanical conditions aboard aircraft), and a non-FAA database of information and analyses on financial credit risks. Building on inspection results and other data, SPAS is intended to assist FAA in applying its limited inspection resources to those entities and areas that pose the greatest risk to aviation safety. The system is also expected to highlight particular types of aircraft or particular airlines for increased surveillance (inspection) or oversight if they are experiencing problems at rates that exceed the averages for that group. Specifically, if problems in a particular inspection category are found at rates exceeding 50 percent of the average experience for that group, SPAS will trigger “advisory” notifications to the inspector that he or she should look into the situation. If problems are found at rates exceeding 100 percent of the average, the system will trigger a notice of “concern” (alert) to the principal inspectors, who are to respond with a written plan of action. In a 1995 report, however, we concluded that SPAS will not be effective if the quality of its source data is not improved.
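To make the notification logic concrete, the short sketch below expresses the advisory and concern thresholds in a small program. It is an illustration only, not FAA's SPAS software; in particular, reading "exceeding 50 percent of the average" as a rate above one-half of the peer-group average (and "exceeding 100 percent" as a rate above the average itself) is our interpretation of the description, and the example rates are hypothetical.

```python
# Hypothetical sketch, not FAA's implementation: flag an inspection category
# when an operator's problem rate crosses the SPAS notification thresholds
# described above. The mapping of the 50-percent and 100-percent figures to
# 0.5x and 1.0x the peer-group average is an assumption for this example.

def spas_notification(problem_rate: float, group_average: float) -> str:
    """Return the notification level for one inspection category."""
    if group_average <= 0:
        return "no basis for comparison"
    if problem_rate > 1.0 * group_average:
        # Notice of "concern" (alert): principal inspectors respond with a written action plan.
        return "concern"
    if problem_rate > 0.5 * group_average:
        # "Advisory": the inspector should look into the situation.
        return "advisory"
    return "none"

# Example: findings per 100 inspections for one airline versus its peer group.
print(spas_notification(problem_rate=2.1, group_average=1.8))  # -> "concern"
print(spas_notification(problem_rate=1.2, group_average=1.8))  # -> "advisory"
```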
In that 1995 report, we noted specifically that SPAS may rely on data from numerous databases that contain incomplete, inconsistent, and inaccurate data. To address these concerns, we recommended that FAA develop and implement a comprehensive strategy to improve the quality of those data. FAA agreed to this recommendation and stated that such a strategy would be developed by the end of 1995. In August 1996, FAA reported that this strategy would not be completed until October 1996. The strategy is to provide clear and measurable data quality objectives, accurate assessments of the quality of the current data in each database (including an analysis and possible redirection of FAA’s existing data quality improvement initiatives), milestones for attaining the stated quality objectives, and estimates of the resources required. An FAA official said that implementation would begin immediately afterward. Until FAA implements its data quality improvement strategy, problems with data quality may limit SPAS’ usefulness and prevent it from realizing its full potential to target resources to higher-risk activities. Although FAA management officials told us that inspectors generally have the experience and basic training necessary to accomplish their mission, we and others have reported for several years that FAA’s aviation safety inspectors are not receiving needed training. For example, in 1989 we reported that (1) pilot flight checks were being made by operations inspectors who had not received recurrent flight training and whose qualifications to make pilot flight checks had expired and (2) airworthiness inspectors received only about 50 percent of the training that was planned for them. Recognizing that some of its employees had received expensive training they did not need to do their jobs while others did not receive essential training, in 1992 FAA developed a centralized process to determine, set priorities for, and fund its technical training needs. This centralized process is intended to ensure that funds are first allocated for the training that is essential to fulfilling FAA’s mission. In accordance with this process, each FAA entity has developed a needs assessment manual tailored to the entity’s activities and training needs. In addition, FAA is providing training through such alternative methods as computer-based instruction, interactive classes televised via satellite, and computer-based training materials obtained from manufacturers. Although these initiatives can help improve the efficiency of FAA’s training programs, we testified in 1996 that the adequacy of inspector training continues to be a concern. During the course of our work on new airlines, we interviewed 37 FAA inspectors who were involved with the initial certification or continuing surveillance of new airlines. Although the results of these interviews are not projectable to the universe of inspectors, they do indicate a continuing concern among FAA safety inspectors about the adequacy of the training they receive. Sixteen of the inspectors said they had gaps in training that affected their effectiveness in doing their jobs. For example, one inspector requested training on Airbus aircraft when the airline he inspected began using that aircraft, but he did not receive the training until 2 years after the airline went out of business. In another case, a maintenance inspector told us he was responsible for inspecting several commuter airlines but had never attended maintenance training school for the types of aircraft he inspects.
Instead, FAA sent the maintenance inspector to training on Boeing 727s and composite materials, which were not related to the aircraft he was responsible for. Finally, several inspectors told us that despite their responsibility to approve global positioning system receivers, a navigational system increasingly being used in aircraft, they have had no formal training on this equipment. We also reported that in fiscal years 1993 through 1996, decreases in FAA’s overall budget have significantly reduced the funding available for technical training. FAA’s overall training budget decreased from $147 million to $85 million (42 percent) during this period. FAA’s reduced funding for technical training has occurred at a time when it had received congressional direction to hire over 230 additional safety inspectors in fiscal year 1996. Because new staff must be provided with initial training to prepare them to perform their duties effectively, the cost of this training, combined with overall training budget reductions, may further constrain FAA’s ability to provide training to its existing inspectors in the future. The Federal Managers’ Financial Integrity Act of 1982 (FMFIA) requires that executive agencies prepare an annual statement on the adequacy of internal controls based on assessments conducted in accordance with Office of Management and Budget (OMB) Circular A-123. FMFIA and the circular require that the agency’s annual statement on internal controls include a description of any material weaknesses (and related plans for corrective actions) identified as part of the internal control assessment process. Under OMB Circular A-123, agency managers are requested to use Inspector General reviews and GAO reports to help them identify and correct deficiencies in management controls. In addition, the circular states that the agency should pay particular attention to the views of the agency’s Inspector General in identifying and assessing the relative importance of deficiencies in management controls. According to OMB’s guidelines, management control weaknesses are material when the weaknesses meet one or more of the following criteria, among others: Weaknesses are significant enough to be reported to the President or the Congress; resources are not being used consistently with the agency’s mission; reliable and timely information is not being obtained, maintained, reported, and used for decision-making; and a failure to report a known deficiency may reflect adversely on the agency. In December 1993, the DOT Inspector General stated that FAA’s oversight and inspection program represented both a material weakness and a high-risk area reportable to the President and the Congress. The Inspector General cited several GAO and Inspector General reports as the basis for this conclusion and identified the need for FAA to (1) target inspection resources to areas posing the greatest risks, (2) accomplish planned/targeted inspections, (3) perform quality inspections, (4) record deficiencies and ensure that they are corrected, (5) resolve inspection staff imbalances and retrain or refocus inspectors where necessary, and (6) enforce certification requirements relating to aviation parts. 
The Secretary of Transportation’s 1993 FMFIA report to the President stated that the DOT Inspector General and GAO had identified deficiencies in some program areas administered by the FAA (e.g., Aviation Inspection and Airport Security) and that, taken as a whole, the deficiencies that were identified may constitute “material weaknesses” in a “high-risk” area. The report, however, did not identify FAA’s oversight and inspection program as a “high-risk” area. The Secretary stated that FAA was actively reviewing all of the issues within the context of FMFIA reporting requirements and that these issues would be reflected in future FMFIA reports, as appropriate. In December 1994, the Inspector General again identified FAA’s aviation oversight and inspection activities as a “high-risk” area and recommended that the Secretary of Transportation include FAA’s safety oversight and inspection activities as a “high-risk” area in DOT’s 1994 FMFIA Report to the President and the Congress. The FAA Administrator, however, disagreed with the Inspector General’s position, stating that there was an insufficient basis to conclude that the FAA’s safety and inspection program was a “material weakness” as defined by FMFIA. The Secretary of Transportation’s 1994 FMFIA report to the President stated that he continued to be concerned about ensuring that the aviation oversight and inspection program meets the highest standards, but did not designate this program as “high risk,” concluding that no new areas of “material weakness” were reported that year. For 1995, the DOT Inspector General did not specifically cite FAA’s aviation oversight and inspection activities in her December 1995 letter to the Secretary on FMFIA issues. However, she stated that past and ongoing work indicated that significant management weaknesses existed in many of the Department’s safety programs and recommended that safety oversight be reflected in the Secretary’s FMFIA report as a “problem area.” An official of the DOT Inspector General’s office told us that a “problem area” is not as serious a designation as a “high risk” or “material weakness.” The Secretary’s 1995 FMFIA report, however, did not discuss safety oversight. Beginning August 1, 1996, OMB no longer requires agencies to designate “high-risk” areas in their FMFIA reports. Agencies will still be required, however, to report any “material weaknesses” in their internal controls. However, as discussed in the following section, DOT and FAA have recently undertaken a number of initiatives that, taken together, have the potential to address these concerns. In a May 14, 1996, memorandum for the President, the Secretary of Transportation outlined several initiatives to strengthen FAA’s inspection operations. These initiatives included accelerating the hiring of additional aviation safety inspectors; examining FAA’s computer systems and developing a comprehensive strategy for upgrading FAA’s computer tracking and data systems; and conducting a comprehensive review of FAA’s inspection operations, including reviewing inspector training and work assignments. Between May 28 and June 7, 1996, FAA’s Flight Standards Service conducted a self-assessment that looked at various issues, including the effectiveness of inspector training. A number of recommendations to improve training resulted from the process, including defining requirements for the currency and recurrent training needs of safety inspectors. FAA plans to implement all of these recommendations within 2 years. 
On June 18, 1996, the FAA Administrator initiated a safety review on “lessons learned” from FAA’s oversight experience with ValuJet—the FAA 90 Day Safety Review. On September 16, 1996, FAA’s Deputy Administrator issued a report that addressed the certification of new airlines, resource targeting to address safety risks, newly certificated airlines’ operations and growth, contracting out, inspector training and guidance material, and inspector resources. The report made over 30 recommendations and included proposed implementation strategies. For example, the report noted that FAA could improve its resource targeting to address safety risks and that the only way to significantly improve aviation safety is through changing FAA’s methods of assessing risk and using new analysis techniques on more complete data. The report said that using systems such as SPAS will allow FAA to more effectively use inspection, surveillance, and enforcement resources where they are most likely to improve safety. While recognizing that the inspector workforce is central to FAA’s ability to ensure compliance and maintain a high level of safety, the report also acknowledged that the inspector workforce has historically been understaffed. It also recognized that FAA’s training programs do not always provide the frequency of training or meet the specific needs identified by employees, managers, and industry. It included recommendations to ensure that FAA’s resources and training are adequate to meet safety requirements. As noted in the 90 Day Safety Review, an effective inspection program requires a stable source of financing. The recently signed Federal Aviation Reauthorization Act of 1996 creates a National Civil Aviation Review Commission that will analyze financial needs and safety trends and make specific recommendations for change. Recent experience with the lack of authority to collect aviation excise taxes underscores the need to develop a long-term financing solution for FAA that will ensure adequate funding of aviation inspectors and required training. Similarly, the report concluded that no guidance directs FAA to maintain heightened surveillance during a new airline’s formative years, when it may be most unstable. The report recommended heightened levels of surveillance of newly certificated airlines during the first 5 years of the companies’ operations and periodic reviews of new airlines that assess management, financial, and operational capabilities. The Administrator endorsed the recommendations and called for the development of a strategy and timetable to implement the recommended actions. Once implemented, he wrote, these actions will enhance FAA’s ability to target resources more strategically and to respond more rapidly to changes in the aviation industry. Following the crash of TWA Flight 800 on July 25, 1996, the President established a commission headed by the Vice President (commonly known as the Gore Commission) to review aviation security and safety. The Commission is scheduled to issue its final report early next year. In our opinion, these initiatives, taken together, have the potential to address several of FAA’s long-standing problems. DOT regularly publishes certain consumer-related information on individual airlines, such as information on on-time performance and lost luggage. Consumer advocates, academics, and some Members of Congress have expressed an interest in having FAA publish airline-specific safety data.
The aviation system safety indicators that FAA already publishes, such as accident rates, incident rates, near mid-air collisions, and pilot deviations, are aggregated rather than published on an individual airline basis. The FAA Administrator and other FAA officials have raised concerns about the potential negative effect of publishing airline-specific safety data. For example, under the Flight Operations Quality Assurance (FOQA) initiative, FAA is encouraging the airlines to monitor and analyze flight data recorder information to determine aviation system weaknesses before they become incidents or accidents. FAA officials have said that if such data were published, airlines might be hesitant to share the information with FAA, which would impair FAA’s efforts to improve the system’s overall safety. We recognize FAA’s desire to obtain such information from the airlines on a voluntary basis. However, FAA’s mission to promote air safety argues that it should have access to whatever data can help it improve air safety. If the airlines do not choose to share such data voluntarily, FAA could pursue the appropriate regulatory or legislative remedies to gain such access. Before publishing airline-specific safety data, FAA would need to address a number of issues. First, FAA would need to develop a consensus among the affected and interested parties (airlines, passengers, aviation safety system analysts, etc.) on the most appropriate criteria for measuring airline safety performance. Second, FAA would need to gather and analyze the data and develop a monitoring system to verify the completeness and accuracy of the data. Third, FAA would need to take appropriate measures, including enforcement actions, where necessary to ensure that airlines comply with data requirements. While such an endeavor is a formidable task, the benefits could be substantial. It would not only allow FAA to publicly disclose airline-specific safety data to help the public in making transportation decisions but, just as importantly, better equip FAA to identify and preemptively act on emerging aviation safety trends. FAA’s current effort to develop a strategy to improve the quality of SPAS databases is an important step that can help solidify the foundation on which an airline-specific safety analysis and a public reporting system would potentially be based. New airlines face a formidable challenge in beginning and sustaining operations, managing growth, and developing their management and maintenance infrastructures. The recent disclosures about ValuJet and FAA’s oversight of this airline reinforce this point. Our analysis of new airlines over a 5-year period shows that, on average, they experienced higher rates of incidents and FAA-initiated enforcement actions than established airlines, particularly during their early years of operations. While such information can be useful for better targeting FAA’s inspection resources, it does not mean that new airlines are unsafe. FAA’s policies that were in effect during the period of our review did not call for new airlines to be monitored any differently from established airlines, and actual inspection rates varied widely among new airlines—some airlines with high incident and enforcement action rates were being inspected less frequently than airlines with few or no such problems.
We believe that the basic challenges of starting a new airline, and the overall results of our analysis, argue for closely monitoring the performance of new airlines during their first several years of operations and conducting increased or comprehensive inspections of those airlines with elevated rates of safety-related concerns. The recent disclosures about ValuJet reinforce this argument. FAA’s 90 Day Safety Review recommended heightening the level of surveillance of newly certificated airlines for at least the first 5 years of the airlines’ operations. This recommendation is consistent with our observations and, if properly implemented, would largely address our concerns in this area. On a broader scale, serious problems that hamper the effectiveness of FAA’s aviation safety inspection program have remained unresolved for nearly a decade. While FAA has taken steps to better target its inspection resources and has evaluated safety inspector training and work assignments, concerns in those areas have persisted for years and a number of unresolved issues remain. DOT and FAA have recently undertaken a number of initiatives to address these and other problems, with the FAA 90 Day Safety Review making over 30 recommendations for improvement. We believe that these initiatives have the potential to significantly improve FAA’s inspection program, but only if they are effectively implemented. We believe that, to be effective, DOT’s and FAA’s implementation strategy must be underpinned by (1) clear goals and objectives with measurable performance elements, (2) a monitoring and evaluation element to measure progress, and (3) a reporting mechanism to keep the Secretary of Transportation and the Congress informed about progress and problems. Resource constraints resulting from budgetary reductions in such areas as safety inspector training provide a continuing challenge for FAA. Given the current tight budget situation, it is important that FAA evaluate the use of its existing resources and manage them as efficiently as possible. Such evaluations could also provide the basis for reprogramming funds to meet critical safety-related needs or for justifying additional resources should they be found necessary. Public concern about the safety of the nation’s aviation system has escalated over the last several months as a result of the ValuJet and TWA crashes, and several groups have expressed interest in having FAA publish airline-specific safety data. While FAA would have to address a number of issues—including gaining consensus on safety parameters, obtaining and verifying data, and ensuring that airlines comply with requirements—before publishing such data, we believe that the time has come for FAA to begin the process that can lead to publishing such data. One step in this process would involve NTSB’s and FAA’s ongoing effort to refine the definition of accident, but the completion date for this effort has not been established.
We recommend that the Secretary of Transportation instruct the Administrator of FAA to (1) closely monitor the performance of new airlines, particularly during the early years of operations, and conduct increased and/or comprehensive inspections of those new airlines that experience elevated rates of safety-related problems; (2) evaluate the impact of recent budget reductions on FAA’s critical safety-related functions, including—but not limited to—inspector training, and report the results to the Congress through the appropriations process; and (3) study the feasibility of developing measurable criteria for what constitutes aviation safety, including those airline-specific safety-related performance measures that could be published for use by the traveling public. Furthermore, to ensure the timely and effective implementation of the recommendations included in FAA’s 90 Day Safety Review, we recommend that the Secretary of Transportation require the Administrator of FAA to establish (1) clear goals and objectives addressing the safety review’s identified problem areas; (2) measurable performance criteria to assess how the goals and objectives are being met; and (3) a monitoring, evaluation, and reporting system so that FAA’s implementation of the recommendations contained in FAA’s 90 Day Safety Review can be reported to the Secretary and the Congress on a regular basis. We also recommend that the Chairman of NTSB and the Administrator of FAA jointly establish a date for completing the ongoing reevaluation of the definition of accident. DOT and FAA generally agreed with our findings, conclusions, and recommendations. However, they raised concerns about the statistical foundation of the report. Specifically, they noted that the number of accidents, incidents, and departures for new airlines is small in comparison to the number for established airlines and produces substantial negative bias in comparing accident and incident rates for new and established airlines. We agree that accident and incident rates based on relatively few departures are susceptible to large fluctuations and may not accurately predict longer-term performance, and we have noted that prominently in the report. However, our calculations included 100 percent of these events and not just a sample and therefore show the actual rates as of the period of our analysis. The analysis that is of concern to DOT and FAA provides additional evidence on how FAA might want to target inspection resources and, therefore, does not impact any of our conclusions or recommendations. We have made a number of changes to the report on the basis of the events that have occurred since the draft was provided to DOT for comment on September 6, 1996, as well as DOT’s written comments. Most notable among these events was FAA’s publication of its 90 Day Safety Review on September 16, 1996. That review confirmed the validity of the major issues discussed in our report—the need to closely monitor the performance of new airlines during their early years of operations, as well as the need to better target FAA’s resources, improve data quality, and ensure that FAA’s resources and training programs are adequate to meet safety requirements. Our September 6, 1996, draft of this report contained a proposed recommendation calling for FAA’s aviation safety inspection program to be designated an area of material weakness in DOT’s Federal Managers’ Financial Integrity Act report. 
In light of the fact that FAA’s 90 Day Safety Review recognized the long-standing concerns that gave rise to our proposed recommendation and made over 30 recommendations that, if properly implemented, have the potential to correct these problems, we have deleted that recommendation from our final report. However, we believe there is a need for continued vigilance on the part of DOT, FAA, and the Congress to ensure that the recommendations in the 90 Day Safety Review are effectively implemented in a timely manner. Consequently, we have added a recommendation that calls for FAA to report periodically to the Secretary of Transportation and the Congress on its progress in implementing the recommendations from the 90 Day Safety Review. A copy of DOT’s comments is included as appendix III. We conducted our review from August 1995 through September 1996 in accordance with generally accepted government audit standards. A detailed discussion of our objectives, scope, and methodology appears in appendix I. We will send copies of this report to the Secretary of Transportation; the Administrator, FAA; the Chairman, NTSB; the Director, Office of Management and Budget; and other interested parties. We will also make copies available on request. This report was prepared under the direction of John H. Anderson, Jr., Director, Transportation Issues, who can be reached at (202) 512-2834 if you have any questions. Other major contributors to this report are listed in appendix IV. The former Chairman, Subcommittee on Aviation, House Committee on Public Works and Transportation, asked us to examine, as the second segment of work addressing issues concerning the Federal Aviation Administration’s (FAA) oversight of new airlines, the agency’s efforts to ensure that new airlines meet safety standards. As agreed with the Subcommittee’s staff, we also addressed this report to the current Chairman and Ranking Democratic Member of the Subcommittee on Aviation. To address this issue, we focused on three questions: Did new airlines perform differently from established airlines during the 5-year period between January 1, 1990, and December 31, 1994, with regard to accidents, incidents, and enforcement actions? At what frequency does FAA inspect new airlines compared with established airlines? And what impediments hinder the effectiveness of FAA’s overall safety inspection program? Before we were able to answer the first question, we had to determine which airlines were “new airlines.” We defined a new airline as one that provided scheduled domestic air service for 5 or fewer years at any time from the beginning of 1990 to the end of 1994. For example, an airline that began service in 1994 would be considered a new airline, since its first year of operations was within the study period. Similarly, an airline that began operating in 1986 would also be considered a new airline in our analysis of 1990 data, because that airline’s fifth year of operations occurred in 1990. However, beginning with the analysis of 1991 data, that same airline’s operations would then be included in the comparison group of established airlines—those that had provided scheduled domestic service for more than 5 years during the 1990-94 period. Thus, we considered any airline that began scheduled operations between January 1986 and December 1994 to be a new airline during relevant portions of the 1990-94 period. This definition of new airline differs from that normally applied in other aviation safety research. 
Those studies have tended to define new airlines as being airlines that began interstate operations following the Airline Deregulation Act of 1978. However, airlines such as Southwest Airlines that began interstate operations immediately after that act have now been operating for nearly two decades. We believe that a review that focuses more on airlines with considerably fewer years of experience would provide more insight into the safety performance of new airlines. We discussed our definition of new airlines with FAA, the Department of Transportation (DOT), and the National Transportation Safety Board (NTSB), none of whom raised any objection or concern. To determine which specific airlines should be designated as new airlines and which should be designated as established airlines, we reviewed records from DOT’s Airline Fitness Division within the Office of the Secretary of Transportation (OST), the Bureau of Transportation Statistics (BTS), and FAA to develop a list of airlines subdivided into large and commuter new airlines and large and commuter established airlines. First, we obtained historical information from OST’s files on airlines that it had found “fit” and to which DOT had issued operating authority. We initially included as new airlines those that OST had recertificated following a substantial change in their operations. Second, we used industry financial and operating records from BTS to help determine the year in which airlines began scheduled operations, and divided the airline list into “new” and “established” by the year indicated in the records. Because none of the automated databases we analyzed recorded any specific distinction between scheduled commuter airlines and on-demand air taxi services (i.e., chartered airlines), we relied on FAA officials to provide this distinction. As a result, we eliminated on-demand airlines from our list. However, some commuters that operated as both commuters and on-demand airlines at different points during our 5-year period are included among our group of established commuters. BTS and FAA verified our airline lists. At FAA’s suggestion, we made two additional adjustments to our list of new airlines. First, we reclassified as established airlines those airlines that DOT had newly authorized to provide scheduled service at some point between 1986 and 1994 but which had earlier operated as on-demand air taxis. Second, we reclassified as established airlines those that DOT had recertificated following a substantial change of operations. FAA suggested that those airlines should be considered established because they had essentially maintained an unbroken chain of operations from a previous status. To determine which airlines to categorize as “large” or as “commuters,” we analyzed information from OST, BTS, and FAA. OST and BTS use definitions of large and commuter aircraft that differ from FAA’s. According to DOT’s regulations, a large certificated airline is one that holds a certificate issued under section 401 of the Federal Aviation Act of 1958 and that operates aircraft designed to have a maximum passenger seating capacity of more than 60 seats or a maximum payload capacity of more than 18,000 pounds, or that conducts international operations. Small certificated airlines and commuter airlines (“commuters”) generally operate only aircraft with 60 seats or fewer or a payload capacity of 18,000 pounds or less. 
FAA’s definitions follow the distinction made by parts 121 and 135 of the Federal Aviation Regulations, which basically define an aircraft as “large” or as a “commuter” depending upon whether or not it seats more than 30 passengers. While we relied on FAA to indicate exactly which airlines it considered to be commuters, our distinction between large and commuter airlines was also consistent with DOT’s definitions. This occurred because FAA’s list of commuter airlines included not just those that operated “part 135” aircraft exclusively, but also airlines that operated “part 121” aircraft (“split certificate” airlines). According to information from FAA, those airlines’ part 121 aircraft were turboprop aircraft, such as the De Havilland Dash-8, that may seat between 36 and 56 passengers. FAA’s list of large airlines included only airlines that exclusively operated large aircraft. Most of those large airlines operated jet aircraft in 1994. As a result, we analyzed data for all new airlines and established airlines that provided scheduled domestic service during the 1990 through 1994 period and that reported data to DOT. We excluded air taxis and other airlines providing nonscheduled service. Our universe of 265 airlines comprised 29 new large airlines, 60 large established airlines, 50 new commuters, and 123 established commuters. During the review period, 20 new airlines reached their sixth year of operations and were then analyzed as established airlines. To answer the first question regarding the airlines’ experiences with accidents, incidents, and enforcement actions, we analyzed three different sets of data. First, to analyze data on all airline accidents that occurred from 1990 through 1994, we reviewed information from NTSB, the official source of information on airline accidents. Some of NTSB’s accident data included ambiguous information about the airline operator’s identity. To resolve the uncertainty, we reviewed more extensive information on each accident in question. Still, of the 201 accidents that occurred from 1990 through 1994, for 8 accidents we were unable to determine with certainty which company operated the aircraft involved. For example, NTSB’s files include information on a commuter airline accident in January 1991 involving US Air Express. However, more than one airline company conducts business as US Air Express, and because NTSB did not record the airline’s designator code, which FAA assigns to individual operators, we were unable to assign the accident to any specific company. Second, we analyzed FAA’s data on aviation incidents that occurred during the period. FAA records data on various airline incidents, which the agency defines as an occurrence other than an accident associated with the operation of an aircraft, that affects or could affect the safety of operations. To improve the data’s reliability and the relevance of the analysis, we excluded certain categories of incidents clearly outside the control of the airline, such as birds’ being ingested into jet engines and lightning strikes. We made these changes at the suggestion, and with the assistance, of FAA. Third, we analyzed data on enforcement investigations initiated from FAA’s Enforcement Information System (EIS). EIS includes information on all enforcement actions taken by FAA, whether administrative or legal. FAA’s Assistant Chief Counsel processes reports requiring legal enforcement action or referral for possible criminal investigation and prosecution. 
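To summarize the classification rule applied above, the following minimal sketch shows how an airline's status for a given analysis year follows from the year it began scheduled domestic service. It is our own illustration for clarity, not a tool used in the analysis, and it assumes that an airline's first calendar year of scheduled service counts as year one.

```python
# Minimal sketch of the report's classification rule (our construction, not a
# GAO or FAA tool): an airline is "new" in a given analysis year if that year
# falls within its first five years of scheduled domestic service, and
# "established" otherwise.

def classify(start_year: int, analysis_year: int) -> str:
    """Classify an airline for one analysis year of the 1990-94 study period."""
    years_of_operation = analysis_year - start_year + 1  # first year counts as year 1
    if years_of_operation < 1:
        return "not yet operating"
    return "new" if years_of_operation <= 5 else "established"

# An airline that began scheduled service in 1986 is in its fifth year in 1990
# (new) and its sixth year in 1991 (established), as described above.
for year in range(1990, 1995):
    print(year, classify(1986, year))
```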
Because such actions may take years to conclude (for example, FAA closed its last enforcement actions against Eastern Air Lines in August 1995, although Eastern ceased operations in January 1992), we used the actions initiated to measure enforcement activity. We did not assess the reliability of the incident or enforcement data. However, we discussed the issue with FAA officials, who told us that while there may be omissions in these data, they were the best available for the purposes of our review. For example, the officials told us that although FAA’s incident data may be subject to some underreporting, those data were preferable to NTSB’s airline safety incident data, because NTSB exercises great discretion in deciding which events to investigate. Similarly, the data on the number of enforcement actions initiated, while complete, may be underreported because of differences in how FAA field offices implement the agency’s enforcement program. That is, confronted with similar sets of factual circumstances, some field offices may recommend that FAA initiate an enforcement action while others would not. To provide the basis for comparing the number of accidents, incidents, and enforcement actions across airlines, we divided all such data points by a base of 100,000 (domestic) departures, a common comparative measure of aviation safety. According to FAA and NTSB, since most accidents occur during arrival or departure, the number of departures is considered to be the best normalizing variable. We obtained the departure data from BTS, which received those data directly from individual airlines. However, we did not independently verify the data sent by the airlines or review BTS’ procedures for handling those data. Also, in our calculations of the various rates for each group of airlines, we included data on accidents, incidents, and enforcement actions only if an airline also reported departure data for that year. For example, Eastern Air Lines stopped reporting departure data to BTS in 1991; however, FAA’s data indicate that it initiated an enforcement action against Eastern in 1992. Our calculations of the enforcement rate for large established airlines did not include that 1992 action against Eastern. We analyzed accidents, incidents, and enforcement actions of new airlines by years of operating experience. Such an analysis compares the records of airlines with the same number of years of operations, regardless of the calendar year in which the observation occurred. For example, we compared airlines within their second year of operations, whether that year was 1990 or 1993, against those with fewer and more years of experience. This method focuses on examining the airline’s records over time, as the airlines gain operating experience. To answer the second question on the relative level of surveillance applied to new airlines and established airlines during the 1990-94 period, we compared the number of inspections of new airlines to the number of inspections of established airlines, normalized for departures in each year. We obtained those data from FAA’s Program Tracking and Reporting Subsystem (PTRS). We have long reported on problems with the data in FAA’s safety inspection management system. Because of continuing concerns about the reliability of the data on inspection results, we used the PTRS data only to determine the number of inspections done, and not their outcomes. 
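The two normalizations described above are simple to state: safety events per 100,000 departures and the number of departures per inspection. The sketch below illustrates both calculations with placeholder figures; it is our own illustration, and the numbers are not data from this review.

```python
# Illustrative sketch of the normalization used in the analysis (our own
# construction; the figures below are made-up placeholders, not report data).

def rate_per_100k_departures(event_count: int, departures: int) -> float:
    """Accidents, incidents, or enforcement actions per 100,000 departures."""
    return 100_000 * event_count / departures

def departures_per_inspection(departures: int, inspections: int) -> float:
    """Average departures between inspections ("one inspection for every N departures")."""
    return departures / inspections

# Example with placeholder values: 12 incidents and 2,400 inspections over
# 150,000 departures.
print(round(rate_per_100k_departures(12, 150_000), 1))      # 8.0 per 100,000 departures
print(round(departures_per_inspection(150_000, 2_400), 1))  # one inspection every 62.5 departures
```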
We also reviewed the national program guidelines for airline surveillance and spoke to responsible FAA officials to determine whether FAA distinguished between new and established airlines in its surveillance and inspection efforts. To answer the third question, we reviewed GAO products, both reports and testimonies published over the last decade, reporting on many aspects of FAA’s aviation safety inspection program. To assess FAA’s progress in addressing the problems that were discussed in those reports and testimonies, we reviewed documentation that monitors the extent of FAA’s implementation of GAO’s recommendations. After completing our analysis, we discussed our preliminary findings with officials of FAA and NTSB. We also provided a draft of our report to DOT for its review and comment. The agency’s letter in response is reproduced in appendix III. We performed our work primarily at FAA headquarters in Washington, D.C., from August 1995 through September 1996 in accordance with generally accepted government auditing standards. Aviation Safety: Targeting and Training of FAA’s Safety Inspector Workforce (GAO/T-RCED-96-26, Apr. 30, 1996). FAA Budget: Issues Related to the Fiscal Year 1996 Request (GAO/T-RCED/AIMD-95-131, Mar. 13, 1995). Aviation Safety: Data Problems Threaten FAA Strides on Safety Analysis System (GAO/AIMD-95-27, Feb. 8, 1995). Aviation Safety: FAA Can Be More Proactive in Promoting Aviation Safety (GAO/T-RCED-95-81, Jan. 12, 1995). Aviation Safety: FAA’s Efforts to Improve Oversight of Foreign Carriers (GAO/T-RCED-95-33, Oct. 4, 1994). FAA Technical Training (GAO/RCED-94-296R, Sept. 26, 1994). Aviation Safety: Unresolved Issues Involving U.S.-Registered Aircraft (GAO/RCED-93-135, June 18, 1993). Aircraft Maintenance: FAA Needs to Follow Through on Plans to Ensure the Safety of Aging Aircraft (GAO/RCED-93-91, Feb. 26, 1993). Aviation Safety: Increased Oversight of Foreign Carriers Needed (GAO/RCED-93-42, Nov. 20, 1992). Aviation Safety: Additional Actions Needed for Three Safety Programs (GAO/T-RCED-92-90, Aug. 4, 1992). Aviation Safety: Commuter Airline Safety Would Be Enhanced With Better FAA Oversight (GAO/T-RCED-92-40, Mar. 17, 1992). Aviation Safety: Better Oversight Would Reduce the Risk of Air Taxi Accidents (GAO/T-RCED-92-27, Feb. 25, 1992). Aviation Safety: FAA Needs to More Aggressively Manage Its Inspection Program (GAO/T-RCED-92-25, Feb. 6, 1992). Aviation Safety: Air Taxis—The Most Accident-Prone Airlines—Need Better Oversight (GAO/RCED-92-60, Jan. 21, 1992). Aviation Safety: Problems Persist in FAA’s Inspection Program (GAO/RCED-92-14, Nov. 20, 1991). Aviation Safety: Emergency Revocation Orders of Air Carrier Certificates (GAO/RCED-92-10, Oct. 17, 1991). Aging Aircraft Maintenance: Additional FAA Oversight Needed (GAO/T-RCED-91-84, Sept. 17, 1991). Aircraft Maintenance: Additional FAA Oversight Needed of Aging Aircraft Repairs (GAO/RCED-91-91A and B, May 24, 1991). Aviation Safety: Limited Success Rebuilding Staff and Finalizing Aging Aircraft Plan (GAO/RCED-91-119, Apr. 15, 1991). Serious Shortcomings in FAA’s Training Program Must Be Remedied (GAO/T-RCED-90-86, June 6, 1990). Staffing, Training, and Funding Issues for FAA’s Major Work Forces (GAO/T-RCED-90-42, Mar. 14, 1990). Aging Aircraft: FAA Needs Comprehensive Plan to Coordinate Government and Industry Actions (GAO/RCED-90-75, Dec. 22, 1989). Aviation Safety: FAA’s Safety Inspection Management System Lacks Adequate Oversight (GAO/RCED-90-36, Nov. 13, 1989).
Meeting the Aging Aircraft Challenge: Status and Opportunities (GAO/T-RCED-90-2, Oct. 10, 1989) and (GAO/T-RCED-89-67, Sept. 27, 1989). Aviation Training: FAA Aviation Safety Inspectors Are Not Receiving Needed Training (GAO/RCED-89-168, Sept. 14, 1989). Aviation Safety: FAA Has Improved Its Removal Procedures for Pilot Examiners (GAO/RCED-89-199, Sept. 8, 1989). FAA Staffing: Recruitment, Hiring, and Initial Training of Safety-Related Personnel (GAO/RCED-88-189, Sept. 2, 1988). Aviation Safety: Measuring How Safely Individual Airlines Operate (GAO/RCED-88-61, Mar. 18, 1988). Aviation Safety: Needed Improvements in FAA’s Airline Inspection Program Are Under Way (GAO/RCED-87-62, May 19, 1987). Department of Transportation: Enhancing Policy and Program Effectiveness Through Improved Management (GAO/RCED-87-3, Apr. 13, 1987). Compilation and Analysis of the Federal Aviation Administration’s Inspection of a Sample of Commercial Air Carriers (GAO/RCED-85-157, Aug. 2, 1985).
GAO found that: (1) although data regarding airline accidents and FAA incident and enforcement actions require cautious interpretation, it appeared that, for the review period of 1990 through 1994, new airlines had higher rates of accidents, incidents, and FAA enforcement actions than established airlines during their early years of operations; (2) FAA officials theorized that new airlines may experience more incidents because their fleets expand faster than their ability to absorb growth, train staff, and maintain fleets; (3) FAA national inspection guidelines that were in effect during the review period did not target new airlines for increased surveillance; (4) no clear pattern in the inspection rates distinguished airlines with relatively high rates of incidents and enforcement actions from those that had few or no problems; (5) FAA aviation safety inspection program shortcomings include insufficient inspector training, inadequate aviation safety databases, and the need to improve the oversight of aging aircraft; (6) FAA actions to better target its inspection resources to areas with the greatest safety risks remain incomplete; and (7) initiatives to accelerate the hiring of safety inspectors, strengthen FAA data collection and tracking systems, review FAA inspection operations, and conduct a safety review have the potential to significantly improve the efficiency and effectiveness of the FAA safety inspection program. |
IAEA’s technical cooperation program provides nuclear technical assistance through projects that have three main components—equipment, expert services, and training activities (project- and non-project-related), including fellowships, scientific visits, and training courses—that support the upgrading or establishment, for peaceful purposes, of nuclear techniques and facilities in IAEA member states. IAEA’s technical cooperation program funds projects in 10 major program areas, including the development of member states’ commercial nuclear power and nuclear safety programs. Nuclear technical assistance projects are approved by IAEA’s Board of Governors for a 2-year programming cycle, and member states are required to submit written project proposals to IAEA 1 year before the start of the programming cycle. These proposals are then appraised for funding by IAEA staff and by the agency’s member states in terms of technical and practical feasibility, national development priorities, and long-term advantages to the recipient countries. Within IAEA, the Department of Technical Cooperation and three other technical departments—the departments of Research and Isotopes, Nuclear Safety, and Nuclear Energy—are the main channels for technology transfer activities within the technical cooperation program. While the funding for IAEA’s technical cooperation program comes primarily from member states’ voluntary contributions, the funding for activities in the other three technical departments is through IAEA’s regular budget. The United States contributes about 25 percent of IAEA’s regular budget. In 1996, the United States’ contribution to IAEA’s regular budget of $219 million was $63 million. IAEA spent about $12 million on nuclear technical assistance projects for Cuba from 1963—when Cuba started to receive nuclear technical assistance from IAEA—through 1996, for equipment, expert services, fellowships, scientific visits, and subcontracts (agreements between IAEA and a third party to provide services to its member states). IAEA has approved an additional $1.7 million in nuclear technical assistance projects for Cuba for 1997 through 1999. Over half of this additional assistance will be provided for the application of isotopes and radiation in medicine, industry, and hydrology. In addition to the approximately $12 million for nuclear technical assistance projects for Cuba, IAEA spent $2.39 million on regional and interregional training courses for Cuban nationals. These courses were not related to IAEA’s nuclear technical assistance projects. (This information was available from IAEA only for 1980 through 1996.) Cuban nationals attended IAEA training courses in radiation protection and nuclear safety, probabilistic safety assessment, safety analysis and assessment techniques for the operational safety of nuclear power plants, and quality assurance for nuclear power plants. In addition, IAEA spent about $433,000 on research contracts for Cuba. (This information was available from IAEA only for 1989 through 1996.) Under IAEA’s research contract program, the agency places contracts and cost-free agreements with research centers, laboratories, universities, and other institutions in member states to conduct research projects supporting its scientific programs. 
As shown in figure 1, about $8.7 million—or almost three-fourths—of the approximately $12 million in nuclear technical assistance projects that Cuba received from 1963 through 1996 consisted of equipment, such as computer systems and radiation-monitoring and laboratory equipment. (App. I provides information on all nuclear technical assistance projects that IAEA provided for Cuba, by program area, from 1980 through 1996. Most of this assistance was provided in the areas of general atomic energy development and in the application of isotopes and radiation in agriculture.) While the costs of administration and related support for IAEA’s technical cooperation program are funded through IAEA’s regular budget, most of the funding for IAEA’s nuclear technical assistance projects comes from voluntary contributions made by the member states to IAEA’s technical cooperation fund. Some funding is also provided to IAEA from the United Nations Development Program (UNDP). Other sources of financial support include extrabudgetary income, which is in addition to the funds donated to the technical cooperation fund and is contributed by member states for specific projects, and assistance-in-kind, which is provided by member states that donate equipment, provide expert services, or arrange fellowships on a cost-free basis. As shown in figure 2, IAEA’s technical cooperation fund was the primary source of funding for the nuclear technical assistance projects provided for Cuba, for equipment, expert services, fellowships, scientific visits, and subcontracts. In 1996, the United States voluntarily contributed $36 million to IAEA. Of this amount, the United States contributed over $16 million—about 30 percent of the total $53 million—to the technical cooperation fund. (Cuba contributed its share of $45,150—or 0.07 percent—to the fund in 1996.) From 1981 through 1993, the United States was required, under section 307(a) of the Foreign Assistance Act of 1961, as amended, to withhold a proportionate share of its voluntary contribution to the technical cooperation fund for Cuba, Libya, Iran, and the Palestine Liberation Organization because the fund provided assistance to these entities. The United States withheld about 25 percent of its voluntary contribution to the fund, which otherwise would have helped to fund projects for Cuba and the other proscribed entities. On April 30, 1994, the Foreign Assistance Act was amended, and Burma, Iraq, North Korea, and Syria were added to the list of entities for which U.S. funds for certain programs sponsored by international organizations were withheld. At the same time, IAEA and the United Nations Children’s Fund (UNICEF) were exempted from the withholding requirement. Consequently, as of 1994, the United States was no longer required to withhold a portion of its voluntary contribution to IAEA’s technical cooperation fund for any of these entities, including Cuba. However, State Department officials continued to withhold funds in 1994 and 1995. Beginning in 1996, the United States no longer withheld any of its voluntary contribution to the fund for these entities, including Cuba. Because IAEA’s technical cooperation fund provides nuclear technical assistance for Cuba, from 1981 through 1995, the United States withheld a total of about $2 million that otherwise would have gone for nuclear technical assistance for Cuba.
Of the total dollar value of all nuclear technical assistance projects that IAEA has provided for Cuba, about $680,000 has been approved for four nuclear technical assistance projects for Cuba’s nuclear power reactors from 1991 through 1998. As of January 1997, $313,364 of this amount had been spent for two of these projects. State Department officials told us that they did not object to these projects because the United States generally supports nuclear safety assistance for IAEA member states. Following is a summary of each of these projects. (See app. II for more details.) Since 1991, IAEA has assisted Cuba in undertaking a safety assessment of the reactors’ ability to respond to accidents and in conserving, or “mothballing,” the nuclear power reactors while construction is suspended. As of January 1997, the agency had spent almost three-fourths of the approximately $396,000 approved for the project. Of the approved amount, Spain has agreed to provide about $159,000 in extrabudgetary funds. According to IAEA’s information on the technical cooperation program for 1995 to 1996, this project is designed to develop proper safety and emergency systems and to preserve the plant’s emergency work and infrastructure in order to facilitate the resumption of the nuclear power plant’s activities. Seven reports were prepared by IAEA experts under this project that discuss the power plant’s ability to cope with a nuclear accident. Our requests to review or to be provided with copies of these reports were denied by IAEA because information obtained by the agency under a technical cooperation project is regarded as belonging to the country receiving the project and cannot be divulged by IAEA without the formal consent of the country’s government. At the time of our review, the government of Cuba had not given IAEA permission to release these reports. Since 1995, IAEA has assisted Cuba in designing and implementing a training program for personnel involved in the operational safety and maintenance of all nuclear installations, including the reactors, in Cuba. IAEA has spent about $31,000 of the approximately $74,000 approved for the project. Furthermore, according to IAEA’s information on the technical cooperation program for 1995 to 1996, this project will develop and implement an adequate training program that will improve operational safety at all nuclear installations in Cuba and will promote a safety culture. For the 1997 to 1998 technical cooperation program, IAEA has approved two new projects to assist in licensing the reactors and establishing a quality assurance program for them. Funding of about $210,000 has been approved for these two projects. According to IAEA’s information on the technical cooperation program for 1997 to 1998, the objective of the licensing project is to strengthen the ability of Cuba’s nuclear regulatory body to carry out the process of licensing the reactors. According to IAEA’s information, the quality assurance project will assist the nuclear power plant in developing an effective program that will improve safety and lower construction costs. In our September 1992 report and in our August 1995 testimony on the nuclear power reactors in Cuba, we reported that the United States preferred that the reactors not be completed and discouraged other countries from providing assistance, except for safety purposes, to Cuba’s nuclear power program.
In a statement made at the August 1995 hearing, the State Department’s Director of the Office of Nuclear Energy Affairs agreed that the United States supported efforts by IAEA to improve safety and the quality of construction at the facility but stated that the administration strongly believed that sales or assistance to the Cuban nuclear program should not be provided until Cuba had undertaken a legally binding nonproliferation commitment. Cuba is not a party to the 1970 Treaty on Non-Proliferation of Nuclear Weapons, but as a member of IAEA, it is entitled to receive nuclear technical assistance from the agency. State Department officials responsible for IAEA’s technical cooperation program and U.S. Mission officials at the United Nations System Organizations in Vienna, Austria, told us that they did not object to IAEA’s providing nuclear safety assistance to Cuba’s reactors because the United States generally supports nuclear safety assistance for IAEA member states that will promote the establishment of a safety culture and quality assurance programs. These U.S. officials also said that the United States has little control over other IAEA member states that choose to provide extrabudgetary funds for any of the agency’s nuclear technical assistance projects, including those in Cuba. State Department and Arms Control and Disarmament Agency officials told us that the United States will not provide extrabudgetary funds for IAEA’s nuclear technical assistance projects with Cuba or, generally, for projects with other IAEA member states that are not parties to the Non-Proliferation Treaty; will not host Cuban nationals at training courses held by IAEA in the United States; and will not select Cuban nationals for training as IAEA fellows in the United States. However, according to the State Department, U.S. experts are allowed to work on IAEA’s nuclear technical assistance projects in the areas of nuclear safety and physical protection for Cuba. We found that one U.S. expert had visited Cuba three times to help with an IAEA nuclear technical assistance project designed to eradicate agricultural pests. We provided copies of a draft of this report to the Department of State for its review and comment. The Department obtained and consolidated additional comments from the Arms Control and Disarmament Agency; the Department of Energy; the Nuclear Regulatory Commission; and the U.S. Mission to the United Nations System Organizations and IAEA in Vienna, Austria. On March 5, 1997, we met with an official in the State Department’s Bureau of International Organization Affairs to discuss the consolidated comments. In general, reviewing officials agreed with the facts and analysis presented. The officials provided additional clarifying information, and we revised the text as appropriate. An IAEA official in the Department of Technical Cooperation noted that, in assessing the safety and planning for the conservation of Cuba’s nuclear power reactors while their construction is suspended, IAEA’s role in the area of nuclear power is to assist governments in taking actions that are consistent with the highest standards and best practices involving the design, performance, and safety of nuclear facilities.
We discussed the United States’ participation in IAEA’s technical cooperation program with, and gathered data from, officials of the Departments of State and Energy; the Arms Control and Disarmament Agency; the Nuclear Regulatory Commission; Argonne National Laboratory; the National Academy of Sciences; and the National Research Council in Washington, D.C., as well as the U.S. Mission to the United Nations System Organizations and IAEA in Vienna, Austria. We gathered data from IAEA on its nuclear technical assistance for Cuba for the period from 1958, when the technical cooperation program began, through 1996. In some cases, funding data for the entire period from 1958 through 1996 were not available from IAEA. Cuba started to receive nuclear technical assistance from IAEA in 1963. We also met with officials in IAEA’s departments of Technical Cooperation and Nuclear Safety who are responsible for managing IAEA’s nuclear technical assistance projects for Cuba’s nuclear power reactors and with the Vice Minister, Ministry of the Russian Federation for Atomic Energy, to discuss Russia’s plans to complete the Cuban reactors. As agreed with your offices, in a forthcoming report we plan to discuss, among other things, the United States’ participation in IAEA’s technical cooperation program and information on the dollar value and type of nuclear technical assistance provided to the agency’s member states. We performed our work from November 1996 through March 1997 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretaries of State and Energy, the Chairman of the Nuclear Regulatory Commission, the Director of the Arms Control and Disarmament Agency, and other interested parties. We will also make copies available to others on request. Please call me at (202) 512-3600 if you or your staff have any questions. Major contributors to this report are listed in appendix III. As shown in figure I.1, almost half—about $5 million—of the $10.4 million that the International Atomic Energy Agency (IAEA) spent for nuclear technical assistance projects for Cuba from 1980 through 1996 was provided in the areas of general atomic energy development and in the application of isotopes and radiation in agriculture. Nuclear safety was the next largest program area; over 12 percent of the funds, or over $1.2 million, went for nuclear technical assistance projects in this area. Of the total dollar value of all nuclear technical assistance projects that IAEA has provided for Cuba, about $680,000 has been approved for four nuclear technical assistance projects for Cuba’s nuclear power reactors from 1991 through 1998. As of January 1997, $313,364 of this amount had been spent for two of these projects. IAEA’s four nuclear technical assistance projects for Cuba’s nuclear power reactors include (1) a safety assessment and a plan for conserving the nuclear power plant during the suspension of its construction; (2) training in the safe operation of nuclear installations, including the power plant; (3) helping Cuba’s regulatory body develop a process for licensing the power plant; and (4) developing a quality assurance program for the power plant. The first of these projects, which is ongoing, was originally approved in 1991 to develop the ability to undertake a safety assessment of Cuba’s nuclear power plant program.
In 1995, this project was expanded to, among other things, develop the ability to conduct a safety assessment of the nuclear power plant and to provide supervision and advice in the conservation, or “mothballing”, of the nuclear power plant during the suspension of construction. According to IAEA’s project summaries for the technical cooperation program for 1995 to 1996, this project is designed to develop proper safety and emergency systems and to preserve the plant’s emergency work and infrastructure in order to facilitate the resumption of the nuclear power plant’s activities. A Spanish firm that provides architectural and engineering services is assisting IAEA in providing supervision and advice for the implementation of a plan to suspend the program and is training the Cuban technical staff in conducting a probabilistic safety assessment of the plant. Activities undertaken by the Spanish firm at the plant include the conservation and protection of existing structures, equipment, and components, in order to keep them in the best possible state for future use when the project and the construction of the plant are restarted. Under this project, IAEA has provided experts on regulation, licensing, and emergency planning; equipment, such as personal computers, software, printers; and training in inspections and emergency planning. As of January 1997, IAEA had spent over $282,000 of the approved $395,837 budget, as shown in table II.1 below. Spain also provided extrabudgetary funds for this project. IAEA has spent about $113,000 of the approximately $159,000 that Spain has offered to provide for this project. According to IAEA’s project summaries for the technical cooperation program for 1995 to 1996, this ongoing project is intended to design and implement a training program for personnel involved in the operational safety and maintenance of nuclear installations, including the nuclear power plant. Even though the construction of Cuba’s nuclear power plant was suspended, according to IAEA’s project summaries, Cuba requested assistance to train personnel involved in the operational safety of nuclear installations. IAEA is assisting in designing a training program that will include the development of computerized systems for instruction, simulation, evaluation, and certification of staff. As of January 1997, IAEA had spent about $31,000 of the approved $73,926 for the project, as shown in table II.2. According to IAEA’s project summaries for the technical cooperation program for 1997 to 1998, the objective of this new project is to strengthen the ability of Cuba’s nuclear regulatory body to carry out the process of licensing the nuclear power plant. IAEA’s Board of Governors approved this project in December 1996 for a budget of $107,000 for 1997 through 1998. According to IAEA’s project summaries, Cuba’s nuclear regulatory body asked the agency to help it acquire the ability to review the safety of the nuclear power plant as a preliminary step in the licensing process. In addition, Cuba has asked IAEA to assist its nuclear regulatory body in adopting the best international practices on licensing for the latest design of the VVER 440 megawatt reactors. According to IAEA’s project summaries, the project is designed to provide Cuba’s nuclear regulatory body with the technology needed to be effective and self-sufficient and to promote the safe development of nuclear energy as a contribution to meeting Cuba’s energy needs. 
According to IAEA’s project summaries for the technical cooperation program for 1997 to 1998, the objective of this new project is to improve and revise the structure, integration, and efficiency of the quality assurance program for Cuba’s nuclear power plant and to evaluate its effectiveness and propose corrective measures. Cuba requested IAEA’s assistance to establish a quality assurance program that conforms with IAEA’s nuclear safety standards. IAEA’s Board of Governors approved this project in December 1996 for a budget of $103,150 for 1997 through 1998. The aim of this project, as discussed in IAEA’s project summaries, is to achieve adequate levels of reliability and efficiency in documentation, including the elaboration and preservation of quality assurance records; to provide practical experience for quality assurance and management personnel; and to improve the training of quality control and inspection staff, including training in nondestructive testing and other modern technologies. According to IAEA’s project summaries, this project will provide the nuclear power plant with an effective quality assurance program that will improve the plant’s safety and lower construction costs. Nuclear Safety: Uncertainties About the Implementation and Costs of the Nuclear Safety Convention (GAO/RCED-97-39, Jan. 2, 1997). Nuclear Safety: Status of U.S. Assistance to Improve the Safety of Soviet-Designed Reactors (GAO/RCED-97-5, Oct. 29, 1996). Nuclear Safety: Concerns With the Nuclear Power Reactors in Cuba (GAO/T-RCED-95-236, Aug. 1, 1995). Nuclear Safety: U.S. Assistance to Upgrade Soviet-Designed Nuclear Reactors in the Czech Republic (GAO/RCED-95-157, June 28, 1995). Nuclear Safety: International Assistance Efforts to Make Soviet-Designed Reactors Safer (GAO/RCED-94-234, Sept. 29, 1994). Nuclear Safety: Progress Toward International Agreement to Improve Reactor Safety (GAO/RCED-93-153, May 14, 1993). Nuclear Nonproliferation and Safety: Challenges Facing the International Atomic Energy Agency (GAO/NSIAD/RCED-93-284, Sept. 22, 1993). Nuclear Safety: Concerns About the Nuclear Power Reactors in Cuba (GAO/RCED-92-262, Sept. 24, 1992). Nuclear Power Safety: Chernobyl Accident Prompted Worldwide Actions but Further Efforts Needed (GAO/NSIAD-92-28, Nov. 4, 1991). Nuclear Power Safety: International Measures in Response to Chernobyl Accident (GAO/NSIAD-88-131BR, Apr. 8, 1988). Nuclear Safety: Comparison of DOE’s Hanford N-Reactor With the Chernobyl Reactor (GAO/RCED-86-213BR, Aug. 5, 1986).
| Pursuant to congressional requests, GAO provided information on the International Atomic Energy Agency's (IAEA) nuclear technical assistance to Cuba, focusing on: (1) the dollar value and type of all nuclear technical assistance projects IAEA provided for Cuba; (2) the sources of funding for all nuclear technical assistance projects IAEA provided for Cuba; and (3) IAEA's nuclear technical assistance projects for the Cuban nuclear power reactors and U.S. officials' views on this assistance. GAO noted that: (1) IAEA spent about $12 million on nuclear technical assistance projects for Cuba from 1963 through 1996; (2) about three-fourths of the assistance Cuba received through these projects consisted of equipment; (3) IAEA's assistance for Cuba was given primarily in the areas of general atomic energy development and in the application of isotopes and radiation in agriculture; (4) IAEA recently approved an additional $1.7 million for nuclear technical assistance projects for Cuba for 1997 through 1999; (5) IAEA spent about $2.8 million on training for Cuban nationals and research contracts for Cuba that were not part of specific assistance projects; (6) most of IAEA's nuclear technical assistance projects for Cuba were funded through the agency's technical cooperation fund; (7) in 1996, the United States contributed over $16 million, about 30 percent, of the $53 million in the fund; (8) from 1981 through 1993, the United States was required, under the Foreign Assistance Act of 1961, to withhold a share of its voluntary contribution to the fund because the fund provided assistance for Cuba, Libya, Iran, and the Palestine Liberation Organization; (9) in 1994, the act was amended to exempt IAEA from the withholding requirement; (10) although the United States was no longer required to withhold the portion of its voluntary contribution that would have gone to proscribed entities, State Department officials continued to withhold funds in 1994 and 1995 but did not withhold any of the United States' voluntary contribution to IAEA's technical cooperation fund for 1996; (11) from 1981 through 1995, the United States withheld a total of about $2 million that otherwise would have gone for assistance for Cuba; (12) of the total value of all nuclear technical assistance projects that IAEA has provided for Cuba, about $680,000 was approved for nuclear safety assistance for Cuba's nuclear power reactors from 1991 through 1998, of which about $313,000 has been spent; (13) IAEA is assisting Cuba in developing the ability to conduct a safety assessment of the nuclear power reactors and in preserving the reactors while construction is suspended; (14) IAEA is also implementing a training program for personnel involved in the operational safety and maintenance of all nuclear installations in Cuba; and (15) State Department and U.S. Mission officials in Vienna, Austria, told GAO that they did not object to IAEA's providing nuclear safety assistance to Cuba's reactors because the United States generally supports nuclear safety assistance for IAEA member states that will promote the establishment of a safety culture and quality assurance programs. |
The DWWCF is intended to (1) generate sufficient resources to cover the full cost of its operations and (2) operate on a break-even basis over time—that is, neither make a gain nor incur a loss. Customers primarily use appropriated funds to finance orders placed with the DWWCF. Cash generated from the sale of goods and services, rather than annual appropriations, is the DWWCF’s primary means of maintaining an adequate level of cash to sustain its operations. The ability to operate on a break-even basis and generate cash consistent with DOD’s regulations depends on accurately (1) projecting workload, (2) estimating costs, and (3) setting prices to recover the full costs of producing goods and services. DOD policy requires the DWWCF to establish its sales prices prior to the start of each fiscal year and to apply these predetermined or “stabilized” prices to most orders received during the year—regardless of when the work is accomplished or what costs are incurred. Stabilized prices provide customers with protection during the year of execution from prices greater than those assumed in the budget and permit customers to execute their programs as authorized by Congress. Developing accurate prices is challenging because the process to determine the prices begins about 2 years in advance of when the DWWCF actually receives customers’ orders and performs the work. In essence, the DWWCF’s budget development has to coincide with the development of its customers’ budgets so that they both use the same set of assumptions. To develop prices, the DWWCF estimates labor, material, and other costs based on anticipated demand for work as projected by customers. Higher-than-expected costs or lower-than-expected customer sales for goods and services can result in lower cash balances. Conversely, lower-than-expected costs or higher-than-expected customer sales for goods and services can result in higher cash balances. Because the DWWCF must base sales prices on assumptions made as long as 2 years before the prices go into effect, some variance between expected and actual costs and sales is inevitable. If projections of cash disbursements and collections indicate that cash balances will drop below the lower cash requirement, the DWWCF may need to generate additional cash. One way this may be done is to bill customers in advance for work not yet performed. Advance billing generates cash almost immediately but is a temporary solution, used only when cash reaches critically low balances because it requires manual intervention in the normal billing and payment processes. During fiscal year 2016, DLA, DISA, and DFAS reported total revenue of $45.7 billion through the DWWCF. DLA: During fiscal year 2016, DLA reported total revenue of $37.5 billion. In addition to centrally managing DWWCF cash, DLA operates three activity groups: Supply Chain Management, Energy Management, and Document Services. The Supply Chain Management activity group manages material from initial purchase, to distribution and storage, and finally to disposal or reutilization.
This activity group fills about 36.6 million customer orders annually and manages approximately 6.2 million consumable items, including (1) 2.6 million repair parts and operating supply items to support aviation, land, and maritime weapon system platforms; (2) dress and field uniforms, field gear, and personal chemical protective items to support military servicemembers and other federal agencies; (3) 1.3 million medical items for military servicemembers and their dependents; and (4) subsistence and construction items to support our troops both at home and abroad. The Energy Management activity group provides comprehensive worldwide energy solutions to DOD as well as other authorized customers. This activity group provides goods and services, including petroleum, aviation, and natural gas products; facility and equipment maintenance on fuel infrastructure; coordination of bulk petroleum transportation; and energy-related environmental assessments and cleanup. The Document Services activity group is responsible for DOD printing, duplicating, and document automation programs. DISA: During fiscal year 2016, DISA reported total revenue of $6.8 billion. DISA is a combat support agency responsible for planning, engineering, acquiring, fielding, and supporting global information technology solutions to serve the needs of the military services and defense agencies. It operates the Information Services activity group within the DWWCF. This activity group consists of two components: Computing Services and Telecommunications Services/Enterprise Acquisition Services. The Computing Services component operates eight Defense Enterprise Computing Centers, which provide mainframe and server processing operations, data storage, production support, technical services, and end-user assistance for command and control, combat support, and enterprise service applications across DOD. The computing centers support over 4 million users through 21 mainframes and almost 14,500 servers. Among other things, these services enable DOD components to (1) provide for the command and control of operating forces; (2) ensure weapons systems availability through management and control of maintenance and supply; and (3) provide operating forces with information on the location, movement, status, and identity of units and supplies. The Telecommunications Services/Enterprise Acquisition Services component provides telecommunications services to meet DOD’s command and control requirements. One element of this component is the Defense Information System Network. The Defense Information System Network is a collection of telecommunication networks that provides secure and interoperable connectivity of voice, data, text, imagery, and bandwidth services for DOD, coalition partners, combatant commands, and other federal agencies. Another element of this component is the Enterprise Acquisition Services, which provides contracting services for information technology and telecommunications acquisitions from the commercial sector and provides contracting support to the Defense Information System Network programs as well as to other DISA, DOD, and authorized non-Defense customers. DFAS: During fiscal year 2016, DFAS reported total revenue of $1.4 billion. DFAS pays all DOD military and civilian personnel, military retirees and annuitants, and DOD contractors and vendors. 
In fiscal year 2016, DFAS processed about 122 million pay transactions, paid 12 million commercial invoices, accounted for 1,359 active DOD appropriations while maintaining 152 million general ledger accounts, and made $535 billion in disbursements to about 6.4 million customers. DOD requires each of its working capital funds to maintain a minimum cash balance sufficient to pay bills, which, for the DWWCF, includes payments for (1) consumable items (spare parts) and petroleum products from vendors; (2) employees’ salaries to perform material management, information services, and finance and accounting functions; and (3) expenses associated with the maintenance and operations of DLA, DISA, and DFAS facilities. The provisions of the DOD Financial Management Regulation that provide guidance on the calculation of DOD working capital funds’ upper and lower cash requirements have changed several times over the past 10 years—the period covered by our audit—as discussed below. Prior to June 2010, DOD’s Financial Management Regulation stated that “cash levels should be maintained at 7 to 10 days of operational cost and cash adequate to meet six months of capital disbursements.” Thus, the minimum cash requirement consisted of 6 months of capital requirements plus 7 days of operational cost, and the maximum cash requirement consisted of 6 months of capital requirements plus 10 days of operational cost. The regulation further provided that a goal of DOD working capital funds was to minimize the use of advance billing of customers to maintain cash solvency, unless advance billing is required to avoid Antideficiency Act violations. The DOD Financial Management Regulation was amended in June 2010. As a result, from June 2010 through June 2015, DOD working capital funds were allowed—with the approval of the Office of the Under Secretary of Defense (Comptroller), Director of Revolving Funds—to incorporate three new adjustments into the formula for calculating the minimum and maximum cash requirements. These adjustments would increase the minimum and maximum cash requirements. First, a working capital fund could increase the cash requirements by the amount of accumulated profits planned for return to customer accounts. A working capital fund returns accumulated profits to its customers by reducing future prices so it can operate on a break-even basis over time. The second adjustment allowed by the revised DOD Financial Management Regulation was for funds appropriated to the working capital fund that were obligated in the year received but not fully spent until future years. The adjustment allowed the working capital fund to retain these amounts as an addition to its normal operational costs. Finally, a working capital fund could increase the minimum and maximum cash requirements by the marginal cash required to purchase goods and services from the commodity or business market at a higher price than that submitted in the President’s Budget. The adjustment reflected the cash impact of the specified market fluctuation. Beginning in July 2015, DOD revised its cash management policy to maintain a positive cash balance throughout the year and an adequate ending balance to support continuing operations into the subsequent year. In setting the upper and lower cash requirements, DOD working capital funds are to consider the following four elements: Rate of disbursement. The rate of disbursement is the average amount disbursed between collection cycles.
It is calculated by dividing the total amount of disbursements planned for the year by the number of collection cycles planned for the year. The rate describes the average amount of cash needed to cover disbursements from one collection cycle to the next. Range of operation. The range of operation is the difference between the highest and lowest expected cash levels based on budget assumptions and past experience. The DOD Financial Management Regulation noted that cash balances are not static and that volatility can be expected because of annual, quarterly, and more frequent seasonal trends and significant onetime events. Risk mitigation. Some amount of cash is required, beyond the range of operation discussed above, to mitigate the inherent risk of unplanned and uncontrollable events. The risks may include budget estimation errors, commodity price fluctuations, and crisis response missions. Reserves. Cash reserves are funds held for known future requirements. This element provides for cash on hand to cover specific requirements that are not expected to disburse until subsequent fiscal years. Our analysis of DWWCF cash data showed that the DWWCF monthly cash balances fluctuated significantly from fiscal years 2007 through 2016 and were outside the upper and lower cash requirements for 87 of the 120 months—about 73 percent of the time for this period. Reasons why the monthly cash balances were outside the cash requirements included, among other things, (1) DLA charging its customers more or less than it cost to purchase, refine, transport, and store fuel and (2) DOD transferring funds into or out of the DWWCF to pay for combat fuel losses or other higher priorities. During this 10-year period, DOD took actions to adjust the DWWCF cash balance, such as transferring funds to other appropriation accounts, but the actions did not always bring the balances within the requirements in a timely manner. As a result, the monthly cash balances were above or below the cash requirements for more than 12 consecutive months on three separate occasions from fiscal years 2007 through 2016. Figure 1 shows the DWWCF monthly cash balances compared to the upper and lower cash requirements from fiscal years 2007 through 2016. Further, for the 10-year period, the DWWCF’s reported monthly cash balances were above the upper cash requirement 62 times, between the upper and lower cash requirements 33 times, and below the lower cash requirement 25 times. Table 1 shows the number of months the DWWCF monthly cash balances were above, between, or below the upper and lower cash requirements for each of the 10 years reviewed. The monthly cash balances were above or below the cash requirements for more than 12 consecutive months on three separate occasions from fiscal years 2007 through 2016. Specifically, the monthly cash balances were (1) below the lower cash requirement for 13 consecutive months beginning in October 2007, (2) above the upper cash requirement for 29 consecutive months beginning in March 2010, and (3) above the upper cash requirement for 15 consecutive months beginning in April 2015. The draft DOD Financial Management Regulation currently being implemented by the DWWCF provides information on the management tools DOD cash managers can use to bring cash balances within the upper and lower cash requirements.
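To illustrate the calculations described above, the following sketch applies the pre-June 2010 formula (6 months of capital disbursements plus 7 to 10 days of operational cost) and the rate-of-disbursement element of the current policy, and then classifies a month-end balance against the resulting requirements. The sketch is illustrative only; the dollar amounts and the collection-cycle count are hypothetical assumptions and are not drawn from DWWCF budget documents.

    # Illustrative sketch of the DWWCF cash requirement calculations described above.
    # All dollar figures and the collection-cycle count are hypothetical assumptions.

    annual_operational_cost = 45_700_000_000   # assumed annual operating disbursements
    six_month_capital = 150_000_000            # assumed 6 months of capital disbursements
    collection_cycles_per_year = 52            # assumed weekly collection cycle

    daily_operational_cost = annual_operational_cost / 365

    # Pre-June 2010 rule: lower requirement = 6 months of capital disbursements plus
    # 7 days of operational cost; upper requirement = the same plus 10 days.
    lower_requirement = six_month_capital + 7 * daily_operational_cost
    upper_requirement = six_month_capital + 10 * daily_operational_cost

    # Rate of disbursement under the July 2015 policy: planned annual disbursements
    # divided by the number of collection cycles planned for the year.
    rate_of_disbursement = annual_operational_cost / collection_cycles_per_year

    def classify(balance):
        """Classify a month-end cash balance against the upper and lower requirements."""
        if balance > upper_requirement:
            return "above the upper requirement"
        if balance < lower_requirement:
            return "below the lower requirement"
        return "within the requirements"

    print(f"Lower requirement: ${lower_requirement:,.0f}")
    print(f"Upper requirement: ${upper_requirement:,.0f}")
    print(f"Rate of disbursement: ${rate_of_disbursement:,.0f}")
    print(classify(574_000_000))   # the December 2007 low point, for example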
The draft regulation states that these tools include, but are not limited to, changing the frequency of collections; controlling the timing of contract renewals and large obligations or disbursements; negotiating the timing of customer orders and subsequent work; and requesting policy waivers, when necessary. In addition, the draft DOD Financial Management Regulation provides guidance on transfers of cash between DOD working capital fund activities or between DOD working capital fund activities and appropriation-funded activities. DOD has used transfers to increase or decrease cash balances in the past, under authorities provided in annual appropriations acts. We determined that DOD took actions during fiscal years 2007 through 2016 to increase or decrease cash balances in the DWWCF. An Office of the Under Secretary of Defense (OUSD) (Comptroller) official informed us that DLA provides the OUSD (Comptroller) monthly information on DWWCF cash balances compared to the upper and lower cash requirements. This information is used by the OUSD (Comptroller) and DWWCF officials to determine whether actions are needed to increase or decrease the DWWCF cash balance, such as transferring funds into or out of the DWWCF. Our analysis of accounting documentation and discussions with OUSD (Comptroller) and DLA officials showed that DOD used two types of actions to increase or decrease the monthly cash balances to help bring the cash balances within the cash requirements during the 10-year period of our review. First, DOD transferred a total of about $9 billion into and out of the DWWCF to adjust the cash balance during this period. For example, DOD transferred about $3 billion from the DWWCF to other DOD appropriation accounts throughout fiscal year 2016. This reduced the DWWCF cash balance to within the cash requirements after it had remained above the upper cash requirement for 15 consecutive months beginning in April 2015. Second, the OUSD (Comptroller), in coordination with DLA, adjusted the standard fuel prices upward or downward a total of 16 times outside of the normal budget process. For example, in fiscal year 2012, DOD lowered the standard fuel price three times from October 2011 through July 2012 to help offset cash increases resulting from DLA Energy Management’s sales of fuel to its customers. These price changes helped offset further increases in the cash balances, but they did not reduce the monthly cash balances to within the cash requirements during this period. Although DOD managers used management tools such as transfers and price adjustments to help bring the DWWCF monthly cash balances within the upper and lower cash requirements, such actions did not always bring the balances within the requirements in a timely manner. Specifically, as noted above, the DWWCF monthly cash balances were outside the upper and lower cash requirements for more than 12 consecutive months on three separate occasions during the 10-year period of our audit. We selected the 12-month period for comparing the monthly cash balances to the cash requirements because (1) DOD develops a budget every 12 months; (2) the DWWCF annual budget projects fiscal year-end cash balances and the upper and lower cash requirements based on 12-month cash plans that include information on projected monthly cash balances and whether those projected monthly cash balances fall within the cash requirements; and (3) DOD revises the DWWCF stabilized prices that it charges customers for goods and services every 12 months during the budget process.
Our analysis of both the current official DOD Financial Management Regulation and the provisions of the draft version that DOD has begun implementing determined that the regulations do not provide guidance on when DOD managers should use available management tools so that the tools are effective in helping to ensure that monthly cash balances are within the upper and lower cash requirements. Without this guidance, DOD risks not taking prompt action to bring the monthly cash balances within the cash requirements. When DWWCF monthly cash balances are below the lower cash requirements for long periods, the DWWCF is at greater risk of either (1) not paying its bills on time or (2) making a disbursement in excess of available cash, which would potentially constitute an Antideficiency Act violation. In cases of cash balances above the upper requirement, the DWWCF may be holding funds that could be used for higher priorities. The DWWCF monthly cash balances were below the lower cash requirement for 19 of the 36 months from fiscal years 2007 through 2009, as shown in table 1. During the 3-year period, the DWWCF reported that monthly cash balances were below the lower cash requirement for 13 consecutive months from October 2007 through October 2008, falling to their lowest point in December 2007 at $574 million—$890 million below the lower cash requirement. According to DLA and DISA officials and our analysis of financial documentation, the monthly cash balances were below the lower cash requirement for more than half of the 3-year period for four primary reasons. First, during the first 4 months of fiscal year 2007, DISA disbursed $340 million more than it collected because (1) DISA’s customers received funding late in the first quarter of fiscal year 2007, which delayed certain support service contracts with DISA that in turn delayed DWWCF collections until the second quarter of fiscal year 2007, and (2) DISA reduced fiscal year 2007 prices to return prior year accumulated profits to its customers. Second, DOD transferred $262 million in November 2006 from the DWWCF to the Air Force Working Capital Fund to cover increased fiscal year 2006 Air Force fuel costs. Third, DLA monthly cash balances declined during the first 4 months of fiscal year 2007 because of financial systems issues affecting collections when certain DLA activities transitioned to another financial system. Fourth, in fiscal year 2008, DLA disbursed about $1.3 billion more for, among other things, the purchase, refinement, transportation, and storage of fuel than it collected for the sale of fuel to its customers because of higher fuel costs. When DWWCF monthly cash balances are below the lower cash requirements for long periods, the DWWCF is at greater risk of either (1) not paying its bills on time or (2) making a disbursement in excess of available cash, which would potentially constitute an Antideficiency Act violation. The DWWCF’s reported monthly cash balances increased from about $1.5 billion at the beginning of fiscal year 2010 (October 1, 2009) to $3 billion at the end of fiscal year 2010—about $1.5 billion more than the beginning balance and $1.1 billion above the upper cash requirement. The reported monthly cash balance remained high for most of the next 2 fiscal years, with an average monthly cash balance of $3.1 billion. During the 3-year period, the monthly cash balances were above the upper cash requirement for 29 of the 36 months, as shown in table 1. All 29 months were consecutive.
According to DLA officials and our analysis of financial documentation, the monthly cash balances were above the upper cash requirement for four primary reasons. First, DWWCF monthly cash balances increased when the DWWCF received $1.4 billion in appropriations from fiscal year 2010 through fiscal year 2012 to pay mostly for, among other things, combat fuel losses and fuel transportation charges associated with operations in Iraq and Afghanistan. Second, in fiscal year 2010, DLA Energy Management charged its customers more per barrel of fuel than it cost to purchase, refine, transport, and store the product (among other things), causing an increase in cash of approximately $659 million. Third, DLA Supply Chain Management collected more from the sale of inventory to its customers than it disbursed for the purchase of inventory from its suppliers in the second half of fiscal year 2010, resulting in a $296 million increase in cash. Fourth, in June 2012, DOD transferred $1 billion into the DWWCF from the Afghanistan Security Forces Fund (a onetime infusion of cash) to compensate for the reduced price that DLA Energy Management was charging its customers for fuel. DWWCF monthly cash balances were below the lower cash requirement three times in the beginning of fiscal year 2013 before ending the fiscal year at a level above the upper cash requirement. During fiscal year 2013, the DWWCF monthly cash balances were outside the cash requirements for 7 of 12 months, as shown in table 1. There was a wide range in the reported monthly cash balances, from a low of $929 million in January 2013 to a high of $2.8 billion in June 2013. According to DLA officials and our analysis of financial documentation, the monthly cash balances were below the cash requirements in November 2012, January 2013, and February 2013 for two primary reasons. First, in the first 5 months of fiscal year 2013, DLA Energy Management disbursed $588 million more to purchase, refine, transport, and store fuel than it was paid from its customers for the sale of fuel. Second, DLA Supply Chain Management disbursed $280 million more for the purchase of inventory than it collected from the sale of inventory to its customers in the first 5 months of fiscal year 2013. On the other hand, the monthly cash balances were above the cash requirement for the last 4 months of fiscal year 2013 because funds were transferred into the DWWCF in June and September 2013. Specifically, DOD transferred $1.4 billion into the DWWCF from various defense appropriations accounts to mitigate cash shortfalls that resulted from DLA Energy Management paying higher costs for refined fuel products. DWWCF monthly cash balances were above the upper cash requirements for 25 of 36 months from fiscal years 2014 through 2016. During the 3-year period, the reported monthly cash balances averaged $2.9 billion, reaching a high point of $4.7 billion in May 2016—about $1.9 billion above the upper cash requirement. Furthermore, the monthly cash balances remained above the upper cash requirement for 15 consecutive months, from April 2015 through June 2016. According to DLA officials and our analysis of financial documentation, the monthly cash balances were above the upper cash requirement for three primary reasons. First, the DWWCF cash balance at the beginning of fiscal year 2014 was above the upper cash requirement because of the $1.4 billion transferred into the fund during the last 4 months of fiscal year 2013. 
Second, in fiscal year 2015, DLA Energy Management’s price for the sale of fuel to its customers was considerably more than the cost to purchase, refine, transport, and store fuel. As a result, DLA Energy Management collected about $3.7 billion more than it disbursed for fuel during the year. Third, in fiscal year 2016, DLA Energy Management continued to charge its customers more for fuel than it cost. Initially, the DWWCF monthly cash plan that supports the fiscal year 2017 President’s Budget, dated February 2016, showed the monthly cash balances were projected to be above the upper cash requirement for most of fiscal year 2017. However, after the DWWCF revised its fiscal year 2017 cash plan in October 2016, cash balances were projected to be within the upper and lower cash requirement for all 12 months, in accordance with the DOD Financial Management Regulation. According to DOD officials, the DWWCF changed its plan after the President’s Budget was issued because (1) DOD made unplanned cash transfers out of the DWWCF in the second half of fiscal year 2016 and (2) DOD reduced the fiscal year 2017 standard fuel price in September 2016, leading to lower projected cash balances. Figure 2 shows the DWWCF’s initial cash plan under the President’s Budget and the revised monthly cash plans compared to the upper and lower cash requirements for fiscal year 2017. As shown in figure 2, the DWWCF’s revised cash plan for fiscal year 2017 shows that the cash balance at the end of fiscal year 2016 (September 2016) was about $1.1 billion lower than the President’s Budget cash plan. According to DOD officials and our review of documentation on transfers, this decrease was largely due to DOD’s transfer of about $2 billion out of the DWWCF in the second half of fiscal year 2016, which occurred after the fiscal year 2017 President’s Budget cash plan was submitted in February 2016. The $2 billion of DWWCF cash was transferred to other DOD appropriations accounts to pay for, among other things, unforeseen military requirements to maintain a larger troop presence in Afghanistan than that planned for in the President’s Budget and funding shortfalls in fuels, consumable inventory items, repair parts, and medical supplies and services. The DWWCF monthly cash balances under the revised cash plan are projected to remain within the upper and lower cash requirements for all 12 months in fiscal year 2017. This is a significant change from the President’s Budget cash plan that showed the DWWCF monthly cash balances exceeding the upper cash requirement for 9 of the 12 months. Another factor that contributed to the improved results reflected in the revised plan is that the OUSD (Comptroller) lowered the standard fuel price in September 2016 from $105.00 per barrel to $94.92—a $10.08 difference. DOD lowered the standard fuel price for refined petroleum products because the fiscal year 2017 costs for those products were expected to remain lower than initially projected when the fiscal year 2017 President’s Budget was developed. In connection with its decision to lower fuel prices, DOD stated that the lower refined product costs have had a positive impact on the DWWCF cash balance (i.e., higher cash balances), and the department anticipated an associated congressional reduction as a result of the positive cash balance. Thus, DOD reduced the fiscal year 2017 standard fuel price effective October 1, 2016. 
In November 2016, DLA officials informed us that several factors could nevertheless cause the DWWCF monthly cash balances to fall outside the upper and lower cash requirements in fiscal year 2017. These factors could include (1) higher or lower fuel product costs than expected, (2) higher or lower customer sales than expected, (3) the timing of vendor payments on a daily basis versus collections from customers on a weekly or monthly basis, and (4) nonpayment from customers for goods or services provided to them. While DWWCF officials stated that these factors could affect whether the DWWCF monthly cash balances are within the cash requirements for all 12 months in fiscal year 2017, the officials believe that the cash balances will remain within the cash requirements for the entire period. The DWWCF supports military readiness by providing energy solutions, inventory management, information system solutions, and financial services for DOD in times of peace and war. Maintaining the DWWCF cash balance within the upper and lower cash requirements as defined by DOD regulation is critical for the DWWCF to continue providing these services for its customers. During fiscal years 2007 through 2016, DOD transferred a total of $9 billion into and out of the DWWCF and adjusted fuel prices 16 times outside of the normal budget process to try to bring the cash balances within the cash requirements. However, the cash balances were outside the upper and lower cash requirements almost three quarters of that time and for more than a year on three separate occasions. Although the DOD Financial Management Regulation provides guidance on tools DOD managers can use to help bring the monthly cash balances within the upper and lower cash requirements, the regulation does not provide guidance on the timing of when DWWCF managers should use these tools to help ensure that the monthly cash balances are within the cash requirements. Without this guidance, DOD risks not taking prompt action in response to changes in fuel costs, inventory costs, appropriations, or other events to bring the monthly cash balances within the cash requirements. When DWWCF monthly cash balances are below the lower cash requirements for long periods of time, the DWWCF is at greater risk of either (1) not paying its bills on time or (2) making a disbursement in excess of available cash, which would potentially result in an Antideficiency Act violation. In cases of cash balances above the upper requirement, the DWWCF may restrict funds that could be used for other higher priorities. We recommend that the Secretary of Defense direct the Office of the Under Secretary of Defense (Comptroller) to provide guidance in the DOD Financial Management Regulation on the timing of when DOD managers should use available tools to help ensure that monthly cash balances are within the upper and lower cash requirements. We provided a draft of this report to DOD for comment. In its written comments, which are reprinted in appendix II, DOD concurred with our recommendation and stated that it plans to update the DOD Financial Management Regulation as we recommended to provide additional guidance on the timing of when DOD managers should use available tools to help ensure that monthly cash balances are within the upper and lower cash requirements. DOD also stated that this change will be incorporated for the fiscal year 2019 President’s Budget submission and subsequent budgets. DOD also provided a technical comment, which we incorporated as appropriate. 
We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Under Secretary of Defense (Comptroller), and the Directors of the Defense Logistics Agency, the Defense Finance and Accounting Service, and the Defense Information Systems Agency. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-9869 or khana@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. To determine to what extent the Defense-wide Working Capital Fund’s (DWWCF) reported monthly cash balances were within the Department of Defense’s (DOD) upper and lower cash requirements from fiscal years 2007 through 2016, we (1) obtained the DWWCF’s reported monthly cash balances for fiscal years 2007 through 2016, (2) used the DOD Financial Management Regulation that was in effect at that time to determine the upper and lower cash requirements, and (3) compared the upper and lower cash requirements to the month-ending reported cash balances. If the cash balances were not within the upper and lower requirement amounts, we met with the Defense Logistics Agency (DLA), the Defense Information Systems Agency (DISA), and the Defense Finance and Accounting Service (DFAS) officials and reviewed DWWCF budgets and other documentation to ascertain the reasons. We also identified instances in which monthly cash balances were outside the upper and lower cash requirements for 12 consecutive months or more to assess the timeliness of DOD actions to bring them within the upper and lower cash requirements. We selected the 12-month period for comparing the monthly cash balances to the cash requirements because (1) DOD develops a budget every 12 months; (2) the DWWCF annual budget projects fiscal year-end cash balances and the upper and lower cash requirements based on 12-month cash plans that include information on projected monthly cash balances, and whether those projected monthly cash balances fall within the cash requirements; and (3) DOD revises the DWWCF stabilized prices that it charges customers for goods and services every 12 months during the budget process. In addition, we performed a walk-through of DFAS processes for reconciling the Department of the Treasury trial balance monthly cash amounts for the DWWCF to the balances reported on the DWWCF cash management reports. Further, to determine the extent cash transfers for fiscal years 2007 through 2016 contributed to the DWWCF cash balances being above or below the cash requirements, we (1) analyzed DOD budget and accounting reports to determine the dollar amount of transfers made for the period and (2) obtained journal vouchers from DFAS that documented the dollar amounts of the cash transfers. We analyzed cash transfers to determine if any of the transfers contributed to the cash balances falling outside the upper or lower cash requirements and, if so, the amount outside those requirements. We also obtained and analyzed documents that provide information on transfer of funds into and out of the DWWCF and interviewed key DLA, DISA, and DFAS officials to determine the reasons for the transfers. 
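As a simple illustration of the comparison just described—flagging months in which the reported balance fell outside the upper and lower cash requirements and identifying runs of 12 or more consecutive months outside them—the following sketch shows one way such an analysis could be coded. The requirement amounts and monthly balances in the sketch are placeholder values, not the actual DWWCF data we analyzed.

    # Illustrative sketch of flagging month-end balances that fall outside the cash
    # requirements and finding runs of 12 or more consecutive months outside them.
    # The requirement amounts and monthly balances below are placeholder values.

    lower_requirement = 1_400_000_000
    upper_requirement = 2_800_000_000

    monthly_balances = [1_500_000_000, 1_350_000_000, 3_000_000_000, 3_100_000_000,
                        2_900_000_000, 2_950_000_000, 3_200_000_000, 2_700_000_000]

    def outside(balance):
        """Return True if a balance is above the upper or below the lower requirement."""
        return balance > upper_requirement or balance < lower_requirement

    months_outside = sum(1 for b in monthly_balances if outside(b))

    # Track the longest run of consecutive months outside the requirements.
    longest_run = current_run = 0
    for balance in monthly_balances:
        current_run = current_run + 1 if outside(balance) else 0
        longest_run = max(longest_run, current_run)

    print(f"Months outside the requirements: {months_outside} of {len(monthly_balances)}")
    print(f"Longest consecutive run outside: {longest_run} months")
    if longest_run >= 12:
        print("At least one run of 12 or more consecutive months outside the requirements")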
To determine to what extent the DWWCF’s projected monthly cash balances were within the upper and lower cash requirements for fiscal year 2017, we obtained and analyzed DWWCF budget documents and cash management plans for fiscal year 2017. We compared the upper and lower cash requirements to the month-ending projected cash balances. If the projected monthly cash balances were above or below the cash requirement, we discussed these balances with DLA officials to ascertain the reasons. We obtained the DWWCF financial data in this report from budget documents and accounting reports. To assess the reliability of these data, we (1) obtained the DOD regulation on calculating the upper and lower cash requirements; (2) reviewed DLA’s calculations of the cash requirements to determine if they were calculated in accordance with DOD regulations; (3) interviewed DLA, DISA, and DFAS officials knowledgeable about the cash data; (4) compared DWWCF cash balance information (including collections and disbursements) contained in different reports to ensure that the data reconciled; (5) obtained an understanding of the process DFAS used to reconcile DWWCF cash balances with the Department of the Treasury records; and (6) obtained and analyzed documentation supporting the amount of funds transferred in and out of the DWWCF. On the basis of these procedures, we have concluded that these data were sufficiently reliable for the purposes of this report. We performed our work at the headquarters of the Office of the Under Secretary of Defense (Comptroller), Washington, D.C.; DLA, Fort Belvoir, Virginia; DISA, Columbus, Ohio; and DFAS, Indianapolis, Indiana. We conducted this performance audit from June 2016 to June 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Greg Pugnetti (Assistant Director), John Craig, Steve Donahue, and Keith McDaniel made key contributions to this report. | The Defense Finance and Accounting Service, the Defense Information Systems Agency, and DLA use the DWWCF to charge for goods and services provided to the military services and other customers. The DWWCF relies primarily on sales revenue rather than annual appropriations to finance its continuing operations. The DWWCF reported total revenue of $45.7 billion in fiscal year 2016 from (1) providing finance, accounting, information technology, and energy solution services to the military services and (2) managing inventory items for the military services. GAO was asked to review issues related to DWWCF cash management. GAO's objectives were to determine to what extent (1) the DWWCF's reported monthly cash balances were within DOD's upper and lower cash requirements from fiscal years 2007 through 2016 and (2) the DWWCF's projected monthly cash balances were within the upper and lower cash requirements for fiscal year 2017. To address these objectives, GAO reviewed relevant DOD cash management guidance, analyzed DWWCF actual reported and projected cash balances and related data, and interviewed DWWCF officials. 
The Defense-wide Working Capital Fund's (DWWCF) reported monthly cash balances were outside the upper and lower cash requirements as defined by the Department of Defense's (DOD) Financial Management Regulation (FMR) for 87 of 120 months, and more than 12 consecutive months on three separate occasions during fiscal years 2007 through 2016. Reasons why the balances were outside the requirements at selected periods of time include the following: The Defense Logistics Agency (DLA) disbursed about $1.3 billion more in fiscal year 2008 for, among other things, the purchase of fuel than it collected from the sale of fuel because of higher fuel costs. DOD transferred $1.4 billion to the DWWCF in fiscal year 2013 because of cash shortfalls that resulted from DLA paying higher costs for fuel. DLA collected about $3.7 billion more from the sale of fuel than it disbursed for fuel in fiscal year 2015 because of lower fuel costs. Although the DOD FMR contains guidance on tools DOD managers can use to help ensure that the monthly cash balances are within the requirements, the regulation does not provide guidance on when to use the tools. Without this guidance, DOD risks not taking prompt action to bring the monthly cash balances within requirements. When monthly cash balances are outside requirements for long periods of time, the DWWCF is at further risk of not paying its bills on time or holding funds that could be used for other higher priorities. Initially, the DWWCF's cash plan that supports the fiscal year 2017 President's Budget, dated February 2016, showed the monthly balances were projected to be above the upper cash requirement for most of fiscal year 2017. However, its October 2016 revised plan showed that the monthly cash balances were projected to be within the requirements for all 12 months. The plan changed because (1) DOD made unplanned cash transfers out of the DWWCF in the second half of fiscal year 2016 and (2) DOD reduced the standard fuel price in September 2016, leading to lower projected cash balances for fiscal year 2017. GAO recommends that DOD update the FMR to include guidance on the timing of when DOD managers should use available tools to help ensure that monthly cash balances are within the upper and lower cash requirements. DOD concurred with GAO's recommendation and cited related actions planned. |
The Coast Guard is a multimission, maritime military service within DHS. The Coast Guard’s responsibilities fall into two general categories—those related to homeland security missions, such as port security, vessel escorts, security inspections, and defense readiness; and those related to non-homeland security missions, such as search and rescue, environmental protection, marine safety, and polar ice operations. To carry out these responsibilities, the Coast Guard operates a number of vessels and aircraft and, through its Deepwater Program, is currently modernizing or replacing those assets. At the start of the Deepwater Program, the Coast Guard chose to use a system-of-systems acquisition strategy that would replace its assets with a single, integrated package of aircraft, vessels, and communications systems through ICGS, a lead system integrator that was responsible for designing, constructing, deploying, supporting, and integrating the assets to meet Coast Guard requirements. Under this approach, the Coast Guard provided the contractor with broad, overall performance specifications—such as the ability to interdict illegal immigrants—and ICGS determined the specifications for the Deepwater assets. The decision to use a lead system integrator was driven in part by the Coast Guard’s lack of expertise in managing and executing an acquisition of this magnitude. In past reports on Deepwater, as well as on the Army’s Future Combat Systems, which is pursuing a similar acquisition approach for similar reasons, we have raised a number of concerns about this approach to acquiring complex systems. The role of a system integrator differs from that of a traditional prime contractor in that it includes increased responsibilities for ensuring that the design, development, and implementation of the system-of-systems it is under contract to produce meet the established budget and schedule. The close working relationship with the government that this arrangement engenders has advantages and disadvantages. An advantage is that such a relationship allows flexibility in responding to shifting priorities. Disadvantages are the government’s weakened ability to provide oversight over the long term and the potential for increased costs. In a series of reports since 2001, we have noted the risks inherent in the lead system integrator approach to the Deepwater Program and have made a number of recommendations intended to improve the Coast Guard’s management and oversight. In particular, we raised concerns about the agency’s ability to keep costs under control in future program years by ensuring adequate competition for Deepwater assets and pointed to the need for better oversight and management of the system integrator. We, as well as the DHS Inspector General and others, have also noted problems in specific acquisition efforts, notably the National Security Cutter (NSC) and the 110-Foot Patrol Boat Modernization, which the Commandant of the Coast Guard permanently halted in November 2006 because of operational and safety concerns. Acknowledging that the initial approach to Deepwater gave too much control to the contractor, the Coast Guard has reoriented its acquisition organization to position itself to execute systems integration and program management responsibilities formerly carried out by industry. Project managers, whose role in the past was largely one of monitoring ICGS without the authority to make decisions, have now been vested with accountability for program outcomes.
In addition, integrated product teams (IPT)—a key program management tool—are now led by Coast Guard officials, not contractor representatives. The Coast Guard has also increased its leverage of its own technical authorities and third party expertise. In the midst of these positive changes, the Coast Guard, like other federal agencies, faces challenges in building a capable government workforce to manage this large acquisition. While it attempts to reduce vacancy rates, it is relying on support contractors in key positions. Since July 2007, the Coast Guard has consolidated acquisition responsibilities into a single acquisition directorate, known as CG-9, and is making efforts to standardize operations within this directorate. Previously, Deepwater assets were managed independently of other Coast Guard acquisitions by the Deepwater Program Executive Office in an insulated structure. The Coast Guard’s goal for the reorganization is to provide greater consistency in its oversight and acquisition approach by concentrating acquisition activities under a single official and allowing greater leveraging of knowledge and resources across programs. The Coast Guard’s consolidation of the acquisition function into a single directorate is consistent with best practices as it allows the agency to operate strategically to meet its overall missions and needs. Figure 1 depicts the changes to the Coast Guard’s acquisition structure. In conjunction with the restructuring of its acquisition directorate, Coast Guard officials have begun to increase the responsibilities and accountability of the project managers who oversee the acquisition of Deepwater assets. Previously, ICGS was charged with a number of key program management responsibilities—ranging from designing and constructing assets to developing concepts for deployment and operations—while Coast Guard program and project managers tracked and monitored the contractor’s activities. The Coast Guard’s new approach increases government control over these key elements of program management while vesting project managers with authority and accountability they lacked in the past. For example, a previous Deepwater management plan emphasized “partnership” between the Coast Guard and ICGS in managing Deepwater and “joint and ICGS responsibility for overall management and execution of the program, including authorization of necessary resources and resolving performance, cost, schedule, and risk tradeoff issues.” Under this scenario, according to Coast Guard officials, project managers could not provide as much direction as they wanted because of the terms of the contract, where ICGS bore ultimate responsibility for outcomes. In contrast, Coast Guard project managers are now responsible for defining, planning, and executing the acquisition projects within established cost, schedule, and performance constraints. Another significant shift has been to assert government control over Deepwater integrated product teams. These teams, a key program management tool, consist of groups of project officials and technical experts responsible for discussing options for problem solving relating to cost, schedule, and performance objectives. In the past, the teams were led and managed by the contractor, while government team members acted as “customer” representatives. Now, the teams are led by Coast Guard personnel. Figure 2 shows examples of how responsibility for program outcomes has shifted from ICGS to the Coast Guard. 
The Coast Guard is also establishing technical authorities within the agency who review, approve, and monitor technical standards and ensure that assets meet those standards. The Coast Guard has established a technical authority for engineering to oversee issues related to Deepwater, and officials state that a similar authority for C4ISR is pending. Previously, the Coast Guard held only an advisory role in making technical decisions, and in some cases this arrangement led to poor outcomes. For example, Coast Guard officials told us their engineering experts had raised concerns during the NSC’s design phase about its ability to meet service life requirements and recommended design changes, but were ignored. If the recommendations had been heeded, changes to the ship’s design could have been made earlier and some additional costs might have been avoided. To supplement and enhance the use of its internal expertise, the Coast Guard has increased its use of third-party, independent sources of technical expertise and advice. For example, the Coast Guard is increasing its use of the American Bureau of Shipping (ABS), an independent organization that establishes and applies standards for the design and construction of ships and other marine equipment, to assist the Coast Guard in certifying that Deepwater vessels meet certain safety and performance standards. As a case in point, there are 987 standards pertaining to hull, mechanical, and electrical systems on the first NSC that must be certified. Currently, ICGS is responsible for submitting documentation to the Coast Guard for 892 of the standards, while ABS and other third parties have a minimal role. In contrast, the Coast Guard plans for ABS to be responsible for reviewing approximately 200 certifications starting with the third NSC and to have an even broader role in certifying the design and production of future assets such as the Offshore Patrol Cutter (OPC) and Fast Response Cutter (FRC). In addition, the Coast Guard is using the U.S. Navy’s Space and Naval Warfare Systems Command to verify the security of certain communications systems and has established partnerships with Naval Sea Systems Command, the Navy Board of Inspection and Survey (INSURV), Naval Air Systems Command, and Naval Surface Warfare Center to leverage their expertise. INSURV, for example, conducted acceptance trials of the NSC in April 2008. Effective management of acquisition programs depends on appropriately trained individuals properly placed within the acquisition workforce. In the initial development of the Deepwater contract, the Coast Guard sought a system integrator because it recognized that it lacked the experience and workforce depth to manage the acquisition internally. The Coast Guard’s 2008 acquisition human capital strategic plan sets forth a number of acquisition workforce challenges that pose the greatest threats to acquisition success. Key challenges and Coast Guard actions to address them are cited below. Like many federal agencies that acquire major systems, the Coast Guard faces challenges in recruiting and retaining a sufficient number of government employees in acquisition positions such as contract specialists, cost estimators, system engineers, and program management support. The Coast Guard has taken a number of steps to hire acquisition professionals, including increasing the use of recruitment incentives and relocation bonuses, utilizing direct hire authority, and rehiring government annuitants. 
While some vacancies are to be expected in any organization and especially in an acquisition organization given current trends across the government, the Coast Guard is experiencing vacancy rates of almost 20 percent. The Coast Guard also recognizes the impact of military personnel rotation on its ability to maintain people in key positions. The Coast Guard’s policy of regular three-year rotations of military personnel among units, including to and from the acquisition directorate, limits continuity in key project roles filled by military officers and can have a serious impact on the acquisition expertise gained and maintained by those officers. The presence of Coast Guard officers in the acquisition directorate is important, as they provide specialized expertise in Coast Guard operations and fill key positions as program and project managers and technology leads. While the Coast Guard concedes that it does not have the personnel required to form a dedicated acquisition career field for military personnel, such as that found in the Navy, it is seeking to improve the base of acquisition knowledge throughout the Coast Guard by exposing more officers to acquisitions as they follow their regular rotations. To build this base, the Coast Guard is creating acquisition policy courses at the Coast Guard Academy and other institutions and is working with the academy to create an internship program where interested officer candidates can work within the acquisition directorate. Some of the positions that rely on technical and other expertise, such as project technology leads and contracting officials, remain vacant. In the absence of new personnel to fill these positions, the Coast Guard is forced to turn elsewhere. Officials stated that for some specialties, such as cost estimation, the Coast Guard can leverage existing relationships, such as with the Navy. However, because of a shortage of acquisition personnel across government, support contractors are often used to supplement government staff. For example, all the cost and earned value analysts currently employed by the aviation program are support contractors. Program managers stated that they would prefer these positions be filled by government employees. The head of contracting activity for the Coast Guard cited similar concerns, specifically for using contractors as contract specialists. The issue of support contractors in acquisition is not unique to the Coast Guard. In our recent report on the acquisition of major weapons systems in the Department of Defense (DOD), we found that it too relies heavily on contractors to perform roles in program management, cost estimation, and engineering and technical functions. For example, of the 52 programs we reviewed, support contractors represented 34 percent of program office staff for engineering and technical positions and 22 percent for program management functions. While support contractors can provide a variety of essential services, their use must be carefully overseen to ensure that they do not perform inherently governmental roles. As we recently reported in our work on Army contracting practices, for example, using contractors as contract specialists can create the risk of decreased government control over and accountability for policy and program decisions when contractors provide services that closely support inherently governmental functions. Conflicts of interest, improper use of personal services contracts, and increased costs are also potential risks of reliance on contractors. 
According to officials, the Coast Guard is currently analyzing its workforce to better determine which roles are appropriate for contractors and to what extent support contractors can be used. In addition, it is investigating practices and policies to improve oversight of contractors and ensure their work remains in a supporting role. In order to provide a clearer picture of its future needs for acquisition personnel, the Coast Guard evaluated two potential workforce forecasting tools: one developed internally by the Coast Guard and another developed by the Air Force and tested as part of a broader effort by DHS. The Coast Guard tool is intended to forecast the potential workload of a project office and its acquisition staff requirements by determining the number of hours spent on specific acquisition-oriented work functions, such as contract management, business management, and systems engineering. Coast Guard officials stated that this tool has the potential, if managed correctly, to forecast workforce needs beyond the current fiscal year to enable long-term planning and workforce development. A potential weakness of the tool, according to the Coast Guard, is the significant time investment required of project managers to establish and maintain it. The other forecasting tool relies on historical DOD and Air Force data on program management, supplemented with annual interviews with appropriate project managers, to create estimates of workforce and workload needs. According to the Coast Guard, testing of both tools has been completed and a decision has been made to implement the Air Force staffing model. The Coast Guard’s move away from the ICGS contract and the system-of-systems model to a more traditional, asset-level acquisition strategy has resulted in greater government visibility and control. For example, cost and schedule information are now captured at the individual asset level rather than at the overall, system-of-systems program level, which was difficult to manage. At the same time, however, key aspects of Deepwater still require a system-of-systems approach. These aspects include the C4ISR system and the numbers of each Deepwater asset the Coast Guard requires to achieve its missions. The Coast Guard has not yet determined how to manage these aspects under its new paradigm, yet it is proceeding with Deepwater acquisitions. The Coast Guard’s transition away from the ICGS system-of-systems contract to an asset-by-asset acquisition strategy is enabling increased government visibility and control over its acquisitions. Cost and schedule information are now captured at the individual asset level rather than at the system-of-systems program level, which did not yield useful information for decision making. For example, while cost and schedule breaches in the past were to be reported to DHS at the Deepwater system-of-systems level only—an unlikely occurrence as only a catastrophic event would ever trigger a threshold breach under that approach—the Coast Guard is now reporting breaches by asset. In 2007, for example, the Coast Guard reported breaches for the NSC and for the C-130J. Because of a number of factors, including changes to the ship’s design and requirements, the total acquisition cost of the NSC class increased by $520 million, or 15 percent, and the schedule for lead ship delivery was delayed by approximately 2 years. 
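As an illustration of the asset-level breach reporting described above, the check below flags a breach when an asset's current cost estimate or schedule slip exceeds its baseline by more than a threshold. The 15 percent cost and 1-year schedule thresholds, and the NSC baseline figure, are assumptions for illustration only; the NSC growth and delay values approximate those reported above.

```python
# Illustrative asset-level breach check. Threshold values and the baseline cost
# are assumptions, not the actual DHS/Coast Guard reporting thresholds.

COST_THRESHOLD = 0.15      # assumed: report a breach at >= 15% cost growth
SCHEDULE_THRESHOLD = 1.0   # assumed: report a breach at >= 1 year of slip

def check_breach(baseline_cost, current_cost, schedule_slip_years):
    """Return a list of breach types triggered for a single asset."""
    breaches = []
    growth = (current_cost - baseline_cost) / baseline_cost
    if growth >= COST_THRESHOLD:
        breaches.append("cost breach ({:.0%} growth)".format(growth))
    if schedule_slip_years >= SCHEDULE_THRESHOLD:
        breaches.append("schedule breach ({:g} years late)".format(schedule_slip_years))
    return breaches

if __name__ == "__main__":
    # NSC class: roughly $520 million of growth on an assumed ~$3.45 billion
    # baseline (about 15 percent), with the lead ship about 2 years late.
    print("NSC:", check_breach(3.45e9, 3.97e9, 2.0))
```

Reporting at the asset level, rather than only at the overall program level, is what makes a threshold check like this meaningful: a $520 million increase that is invisible against the whole Deepwater baseline is clearly a breach when measured against a single cutter class.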
The cost increase for the C-130J is projected to be between 10 and 20 percent of the original contract price and stems from issues such as changes in requirements and concurrent design and installation activities. The Coast Guard recently demonstrated this new approach of increased control over acquiring Deepwater assets by holding its own competition for the Fast Response Cutter-B (FRC-B), in lieu of obtaining the asset through the ICGS contract after determining that it could better control costs by doing so. According to the Coast Guard’s head of contracting activity, the contract award is expected in July 2008. The Coast Guard plans to hold other competitions outside of the ICGS contract for additional assets in the future. However, Coast Guard officials told us that, in the near term, they may continue to issue task orders under the ICGS contract for specific efforts, such as logistics, or for assets that are already well under way. Although the shift to individual acquisitions is intended to provide the Coast Guard with more visibility and control, key aspects still require a system-level approach. These aspects include an integrated C4ISR system, which is needed to provide critical information to field commanders and facilitate interoperability with DHS and DOD, and the numbers of each Deepwater asset the Coast Guard requires to achieve its missions. The Coast Guard is not fully positioned to manage these aspects under its new paradigm. It has not approved an acquisition strategy for C4ISR and lacks at present the ability to model the capabilities of existing and planned assets in a way that could inform the numbers of Deepwater assets it requires. The Coast Guard maintains, however, that it must proceed with its acquisitions in the absence of this information. C4ISR is a key aspect of the Coast Guard’s ability to meet its homeland security, as well as its traditional, missions. How the Coast Guard structures C4ISR—referred to as the “architecture”—is fundamental to the success of the Deepwater Program. C4ISR encompasses the connections between surface, aircraft, and shore-based assets, the means by which information is communicated through them and the way information is displayed across that architecture—referred to as a common operating picture. C4ISR is intended to provide operationally relevant information to Coast Guard field commanders to allow for the efficient and effective execution of their missions across the full range of Coast Guard operations. The Coast Guard plans to integrate the Deepwater C4ISR architecture with legacy cutters and shore facilities as well in order to establish common components across all the assets and further enhance this effort. The Coast Guard recently had an unscheduled demonstration of new capabilities made possible through C4ISR improvements. In February 2008, a Maritime Patrol Aircraft (MPA) diverted from a training flight to participate in the rescue of two downed fighter pilots. With the C4ISR capabilities on board, the aircraft coordinated search and rescue efforts with a number of civilian and military assets it identified in the area. According to Coast Guard officials, a C4ISR acquisition strategy is still in development. The Coast Guard recognizes the need to develop an architecture with common components for use on all assets. However, no agreement has been reached on whether to acquire C4ISR on an asset basis or at a system level. 
An asset-based approach for C4ISR would entail some risk, as interoperability among all Coast Guard units and DHS components, as well as the Navy and others, must be assured. Officials stated that the Coast Guard is revisiting the C4ISR incremental acquisition approach proposed by ICGS and analyzing that approach’s requirements and architecture. In the meantime, the Coast Guard is continuing to contract with ICGS for C4ISR. The first increment, now drawing to a close, is providing core capabilities for Deepwater assets, including common software. Program officials state that the second increment is planned to reduce the reliance on proprietary software and begin the migration toward government-owned software where it is practical to do so. The third increment is anticipated to be a new C4ISR solution for the Coast Guard. As the Coast Guard continues to analyze its strategy for procurement of these and other C4ISR increments, a key concern will be to negotiate the data rights it needs to maintain and upgrade the necessary software. An additional risk in transitioning from a system-of-systems-based acquisition strategy to an asset-based strategy is that the Coast Guard may lose the strategic vision needed to know how many of each Deepwater asset to procure to meet Coast Guard needs. When deciding how many of a specific vessel or aircraft to procure, it is important to consider not only the capabilities of that asset, but also how it can complement or duplicate the capabilities of the other assets with which it operates. The Coast Guard has stated that it will continue to use a systems approach in determining the overall capabilities it needs but has not yet developed the tools necessary to make this assessment. For example, the Coast Guard recently contracted for a Deepwater alternatives analysis that revisited the acquisition approach for many of the individual assets and made a number of recommendations on options for future procurements. The analysis, in general, did not make recommendations about the number of each asset to be procured. It did, however, suggest revisiting the number of NSCs if the capabilities of the OPC allowed it to fill the same missions and eliminating the vertical unmanned aerial vehicle for technical and manufacturing reasons. Coast Guard officials stated that the study was abbreviated in scope because of the limited time available. Senior Coast Guard officials, while stating that the mix of Deepwater assets identified in the alternatives analysis—such as small, medium, and large cutters—is generally reasonable, acknowledge the need to revisit the numbers of each of these assets to be procured in light of Deepwater capabilities as a whole and the move away from the ICGS solution. Officials state, however, that increased capabilities in modeling and simulations are necessary to fully inform this effort. According to officials, the Coast Guard is working to upgrade a model that plots the planned capabilities of Deepwater assets, as well as the capabilities and operations of existing assets, against the requirements for Coast Guard missions. The Coast Guard intends to use this model as a means of testing each planned asset to ensure its capabilities fill stated deficiencies in the Coast Guard’s force structure and to inform how many of a particular asset are needed given the capabilities of the rest of the force. Officials stated that they intend to use this analysis to inform their development of the Deepwater acquisition strategy. 
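The modeling effort described above amounts to comparing the combined capabilities of a planned asset mix against mission requirements to expose gaps or redundancy. The sketch below illustrates that comparison in the simplest possible terms; the mission demands, per-asset capacities, and fleet quantities are invented for illustration and do not reflect Coast Guard data or the actual model.

```python
# Illustrative force-mix coverage check: does a planned fleet's combined
# capacity meet each mission's demand? All values are hypothetical.

MISSION_DEMAND_HOURS = {          # assumed annual demand per mission
    "drug interdiction": 40000,
    "search and rescue": 25000,
    "defense readiness": 15000,
}

ASSET_CAPACITY_HOURS = {          # assumed annual hours one asset contributes per mission
    "NSC": {"drug interdiction": 2500, "defense readiness": 1500},
    "OPC": {"drug interdiction": 1500, "search and rescue": 1000},
    "FRC": {"search and rescue": 800, "drug interdiction": 400},
}

def coverage(fleet):
    """fleet maps asset name -> quantity; returns mission -> fraction of demand met."""
    supplied = {m: 0.0 for m in MISSION_DEMAND_HOURS}
    for asset, qty in fleet.items():
        for mission, hours in ASSET_CAPACITY_HOURS[asset].items():
            supplied[mission] += qty * hours
    return {m: supplied[m] / MISSION_DEMAND_HOURS[m] for m in MISSION_DEMAND_HOURS}

if __name__ == "__main__":
    planned_fleet = {"NSC": 8, "OPC": 25, "FRC": 58}   # illustrative quantities
    for mission, frac in coverage(planned_fleet).items():
        print("{}: {:.0%} of demand covered".format(mission, frac))
```

Even a simple roll-up like this makes the trade-off visible: missions covered well beyond 100 percent suggest potential duplication across assets, while those below 100 percent indicate a capability gap that the fleet mix or quantities would need to address.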
In the meantime, the Coast Guard continues to plan for asset acquisitions in numbers very similar to those determined by ICGS, such as procurement of 8 NSCs and 25 OPCs. As the Coast Guard moves the Deepwater Program from a system-of-systems acquisition to a more traditional asset-based approach, it is introducing the use of a more disciplined and formalized process under its Major Systems Acquisition Manual (MSAM). While the introduction of this process is a significant improvement over the prior acquisition process, the absence of a key milestone decision point before low-rate initial production begins and the lack of formal approvals of acquisition decisions by DHS could be problematic. The consequences of not following a more disciplined acquisition approach, especially for the establishment and demonstration of mission requirements, are now apparent for assets already in production and are likely to pose continued problems—such as increased costs—for the Coast Guard. The Coast Guard is now following the process set forth in its MSAM for all Deepwater assets. This process requires documentation and approval of program activities at key points in a program’s life cycle. The MSAM represents a disciplined management approach that begins with an identification of deficiencies in overall Coast Guard capabilities and then proceeds through a series of structured phases and decision points to identify requirements for performance, develop and select candidate systems that match these requirements, demonstrate the feasibility of selected systems, and produce a functional capability. At each decision point, referred to as a “milestone,” entities across the Coast Guard, such as those responsible for oversight of the budget process or command and control, are to be consulted. Designated officials at high levels—including the Vice Commandant of the Coast Guard—then formally approve the program to proceed to the next phase. Each milestone requires documentation that captures key information needed for decision making. For example, when the Coast Guard made its milestone decision under the MSAM process to proceed with the OPC from the initiation phase into development, the project office presented documentation that described the capabilities the ship is expected to provide, a draft concept for operations, and an initial assessment of cost and schedule. Figure 3 presents the key phases and milestones of the MSAM process and the current status of Deepwater assets within the process. The MSAM process provides a number of benefits that have the potential to improve acquisition outcomes. Primarily, it requires event-driven decision-making by high-ranking acquisition executives at a number of key points in an asset’s life cycle. The process also requires documentation to provide the information and criteria necessary for these decisions. In addition, as the assets proceed through each phase of the process and the requirements and capabilities of the assets become more defined, these assets’ ability to fill deficiencies identified by the Coast Guard must be established. Previously, the Coast Guard authorized the Deepwater Program to deviate from its major systems acquisition process, stating that the process was focused on acquiring discrete assets and contained requirements and documentation that might be inappropriate for the Deepwater system-of-systems approach. 
Instead, Deepwater Program reviews were required on a schedule-driven basis—planned quarterly or annually—to report the status and performance of the contractor’s efforts. Key decision points focused primarily on the Deepwater Program as a whole and were held only occasionally. Coast Guard officials told us that little, if any, formal documentation of key decisions was maintained. GAO’s work on best practices for major acquisitions has demonstrated that a knowledge-based approach to decision making, where specific knowledge is gathered and measured against standards at key points in the acquisition process to inform decisions about the path forward, can significantly improve program outcomes. While the MSAM process contains many characteristics of a knowledge-based approach, there are key differences that could affect acquisition outcomes. For example, the Milestone 2 decision to approve low-rate initial production precedes the majority of the design activities in the capability development and demonstration phase. By following such a process, the Coast Guard may decide to enter production before a design is proven, a decision that could result in increased costs as design and production activities are conducted concurrently. In a previous report, we reviewed DHS’ acquisition process, with which the Coast Guard’s MSAM process is aligned and which it is intended to complement, and found a similar weakness. Recognition and correction of this weakness in the MSAM approach is particularly important as key assets within Deepwater, most notably the FRC, approach a low-rate production decision. The MSAM requires the Coast Guard to obtain approval from DHS on all major program decisions beginning with the start of an acquisition program. This requirement would apply to Deepwater, as it has been designated a DHS major investment program. However, DHS approval of Deepwater acquisition decisions as part of its investment review process is not technically necessary because the department deferred decisions on specific assets to the Coast Guard in 2003. The department did require notification of changes to the Deepwater Program that could result in significant changes to cost, schedule, and performance, but this requirement was at the overall systems level. In practice, the Coast Guard has increased communication and coordination with DHS through goodwill and informal procedures such as personal working relationships. While increased communication between the Coast Guard and DHS is to be applauded, without a formal process in place, DHS could lose the ability to make strategic decisions—such as how and whether to fund certain projects—across its components if informal procedures and relationships should change. Coast Guard and DHS officials told us that the processes and procedures for coordinating acquisitions with DHS’ Investment Review Board, which is tasked with reviewing major acquisition programs, are currently undergoing revision, and changes to the process are expected near the end of fiscal year 2008. The Coast Guard is facing the consequences of its decision not to follow the MSAM process as it attempts to better define requirements for individual assets already being procured, such as the NSC, Long-Range Interceptor (LRI), and the MPA, and to ensure that desired capabilities are met within cost and schedule constraints. Under the MSAM, the requirements generation process takes broad mission needs and translates them to operational capability requirements and then to asset performance specifications. 
Figure 4 depicts this traceability from mission needs to performance specifications. For example, under the MSAM process, before the design of an asset is selected, representatives of the operational forces within the Coast Guard are required to generate the Operational Requirements Document that determines the capabilities or characteristics considered essential to achieve their mission. Operational requirements described in this document—such as operating environment, functions to be performed, and the need for interoperability with other assets—ultimately drive the performance and capability of an asset and should be traceable throughout development, design, and testing. They should also include basic asset requirements such as speed, maneuvering, and range to serve as threshold and objective values for future trade-off analyses. Under the ICGS-led system-of-systems acquisition approach, the Coast Guard developed high-level system requirements for capabilities, such as the ability to interdict illegal migrants. ICGS then developed an integrated force mix of specific aircraft, vessels, and communications systems to meet those needs. But because the disciplined MSAM approach was not followed, the Coast Guard could not trace the ICGS-proposed asset performance to actual mission needs. Program and project managers are “backfilling” the necessary requirements documentation in programs that are already well underway, with the intent of providing the traceability that was previously lacking. For example, in 2006, the Coast Guard acknowledged that the lack of a traditional requirements document for the NSC, which was then under construction, would inhibit the Coast Guard’s ability to evaluate the vessel’s suitability and effectiveness for Coast Guard missions. To resolve this problem, the Coast Guard developed a document that lists all the operational requirements for the NSC, as derived from identified mission needs, to guide operational testing. According to Coast Guard officials, operational testing based on these requirements will commence when the third NSC is complete. Under the MSAM, operational requirements would have been established prior to design and production to serve as the basis to link the asset’s performance to its ability to fill a mission need. Failure to follow a disciplined approach in requirements generation is also apparent with problems related to the LRI, a small boat intended to be launched from larger cutters such as the NSC. The Coast Guard accepted the ICGS-proposed performance specifications for the LRI as part of the overall Deepwater specification, but the specifications were not tied to Coast Guard mission requirements. Thus, the Coast Guard had no assurance that the boat it was buying was what it needed to accomplish its missions. As a result of Coast Guard-identified technical deficiencies in the performance specifications, design changes were required after the LRI task order was issued. For example, a number of C4ISR specifications had to be added; the initial specification for the fuel tank size was deleted, as its capacity would not enable the boat’s 400 nautical mile range to be met; and a more powerful electrical system was needed. These and other changes, which were required for the boat to accomplish what ICGS had proposed, drove the price for design and production from $744,621 to almost $3 million. 
The Coast Guard is beginning to define needed capabilities for the LRI under the MSAM process, with an eye towards developing the service’s own requirements for the asset. For example, Coast Guard officials told us that ICGS’ proposed top speed of 45 knots is unrealistic and would under no circumstances be needed to accomplish Coast Guard missions. The LRI has been equipped with a C4ISR suite that officials believe to be much more extensive than they need. They are also concerned that the boat is too heavy, at 22,000 pounds. The ramifications of accepting asset performance specifications not tied to Coast Guard mission requirements also became apparent during recent testing of the system that launches and recovers small boats, such as the LRI, from the NSC’s stern. Design changes to the launch system had to be made because it was found to be inadequate to handle the heavy weight of the LRI. The Coast Guard will pay for this change because the NSC is a cost-plus-incentive-fee contract. In addition, Coast Guard officials told us that the LRI’s inboard spray rail, which had initially been installed to enable the boat to reach 45 knots, had to be removed to allow the boat to effectively launch from the NSC, a cost ICGS will bear under that fixed-price contract. Coast Guard officials stated that the current LRI acquisition will be terminated with delivery of the first boat (now being considered a prototype with the potential to be used to test launch and recovery mechanisms on future NSCs). The Coast Guard’s procurement of MPAs is another example of the consequences of not following a disciplined acquisition approach, as key program documents that establish the Coast Guard’s requirements for this asset and a plan for operational testing to those requirements have not been finalized. The testing is expected to occur between June 2008 and December 2008. The Coast Guard has contracted with ICGS for eight MPAs and accepted delivery of three aircraft between December 2006 and June 2007. In March 2008, it also accepted delivery of three mission system pallets, which provide the aircraft with C4ISR capabilities. The Coast Guard anticipates putting another four MPAs on contract with ICGS in fiscal year 2008 and has requested funding for the 13th and 14th aircraft. The proper functioning of an acquisition organization and the viability of the decisions made through its acquisition process are only as good as the information it receives. The Coast Guard is developing two new means of communicating information related to the Deepwater Program. Quarterly project reports will consolidate and standardize how it communicates information to decision makers, and the probability of project success tool is intended to help officials discern and correct problems before they have cost and schedule impacts. However, Coast Guard officials have concerns about the reliability of the data they receive from the contractor as they lack the visibility required to determine the causes of cost and schedule variances. In addition, Coast Guard officials have stated that Northrop Grumman’s earned value system, which provides the necessary cost and schedule information, may need to be re-certified for compliance with government standards. While the Coast Guard is taking steps to improve its visibility into and confidence in data received from the contractor, it plans to proceed with issuance of a task order for long lead materials for the fourth NSC. 
The Coast Guard recently developed quarterly project reports, a compilation of cost and schedule information created by the project managers that summarizes the status of each acquisition for reporting through the Coast Guard as well as to DHS and the Congress. The Coast Guard developed these reports to standardize and consolidate asset reporting across all acquisitions, including those outside of Deepwater. Currently, the quarterly project reports are being developed for 14 separate assets. The reports present general information about the project, such as contract value and type, as well as more specific, timely information such as project accomplishments and risks. Project risks are rank-ordered by probability of occurrence and severity of impact, and include such things as technical challenges and production issues. The Coast Guard has also begun to analyze program information using the “probability of project success” tool. This tool was developed by the Army and the Air Force to evaluate projects on factors other than basic cost, schedule, and performance data and is being considered by DHS for application across its acquisitions. Currently, the tool is being applied to the same 14 projects covered under the quarterly project reports. Coast Guard acquisition officials told us they will use this tool to grade each asset on 19 different elements in 5 categories, including project resources and project execution, to assess the risk of assets failing to meet their goals. Figure 5 lists these categories and elements. The probability of project success tool is developed by acquisition support staff separate from the program and project offices. Of the 19 different elements, only one, health of the contractor, is graded by the project manager. The results of this tool are not seen as an assessment of the project manager, but of the support that the acquisition directorate has given them. Officials stated that the tool allows acquisition executives to identify projects that require assistance before they experience cost breaches or other problems and also allows for a comparison of risks and challenges across all Coast Guard acquisition projects to identify trends. The production and analysis of earned value management data—the cost and schedule data reported by the contractor and used to evaluate progress toward program goals—are critical to informing both the quarterly project reports and the probability of project success tool. However, Coast Guard officials are concerned about the utility of the earned value data they receive because, under the terms of the ICGS contract, they lack visibility at the levels required to inform decision makers and manage projects. In addition, officials believe that Northrop Grumman’s earned value system may require re-certification to meet government standards to ensure the reliability of the data. Receiving useful and reliable earned value data could be particularly important for the Deepwater Program, as these data are also used to inform decisions on future projects, such as the pending orders to Northrop Grumman for the materials and production of the fourth NSC. Coast Guard officials expressed concerns about the level of detail of the earned value data provided by ICGS. A Coast Guard official responsible for analyzing the contractor’s reported earned value data for the NSC stated that the data do not provide sufficient visibility for decision making at the asset level. 
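The scoring logic of a tool like the one described above can be illustrated as a simple weighted roll-up: each element receives a grade, element grades aggregate into category scores, and category scores combine into an overall figure. The category names, weights, element counts, and grades below are hypothetical placeholders for illustration; they are not the actual Army/Air Force element definitions or weights.

```python
# Illustrative weighted roll-up in the spirit of a probability-of-project-success
# tool. Category names, weights, and grades are hypothetical placeholders.

CATEGORY_WEIGHTS = {
    "requirements": 0.20,
    "resources": 0.20,
    "planning/execution": 0.30,
    "contractor health": 0.15,
    "fit in strategic vision": 0.15,
}

def project_score(grades):
    """grades: category -> list of element scores on a 0-100 scale.
    Returns (overall weighted score, per-category averages)."""
    category_scores = {c: sum(elems) / len(elems) for c, elems in grades.items()}
    overall = sum(CATEGORY_WEIGHTS[c] * s for c, s in category_scores.items())
    return overall, category_scores

if __name__ == "__main__":
    sample = {
        "requirements": [85, 70],
        "resources": [60, 55, 75],
        "planning/execution": [80, 90, 70, 65],
        "contractor health": [50],   # the one element graded by the project manager
        "fit in strategic vision": [90, 85],
    }
    overall, by_category = project_score(sample)
    print("overall: {:.1f}".format(overall))
    print(by_category)
```

Scoring each project the same way is what enables the comparison across acquisitions that officials describe: a low overall score, or a weak category score, flags a project for executive attention before a cost or schedule breach occurs.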
The concerns stem in part from the system-of-systems contract structure with ICGS and how the terms for reporting earned value data to the government were negotiated. Earned value data are reported at different levels of activity, descending in order from the general to the specific, as determined in advance by the government. The levels of activity required for earned value reporting are very important and can determine the usefulness of the data received. Under the ICGS contract, the earned value data are reported at seven levels, beginning with the Deepwater system-of-systems level—”ICGS”—and stopping at the major component level—such as propulsion and armaments. Coast Guard officials stated that previously data on the NSC were reported to the fifth level, which presents data only on the progress of production of the cutter as a whole. A Coast Guard official stated that in order to gain adequate visibility into reported cost variances, a deeper level of reporting is necessary. While the Coast Guard has negotiated a more detailed level of earned value reporting on the first three NSCs to receive data at the major component level, according to an official, the Coast Guard may seek even more detailed levels of cost data in upcoming negotiations for the fourth NSC. In addition to concerns about visibility into contractor earned value data, Coast Guard officials have concerns about the reliability of the underlying systems the contractors use to collect these data. An important consideration in relying on contractor-provided earned value management data is ensuring that the contractor’s process for generating the data is compliant with government standards. Contractors are expected to have earned value management plans that document the methodology, products, and tools they have in place to track earned value. Independent third parties, such as the Defense Contract Management Agency (DCMA) or the Defense Contract Audit Agency, ensure the contractor’s initial compliance with government standards and perform surveillance reviews to ensure that the contractor remains compliant. While Lockheed Martin’s earned value management system has been certified as compliant by DCMA, Coast Guard officials have stated that Northrop Grumman—the first-tier subcontractor responsible for work on the NSC—may require re-certification. Previously, Northrop Grumman’s earned value management system had been certified by the Navy, but this certification is no longer considered acceptable by the Coast Guard. According to officials, the Coast Guard is working with DCMA and the Navy to review and, if necessary, re-certify Northrop Grumman’s earned value system. In the meantime, the Coast Guard intends to improve its insight into how the contractor produces and reports earned value data by executing a memorandum of agreement with the DCMA for on-site surveillance at the shipyard. Such on-site presence is critical to increase the likelihood that the Coast Guard receives accurate earned value data. These concerns about visibility into, and reliability of, earned value data affect not only the information the Coast Guard needs for decision making on current projects, but also the information necessary for decisions on future projects, such as the production of the fourth NSC. As the Coast Guard compiles earned value information on the ships being constructed by Northrop Grumman, it can use this information in the estimates of future costs used to establish target prices for additional work to be performed. 
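Earned value analysis of the kind described above rests on three standard quantities reported at each level of the work breakdown structure: planned value (the budgeted cost of work scheduled), earned value (the budgeted cost of work performed), and actual cost. The sketch below computes the usual variances and performance indices from those quantities; the dollar figures and the component names are hypothetical and are not NSC or ICGS-reported data.

```python
# Standard earned value calculations. Input figures are hypothetical,
# not actual NSC or ICGS-reported data.

def earned_value_metrics(planned_value, earned_value, actual_cost):
    """Return cost/schedule variances and performance indices for one WBS element."""
    return {
        "cost variance": earned_value - actual_cost,        # negative = over cost
        "schedule variance": earned_value - planned_value,  # negative = behind schedule
        "CPI": earned_value / actual_cost,                  # < 1.0 = over cost
        "SPI": earned_value / planned_value,                # < 1.0 = behind schedule
    }

if __name__ == "__main__":
    # Hypothetical major-component-level data ($ millions): (PV, EV, AC).
    # Propulsion and armament are examples of the component categories noted above.
    wbs_elements = {
        "propulsion": (120.0, 105.0, 118.0),
        "armament": (45.0, 44.0, 43.0),
    }
    for name, (pv, ev, ac) in wbs_elements.items():
        print(name, earned_value_metrics(pv, ev, ac))
```

The level at which these inputs are reported drives their usefulness: variances rolled up to the whole cutter show only that costs are growing, while component-level reporting points to where the growth originates, which is why the Coast Guard is seeking deeper reporting levels for the fourth NSC.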
Because the Coast Guard lacks confidence in how Northrop Grumman is representing its cost and schedule performance on current projects, it may be in the position of paying the contractor for future projects, such as the long lead material and production of the fourth NSC, without the understanding necessary to evaluate proposed prices. In response to significant problems in achieving its intended outcomes under Deepwater, Coast Guard leadership has made a major change in course in its management and oversight of this program. Even with this change, the Coast Guard continues to face numerous risks of varying magnitude in moving forward with an acquisition program of this size. While the initiatives the Coast Guard has underway have already begun to have a positive impact on reducing these risks, the extent and durability of their impact depends on positive decisions that continue to increase and improve government management and oversight. The current reliance on informal procedures to keep DHS informed of Deepwater developments is not appropriate for an acquisition of this magnitude. The Deepwater Program will continue for some time to come, and the full burden of transcending the inevitable challenges should not rest solely with the initiatives of the current Coast Guard leadership. The Coast Guard’s major systems acquisition process requires DHS approval of milestone decisions; however, the 2003 DHS delegation to the Coast Guard of such approval means that DHS does not have formal approval authority, and it could lack the information needed to strategically allocate funding by balancing requirements and needed capabilities across departmental components. In addition, the Coast Guard’s acquisition process calls for a decision to authorize initial production before knowledge is gathered about the stability of an asset’s design and production processes, which is contrary to best practices and could result in cost increases and schedule delays because of redesign. And because the Coast Guard’s knowledge of the reasonableness of contractors’ proposed cost and schedule targets for Deepwater assets relies in part on visibility into and confidence in the contractors’ earned value management data, the Coast Guard may lack a solid basis to evaluate future proposals by Northrop Grumman until known problems with its data are resolved. To help ensure that the initiatives to improve Deepwater management and oversight continue as intended and to facilitate decision-making across the department, we recommend that the Secretary of Homeland Security direct the Under Secretary for Management to rescind the delegation of Deepwater acquisition decision authority. We also recommend that the Commandant of the Coast Guard take the following two actions: To improve knowledge-based decision-making for its acquisitions, revise the procedures in the Major Systems Acquisition Manual related to the authorization of low-rate initial production by requiring a formal design review to ensure that the design is stable as well as a review before authorizing initial production. To improve program management of surface assets contracted to Northrop Grumman Ship Systems, develop an approach to increase visibility into that contractor’s earned value management data reporting before entering into any further contractual relationships, such as for long lead material for and production of the fourth NSC. In written comments on a draft of this report, the Department of Homeland Security concurred with our findings. 
The department stated that it would take our recommendation on rescinding the delegation of Deepwater acquisition decision authority under advisement, but neither concurred nor disagreed with the recommendation. The Coast Guard concurred with our recommendation on requiring a formal design review before low-rate initial production, and plans to incorporate such a review in its next revision of the MSAM process. In addition, it partially concurred with our recommendation to improve program management of surface assets by developing an approach to increase visibility into Northrop Grumman’s earned value management data. The Coast Guard stated that it agrees with the recommendation and is in the process of funding DCMA for surveillance of Northrop’s earned value system and increasing the level of visibility into Northrop’s data starting with the fourth NSC production contract. However, the Coast Guard stated that earned value data would provide limited utility for the fixed-price long lead materials contract for this acquisition and that obtaining the data would pose a significant cost and schedule impact. To determine a fair and reasonable price for the long lead and production contracts, the Coast Guard plans to obtain and review Northrop’s certified cost and pricing data. It appears to us that the Coast Guard has developed an approach for increasing visibility into the earned value management data for future contracts with Northrop Grumman. We believe this approach, if implemented as planned, will address our recommendation. The comments from the Department of Homeland Security are included in their entirety in appendix III. Technical comments were also provided and incorporated into the report as appropriate. We are sending copies of this report to interested congressional committees, the Secretary of Homeland Security, and the Commandant of the Coast Guard. We will provide copies to others on request. This report will also be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report or need additional information, please contact me at (202) 512-4841 or huttonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff acknowledgments are provided in appendix IV. Overall, in conducting this review, we relied in part on the information and analysis in our March 2008 report, Status of Selected Aspects of the Coast Guard’s Deepwater Program, and our testimony, Coast Guard: Deepwater Program Management Initiatives and Key Homeland Security Missions. We also reviewed the Coast Guard’s 2007 Deepwater expenditure plan and fiscal year 2009 budget request. Additional scope and methodology information on each objective of this report follows. To assess the Coast Guard’s efforts to increase accountability and program management through its reorganized acquisition function, we reviewed the Coast Guard’s July 2007 Blueprint for Acquisition Reform, organizational structures before and after the July 2007 reorganization, 2004 and 2008 Deepwater Program Management Plans, and project manager and integrated product team charters. We also interviewed senior acquisition directorate officials, program and project managers, and Integrated Coast Guard Systems (ICGS) representatives to discuss the Coast Guard’s increased role in program management and oversight and changes in roles and responsibilities of key positions. 
We held discussions with officials from the Coast Guard’s engineering and C4ISR technical authorities and the American Bureau of Shipping, and reviewed lists of certifications for the National Security Cutter (NSC). To assess Coast Guard actions to improve the acquisition workforce, we reviewed additional documentation such as the acquisition human capital strategic plan, documentation of workforce initiatives, and organization charts for aviation, surface, and C4ISR components showing government, contractor, and vacant positions. We supplemented the documentation review with interviews of acquisition directorate officials, including contracting and Office of Acquisition Workforce Management officials and program and project managers. We discussed workforce initiatives, challenges and obstacles to building an acquisition workforce, recruiting, difficulty in filling key positions, use of support contractors, inherently governmental positions, and tools for projecting acquisition workforce needs. We spoke with representatives of a support contractor developing one of the workforce planning tools. We also relied on our past work identifying management and workforce problems within the Deepwater Program and the Department of Defense (DOD). To evaluate the Coast Guard’s transition to an asset-based paradigm for Deepwater, including how system-level aspects such as C4ISR are being managed, we analyzed a 2007 alternatives analysis prepared for the Coast Guard. We also discussed the planned C4ISR procurement strategy with the acquisition directorate C4ISR program manager and the Coast Guard Chief, Office of Cyber Security and Telecommunications. We reviewed the overall Deepwater and the NSC acquisition program baselines. Other acquisition program baselines were in draft form and not made available to us. We reviewed reports on NSC and C-130J missionization cost breaches to understand the change in how cost breaches are reported to DHS under the new approach. We analyzed the Long-Range Interceptor (LRI) task order and associated modifications and interviewed senior acquisition directorate officials, the surface asset program manager, and the LRI project manager about problems with the LRI’s design and its ability to interface with the NSC’s launch and recovery system during testing. We reviewed documentation of the Coast Guard’s acceptance of the first three Maritime Patrol Aircraft and associated mission system pallets and interviewed the aviation program manager. To assess the Coast Guard’s implementation of a disciplined, project management process for Deepwater acquisitions, we reviewed the Major Systems Acquisition Manual and compared its processes with the knowledge-based, best practices processes we have identified through our prior work on large acquisitions at DOD. We reviewed the Coast Guard’s April 2000 memorandum waiving the acquisition manual requirements for the Deepwater Program to understand the rationale for the waiver, as well as the 2003 DHS memorandum giving the Coast Guard acquisition decision authority for Deepwater assets. We interviewed acquisition directorate officials and program and project managers to discuss efforts to transition the acquisition of Deepwater assets to the MSAM process, particularly for assets already under way. We also spoke with DHS officials about the DHS major acquisition review process and reporting requirements. 
We assessed Coast Guard initiatives to improve the quality of program management information by analyzing Deepwater asset quarterly project reports for the fourth quarter, fiscal year 2007, and probability of project success information. We also analyzed selected earned value management cost performance reports for the NSC and reviewed earned value management system compliance letters for Northrop Grumman and Lockheed Martin, the Coast Guard’s standard operating procedure for earned value management systems, the Deepwater work breakdown structure dictionaries for Northrop Grumman and Lockheed Martin, and ICGS’ earned value management plan. We discussed the information contained within this documentation with acquisition directorate officials, the NSC business finance manager, Coast Guard support contractors responsible for analyzing the earned value management data, and ICGS and Northrop Grumman representatives. We conducted this performance audit from October 2007 to June 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, Michele Mackin, Assistant Director; J. Kristopher Keener; Martin G. Campbell; Maura Hardy; Angie Nichols-Friedman; Scott Purdy; Kelly Richburg; Raffaele Roffo; Sylvia Schatz; and Tatiana Winger made key contributions to this report. Status of Selected Aspects of the Coast Guard’s Deepwater Program. GAO-08-270R (Washington, D.C.: Mar. 11, 2008). Coast Guard: Deepwater Program Management Initiatives and Key Homeland Security Missions. GAO-08-531T (Washington, D.C.: Mar. 5, 2008). Coast Guard: Challenges Affecting Deepwater Asset Deployment and Management and Efforts to Address Them. GAO-07-874 (Washington, D.C.: June 18, 2007). Coast Guard: Status of Efforts to Improve Deepwater Program Management and Address Operational Challenges. GAO-07-575T (Washington, D.C.: Mar. 8, 2007). Coast Guard: Status of Deepwater Fast Response Cutter Design Efforts. GAO-06-764 (Washington, D.C.: June 23, 2006). Coast Guard: Changes to Deepwater Plan Appear Sound, and Program Management Has Improved, but Continued Monitoring is Warranted. GAO-06-546 (Washington, D.C.: Apr. 28, 2006). Coast Guard: Progress Being Made on Addressing Deepwater Legacy Asset Condition Issues and Program Management, but Acquisition Challenges Remain. GAO-05-757 (Washington, D.C.: July 22, 2005). Coast Guard: Preliminary Observations on the Condition of Deepwater Legacy Assets and Acquisition Management Challenges. GAO-05-651T (Washington, D.C.: June 21, 2005). Coast Guard: Deepwater Program Acquisition Schedule Update Needed. GAO-04-695 (Washington, D.C.: June 14, 2004). Contract Management: Coast Guard’s Deepwater Program Needs Increased Attention to Management and Contractor Oversight. GAO-04-380 (Washington, D.C.: Mar. 9, 2004). Coast Guard: Actions Needed to Mitigate Deepwater Project Risks. GAO-01-659T (Washington, D.C.: May 3, 2001). | The Coast Guard's Deepwater Program, under the Department of Homeland Security (DHS), has experienced serious performance and management problems. Deepwater is intended to replace or modernize Coast Guard vessels, aircraft, and the communications and electronic systems that link them together. 
As of fiscal year 2008, over $4 billion has been appropriated for Deepwater. The Coast Guard awarded a contract in June 2002 to a lead system integrator, Integrated Coast Guard Systems (ICGS), to execute the program using a system-of-systems approach. In response to a Senate report accompanying a Department of Homeland Security appropriations bill, 2008, this GAO report assesses whether the changes the Coast Guard is making to its management and acquisition approach to Deepwater will put it in a position to realize better outcomes. GAO reviewed key program documents and interviewed Coast Guard and contractor personnel. Coast Guard leadership is making positive changes to its management and acquisition approach to the Deepwater Program that should put it in a position to realize better outcomes, although challenges to its efforts remain. The Coast Guard has increased accountability by bringing Deepwater under a restructured acquisition function and investing its government project managers with management and oversight responsibilities formerly held by ICGS. Coast Guard project managers and technical experts--as opposed to contractor representatives--now hold the greater balance of management responsibility and accountability for program outcomes. However, like other federal agencies, the Coast Guard has faced obstacles in building an adequate government workforce. It has various initiatives under way to develop and retain a workforce capable of managing this complex acquisition program, but faced with an almost 20 percent vacancy rate, it is relying on support contractors, such as cost estimators, in key positions. The Coast Guard's decision to manage Deepwater under an asset-based approach, rather than as an overall system-of-systems, has resulted in increased government control and visibility over acquisitions. Agency officials have begun to hold competitions for Deepwater assets outside of the ICGS contract. While the asset-based approach is beneficial, certain cross-cutting aspects of Deepwater, such as the program's communications and intelligence components and the numbers of each asset needed, still require a systems-level approach. The Coast Guard recognizes this but is not yet fully positioned to manage these aspects. The Coast Guard has begun to follow the disciplined, project management framework of its Major Systems Acquisition Manual (MSAM), which requires documentation and high-level executive approval of decisions at key points in a program's life cycle. But the consequences of not following this approach in the past are now evident, as Deepwater assets have been delivered without a determination of whether their planned capabilities would meet mission needs. The MSAM process currently allows limited initial production to proceed before the majority of design activities have been completed. In addition, a disconnect between MSAM requirements and current practice exists because DHS had earlier delegated to the Coast Guard all Deepwater acquisition decisions, resulting in little departmental oversight. Coast Guard project managers and decision makers are now receiving information intended to help manage project outcomes, but some key information is unreliable. The earned value management data reported by ICGS lacks sufficient transparency to be useful to Coast Guard program managers, and subcontractor Northrop Grumman's system for producing the data may need to be re-certified to ensure its reliability. 
Officials state that they are addressing these issues through joint efforts with the Navy and the Defense Contract Management Agency.
In 1998, Congress passed WIA—partly in response to concerns about inefficiencies in federal employment and training programs. WIA repealed the Job Training Partnership Act (JTPA), effective July 1, 2000, and replaced JTPA programs for economically disadvantaged adults and youths and dislocated workers with three new programs—WIA Adult, WIA Dislocated Worker, and WIA Youth. To coordinate service delivery for employment and training programs, WIA established one-stop centers in all states and mandated that numerous programs provide their services through the centers. Unlike the JTPA adult program, WIA imposes no income eligibility requirements for adult applicants receiving any of its "core" services, such as job search assistance and employment counseling and assessment. Any person visiting a one-stop center may look for a job, receive career development services, and gain access to a range of vocational education programs. While WIA consolidated the JTPA youth programs and strengthened the service delivery of key workforce development programs, most employment and training programs remain separately funded and continue to be operated by various agencies.
We have previously issued reports on overlap in multiple employment and training programs. During the 1990s, we issued a series of reports that documented program overlap among federally funded employment and training programs and identified areas where inefficiencies might result. We reported that overlap among federally funded employment and training programs raised questions about the efficient and effective use of resources. We also reported that program overlap might hinder people from seeking assistance and frustrate employers and program administrators. In 2000 and 2003, we reviewed the workforce development system and identified federally funded employment and training programs for which a key program goal was providing employment and training assistance. Our 2003 report identified 44 programs administered by nine federal agencies that provided a range of employment and training services. While many of the programs were the same as those included in the 2000 report, 10 programs were newly identified and 6 previously identified programs had been discontinued since 2000.
The number of employment and training programs and their funding have increased since we last reported on them in 2003. For fiscal year 2009, we identified 47 employment and training programs administered across nine agencies (see figure 1). Together, these programs spent approximately $18 billion on employment and training services in fiscal year 2009, according to our survey data. This is an increase of 3 programs and about $5 billion from our 2003 report. Adjusting for inflation, the amount of the increase is about $2 billion. Based on survey responses, we estimate that this increase is likely due to temporary funding from the Recovery Act for 14 of the 47 programs we identified (see figure 2). In addition to increasing funding for existing programs, the Recovery Act also created 3 new programs and modified several existing programs' target population groups and eligibility requirements, according to agency officials. For example, the Recovery Act modified the Trade Adjustment Assistance program by expanding group eligibility to include certain dislocated service workers who were affected by foreign trade.
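As a rough cross-check of the inflation adjustment described above, the sketch below works through the arithmetic. The earlier total is simply the fiscal year 2009 total minus the reported nominal increase, and the cumulative price-growth rate is an assumed, illustrative value rather than a figure from our survey or from federal price indexes.

```python
# Back-of-the-envelope check of the inflation-adjusted spending increase.
# The price-growth rate below is an illustrative assumption, not a reported figure.

fy2009_spending = 18.0                     # billions of dollars (reported total)
nominal_increase = 5.0                     # billions of dollars (reported increase)
earlier_spending = fy2009_spending - nominal_increase   # about 13.0, implied

assumed_price_growth = 0.20                # assumed cumulative inflation since the earlier report

earlier_in_2009_dollars = earlier_spending * (1 + assumed_price_growth)
real_increase = fy2009_spending - earlier_in_2009_dollars

print(f"Approximate real increase: ${real_increase:.1f} billion")
# With this assumed deflator, the real increase works out to roughly $2 billion
# to $3 billion, consistent with the approximately $2 billion cited above.
```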
Officials from most programs reported using almost all their funds for employment and training, although some programs with broader goals, including multipurpose block grants, used lesser amounts. Twenty-seven programs estimated that they used 90 percent or more of their fiscal year 2009 appropriation on employment and training services. Fifteen of these programs reported that they used 100 percent of their funds on employment and training services. Some programs that used less than 90 percent of their fiscal year 2009 appropriations on employment and training services may have broader goals (see table 1). For example, across all programs, the TANF program used the lowest percentage of its appropriations on employment and training activities, about 8 percent. This is not surprising, given that employment is only one aspect of the TANF program, which has several broad social service goals, including providing cash assistance to low-income families with children. However, the amount TANF spends on employment and training activities is among the largest of the programs we surveyed. In addition, Education officials stated that their career and technical education programs emphasize education, as opposed to employment and training.
Our survey data showed that 7 programs accounted for about three-fourths of the $18 billion spent on employment and training services in fiscal year 2009 (see figure 3). The largest of the 7, Rehabilitation Services—Vocational Rehabilitation Grants to States, operated by Education, used about $3 billion in fiscal year 2009 to fund employment and training services for individuals with disabilities. The other 6 programs in this group are administered by Labor and HHS; as shown in figure 3, they include WIA Youth Activities (Labor), TANF (HHS), Job Corps (Labor), the WIA Adult Program (Labor), and Employment Service/Wagner-Peyser Funded Activities (Labor). The remaining programs accounted for the other one-fourth of the amount spent on employment and training in fiscal year 2009.
Our survey data showed that most participants received employment and training services through one of two programs: Employment Service/Wagner-Peyser Funded Activities and the WIA Adult Program. These programs accounted for about 77 percent of the total number of participants served across all programs. Each of these programs reported serving more than 1 million individuals. In contrast to these larger programs, 7 programs each reported serving fewer than 5,000 individuals. See appendix IV for a detailed list of the number of individuals served by each employment and training program.
Almost all programs tracked multiple outcome measures related to employment and training, and many programs tracked similar measures. Forty-one of the 47 programs tracked at least three outcome measures in fiscal year 2009, according to officials. The most frequently tracked outcome measure was "entered employment"—the number of program participants who found jobs (see table 2). Many programs also tracked "employment retention" and "wage gain or change." These are the types of measures developed under the Office of Management and Budget's (OMB) common measures initiative, which sought to unify definitions for performance across programs with similar goals. Three programs did not track any outcome measures at the federal level in fiscal year 2009. For a detailed list of outcome measures tracked by federal employment and training programs, see appendix V.
In addition, officials from 4 of the 14 programs that received Recovery Act funding in fiscal year 2009 reported that the Act modified the outcome measures tracked by their programs. However, these modifications generally applied only to the outcomes for participants in activities funded by the Act. For example, a Job Corps official noted that the program is required to track the number of "green graduates" who complete Recovery Act-funded "green training" for jobs in industries such as renewable resources and green construction.
Little is known about the effectiveness of the employment and training programs we identified: only 5 programs reported having had an impact study that demonstrates whether outcomes can be attributed to the program, and about half of all the programs have not had a performance review of any kind since 2004. Impact studies, which many researchers consider to be the best method for determining the extent to which a program is causing participant outcomes, can be difficult and expensive to conduct, as they take steps to examine what would have happened in the absence of a program to isolate its impact from other factors. Based on our survey of agency officials, we determined that only 5 of the 47 programs have had impact studies that assess whether the program is responsible for improved employment outcomes (see appendix VI). The five impact studies generally found that the effects of participation were not consistent across programs; only some programs demonstrated positive impacts, and those impacts tended to be small, inconclusive, or limited to the short term. For example, while we have previously reported that a considerable body of research has suggested that welfare-to-work programs can effectively increase employment entry and reduce welfare receipt, a more recent study cited by a TANF program official found services targeted at TANF recipients to be largely ineffective in producing positive employment retention and advancement outcomes; where impacts were found, they tended to be substantively small, with many families remaining in poverty. A study of the WIA Adult program found the program to have shown positive impacts up to 4 years after participant entry, but noted that the magnitude of this effect could have been due to the selection of applicants with greater income prior to participation and better job prospects. Officials from the remaining 42 programs cited other types of studies or no studies at all. Officials from 19 of these programs reported that, since 2004, some other type of review or study had been conducted to evaluate their program's performance with respect to employment and training activities. These evaluations included assessments by OMB's Program Assessment Rating Tool (PART) and nonimpact studies. Officials from 23 of the 47 programs did not identify a study of any kind that assessed program performance since 2004. However, agencies may have impact studies currently under way. For example, Labor is conducting an impact evaluation of WIA services, to be completed in 2015.
All but 3 of the programs we surveyed overlap with at least 1 other program, in that they provide at least one similar service to a similar population. Some of these overlapping programs serve multiple population groups, while others target specific populations. For the population groups served by these programs and the services they provide, see appendixes VII, VIII, and IX. In addition, some overlapping programs require participants to be economically disadvantaged.
Even when programs overlap, the services they provide and the populations they serve may differ in meaningful ways. All 10 programs that serve multiple groups overlap with another program. For example, a variety of groups—including both employed and unemployed individuals—can receive employment counseling and assessment, job readiness skills training, and occupational or vocational training from three different programs: the Career and Technical Education—Basic Grants to States program, the Community-Based Job Training Grants program, and the H-1B Job Training Grants program. In addition, 3 of the programs that serve multiple groups require participants to be economically disadvantaged. Thirty-four of the 37 programs that serve a primary target population overlap with another program. In addition, nine of these require participants to be economically disadvantaged. The target populations being served by the most programs are Native Americans, veterans, and youth. For example, all 8 programs that target Native Americans provide seven similar types of employment and training services (see figure 4). According to agency officials, 4 of these programs for Native Americans spent a total of about $93 million on employment and training services in fiscal year 2009, and 5 of them served a total of about 55,000 participants in the most recent year for which data were available. Similarly, five of the six programs that target veterans provide seven similar types of employment and training services (see figure 5). According to agency officials, these six programs spent nearly $1.1 billion on employment and training services in fiscal year 2009, and served about 823,000 participants in the most recent year for which data were available. The five programs that target youth provide seven similar types of employment and training services (see figure 6). According to agency officials, four of these programs spent nearly $4.1 billion on employment and training services in fiscal year 2009, and all five programs served about 360,000 participants in the most recent year for which data were available.
Despite this overlap, some individuals within a population group may be eligible for one program but not another because program eligibility criteria differ. For example, one of the programs targeting Native Americans serves only disabled Native Americans residing on or near a federal or state reservation, and another program serves only Native Hawaiians. Similarly, one of the veterans programs serves only homeless veterans, and another is specifically targeted to servicemembers (and their spouses) who are nearing retirement or separation from the military. Some overlapping programs also have slightly different objectives. For example, while the Community-Based Job Training Grants and H-1B Job Training Grants programs aim to prepare workers for careers in high-growth industries, the Career and Technical Education—Basic Grants to States program aims to more fully develop the academic, career, and technical skills of secondary and postsecondary students who enroll in career and technical education programs. Programs that overlap may also provide similar types of services in different ways. The Job Corps program, for example, provides academic instruction and job training in a variety of fields to at-risk youth who live at federally funded campuses, while the YouthBuild program provides academic instruction and job training in construction to disadvantaged youth in their own communities.
Officials from 27 of the 47 programs reported that their agencies have coordinated efforts with other federal agencies that provide similar services to similar populations. For example, the Departments of Labor and Health and Human Services issued a joint letter encouraging state-administered youth programs to partner together using Recovery Act funds to promote subsidized employment opportunities. In addition, an official from the Department of the Interior reported that the agency works with Labor and HHS to coordinate programs for Native Americans. Under law, Native American tribes are allowed significant flexibility to combine funding from multiple programs. An official from an Education program that serves incarcerated individuals noted that representatives from the Departments of Education, Labor, and Justice participate in a federal work group on offender workforce development, and have jointly sponsored a national conference on this topic. Similarly, an official from Labor's Reintegration of Ex-Offenders program stated that the agency coordinates with Justice to design and operate the program's adult ex-offender grants.
The TANF, ES, and WIA Adult programs provide some of the same employment and training services to low-income individuals, despite differences between the programs. Although the extent to which individuals receive the same services from more than one of these programs is unknown, the programs maintain separate administrative structures to provide some of the same services. Labor and HHS officials acknowledged that greater efficiencies could be achieved in delivering employment and training services through these programs, but said they do not believe that these programs are duplicative.
The TANF, ES, and WIA Adult programs provide some of the same employment and training services to low-income individuals, despite differences in the programs' overall goals and the range of services they provide. In our interviews with Labor and HHS officials, they acknowledged that low-income individuals are eligible to receive some of the same employment and training services—including skills assessment, job search, and job referral—from both the TANF and WIA Adult programs. In addition, any individual, including low-income individuals, can receive job search and job referral services from the ES program. Our survey results also indicate that these three programs provide some of the same services (see figure 7). While the TANF program serves low-income families with children, the ES and WIA Adult programs serve all adults, including low-income individuals. Specifically, the WIA Adult program gives priority for intensive and training services to recipients of public assistance and other low-income individuals when program funds are limited. All three programs share a common goal of helping individuals secure employment, and the TANF and WIA Adult programs also aim to reduce welfare dependency. However, employment is only one aspect of the TANF program, which also has three other broad social service goals: to assist needy families so that children can generally be cared for in their own homes, to reduce and prevent out-of-wedlock pregnancies, and to encourage the formation and maintenance of two-parent families. As a result, TANF provides a wide range of other services beyond employment and training, including cash assistance.
To reduce dependency, TANF requires many cash assistance recipients to participate in work activities such as subsidized employment, on-the-job training, or community service. Recent PART reviews of these programs had similar findings regarding the programs' commonalities. The most recent PART reviews of the ES and WIA Adult programs—conducted in 2004 and 2005, respectively—also found that these programs provide some of the same services, and the WIA Adult review found that the program duplicates some job training services offered by TANF. The most recent PART review of the TANF program, conducted in 2005, similarly noted that states may choose to spend TANF funds on employment services that mirror those provided under WIA. However, the extent to which individuals receive the same employment and training services from more than one of these programs is unknown. Labor officials estimated that in program year 2008 approximately 4.5 percent of all WIA Adult participants who received training—about 4,500 of the nearly 100,000 participants who exited the program—were also receiving TANF. However, this figure likely underestimates the number of TANF recipients served by the WIA Adult program, as the program collects information on TANF receipt only if participants receive intensive or training services. In addition, according to Labor officials, WIA Adult participants may choose not to identify themselves as TANF recipients. It is also unclear whether the WIA Adult participants who self-identify as TANF recipients have received TANF employment and training services or other TANF services. Further, HHS officials told us that data are not available at the federal level on the total number of individuals who receive TANF employment and training services because HHS lacks the legal authority to require such reporting. The TANF program requires states to report data on recipients of TANF assistance who participate in work activities as defined by program regulations, but HHS lacks the legal authority to require states to report data on individuals who participate in work activities but do not receive such assistance. Officials noted that laws including the Personal Responsibility and Work Opportunity Reconciliation Act of 1996 (PRWORA)—the legislation that created the TANF program—limit the information that states must report to HHS.
The TANF, ES, and WIA Adult programs maintain separate administrative structures to provide some of the same services to low-income individuals. At the federal level, the TANF program is administered by the Department of Health and Human Services, and the ES and WIA Adult programs are administered by the Department of Labor. At the state level, the TANF program is typically administered by the state human services or welfare agency, and the ES and WIA Adult programs are typically administered by the state workforce agency. By regulation, ES services must be provided by state employees. At the local level, WIA regulations require at least one comprehensive one-stop center to be located in every local workforce investment area. These areas may have the same boundaries as counties, may be multicounty, or may be within and across county lines. Similarly, every county typically has a TANF office. TANF employment and training services may be delivered at TANF offices, in one-stop centers, or through contracts with for-profit or nonprofit organizations, according to HHS officials.
In one-stop centers, ES staff provide job search and other services to ES customers, while WIA staff provide job search and other services to WIA Adult customers. Labor and HHS officials acknowledged that greater efficiencies could be achieved in delivering employment and training services through the TANF, ES, and WIA Adult programs. A 2005 Labor-commissioned study stated that operating separate workforce programs under WIA and TANF duplicates efforts. In interviews, Labor officials acknowledged that simplifying programs’ administrative structures, while not without challenges, may allow some states and localities to administer programs more efficiently. Even so, officials from both agencies emphasized that under current law states and localities decide how best to deliver services. For example, since TANF is a block grant program, states have discretion to deliver services under the type of administrative structure they choose, and some states may choose more efficient structures than others. Nonetheless, HHS and Labor officials said they do not believe that these programs are duplicative. HHS officials said that capacity, geography, and the unique needs of TANF clients could warrant having multiple entities providing the same services, even if they are separately administered. They noted that one-stop centers may not have the staff, space, or desire to serve TANF clients; they may be inconveniently located, especially in predominantly rural states; and they may not be able to address TANF clients’ multiple needs. HHS officials added that although some of the employment and training services delivered by the TANF, ES, and WIA Adult programs at the local level to eligible clients are the same, the ways services are delivered and the services themselves can vary subtly with each locality. Labor officials said they have focused on integrating services to meet clients’ needs and affording states flexibility to respond to local needs rather than only on program efficiency. Labor officials also noted that the ES and WIA Adult programs are specific funding streams and as a result, they are unlikely to fund the same services for the same individuals. For example, one-stop centers typically use ES funding to provide core services, such as job search and job referrals, while they typically use WIA Adult funding to provide intensive and training services. States are required by WIA to attest in plans they provide to Labor that their ES and WIA programs have agreements in place to coordinate service delivery across the two programs. Colocating the employment and training services provided by the TANF, ES, and WIA Adult programs may increase administrative efficiencies. WIA requires numerous federally funded workforce development programs, including the ES and WIA Adult programs, to provide their services through the one-stop system. Programs may be colocated within one-stop centers, electronically linked, or linked through referrals. While WIA does not require TANF employment and training services to be provided through one-stop centers, states and localities have the option to include TANF as a partner in their one-stop systems. We have previously reported that colocating services—specifically, providing services from different programs in the same physical location—can result in improved communication among programs, improved delivery of services for clients, and elimination of duplication. 
While colocating services does not guarantee efficiency improvements, it affords the potential for sharing resources and cross-training staff, and may lead, in some cases, to the consolidation of administrative systems, such as information technology systems. A 2004 study commissioned by HHS found that successful coordination between WIA programs and the TANF program is promoted when WIA and TANF staffs are colocated or communicate regularly to discuss specific cases and policies, and when program management functions, case management functions, and administrative systems are shared across agencies. Labor and HHS officials told us that they encourage states to consider colocating TANF employment and training services with ES and WIA Adult services in one-stop centers, but said that they leave these decisions up to states. While Labor's policy is that all mandatory one-stop partner programs—including the ES and WIA Adult programs—should be physically colocated in one-stop centers to the extent possible, neither Labor nor HHS currently has a policy in place that specifically promotes the colocation of TANF employment and training services in one-stop centers. According to officials, Labor's policy is that colocation is one of multiple means for achieving service integration. While ES and WIA Adult services are generally colocated in one-stop centers, the colocation of TANF employment and training services in one-stop centers is not as widespread. We reported in 2007 that nearly all states provided ES and WIA Adult services on site in the majority of their one-stop centers, although nine states also operated at least one standalone ES office that was unaffiliated with the one-stop system. In the same 2007 report, we found that 30 states provided the TANF program on site at a typical comprehensive one-stop center. These states accounted for 57 percent of the comprehensive one-stop centers nationwide (see table 3). The remaining 20 states, where the TANF program was not available on site at a typical comprehensive one-stop center, accounted for 43 percent of comprehensive one-stop centers. These are the most recent data available, as Labor and HHS officials told us that they do not routinely collect data on the extent to which TANF services are colocated in one-stop centers nationwide, and HHS lacks the authority to require states to routinely report this information.
Labor and HHS officials said that states and localities may face challenges to colocating TANF employment and training services in one-stop centers. Obstacles to colocation may include those raised earlier, such as capacity and geography, but may also include leases, differing program cultures, the need for partner programs to help fund the operating costs of one-stop centers, and trade-offs regarding the services with which TANF is colocated. Specifically, HHS officials told us that states and localities may have multiyear rental contracts for office space and may not have room to house additional staff. In addition, Labor and HHS officials said that differences between the client service philosophies of the TANF program and the ES and WIA Adult programs may present challenges to colocation. HHS officials noted that the TANF program takes a more holistic approach to helping individuals become self-sufficient by addressing the variety of needs that may affect their ability to obtain employment, such as child care and transportation.
The need for partner programs to fund one-stop center operating costs may also be a challenge to colocation. When TANF employment and training services are colocated in one-stop centers, TANF may be expected to contribute to these operating costs, in addition to paying operating costs associated with providing other TANF services in other locations. Finally, HHS officials noted that when TANF employment and training services are not colocated in one-stop centers, they are typically colocated with other services for low-income families, such as SNAP, formerly known as the Food Stamp Program, and Medicaid. Officials acknowledged that colocating TANF employment and training services in one-stop centers may mean that they are no longer colocated with these other services, although Florida, Texas, and Utah provide SNAP services through one-stops along with TANF services, and Utah also provides Medicaid through one-stops. Officials said that in states where this is not the case, the potential trade-off would need to be considered. Legislative proposals to make TANF a mandatory partner in the one-stop system have been introduced but have not been made into law. In the 109th Congress, the WIA reauthorization bills passed by the House and the Senate included provisions to make TANF a mandatory partner, which would have required TANF employment and training services to be provided through one-stop centers nationwide. However, WIA has not yet been reauthorized, and according to Labor officials, the Administration has not taken a position on whether TANF should be a mandatory partner. Nevertheless, officials told us that about half of states have made TANF a partner in their one-stop systems. In addition, about half of states used TANF funds to pay for a portion of their one-stop center infrastructure costs in program year 2005. Consolidating the administrative structures of the TANF, ES, and WIA Adult programs may increase efficiencies and reduce costs. However, we found that data on the cost savings associated with such consolidation initiatives are not readily available. Florida, Texas, and Utah have consolidated the state workforce and welfare agencies that administer the TANF, ES, and WIA Adult programs, among other programs. In Utah, the workforce agency administers the TANF program in its entirety. In Florida and Texas, the workforce agencies administer only that part of TANF related to employment and training services. In all three states, the one-stop centers serve as portals to a range of social services, including TANF. Officials from these three states told us that consolidating agencies led to cost savings through the reduction of staff and facilities. For example, a Utah official said that the state reduced the number of buildings in which employment and training services were provided from 104 to 34. According to a Texas official, Texas also privatized 3,000 full-time staff equivalents (FTE) at the local level, which reduced the pension, retirement, and insurance costs that had previously been associated with these state positions. Officials in the three states, however, could not provide a dollar figure for the cost savings that resulted from consolidation. Additionally, Labor and HHS officials told us that reliable data are not available to compare the states’ costs for serving TANF, ES, and WIA Adult participants with average costs nationwide. 
These three programs do not require states to report data on costs per participant, and the state officials we spoke with said that the data they could provide would not be comparable with data from other states. State officials also told us that consolidation improved the quality of services for participants in the WIA Adult and TANF programs. An official in Utah noted that the consolidation allowed job seekers to apply for assistance they had not considered in the past; allowed employment counselors to cluster services that made sense for the client; and allowed clients to experience seamless service delivery. These benefits reflected what the official said was one of the visions of consolidation: having one employment plan per client, rather than multiple employment plans for clients served by multiple programs. While Florida officials acknowledged that a subset of TANF clients have significant barriers to employment—such as mental health issues—that one-stop centers may not be well equipped to address, officials said that the one-stops in their state are able to address the employment and training needs of the majority of TANF clients. When asked about the quality of the TANF and workforce programs in Florida, Texas, and Utah, Labor officials were not aware of any performance problems in these programs and added that they view all three states as forerunners in program improvement efforts. That said, they noted that Utah may not be representative of other states, due to its relatively small and homogeneous population. According to HHS officials, the three states all met federal work participation rate requirements in 2008, but there is no established means for comparing the employment performance of state TANF programs, so it is not possible to determine whether these states are more or less effective than other states in accomplishing the employment goals of TANF. In addition, officials from the Center for Law and Social Policy (CLASP) said that Texas and Florida may place more of an emphasis on quickly finding work for TANF clients than other states.
Even with the benefits identified by state officials, consolidation may have its challenges. An official in Utah noted that the reorganization of state agencies and staff was time-consuming and costly, and it took several years before any cost savings were realized. For example, developing a shared database across programs increased costs temporarily. In addition, when states consolidate their agencies, they must still follow separate requirements for TANF and WIA. A 2004 article on service integration by authors from CLASP and the Hudson Institute concluded that states can take significant steps under current law to integrate TANF and WIA services, but it also noted the difficulty in administering separate programs with different requirements. The article specifically noted differences in work requirements, program performance measures, and reporting requirements, among others. A Utah official said that it was important for program administrators to be knowledgeable about these separate reporting requirements and processes across the multiple federal agencies that oversee these programs. Similarly, this official said that direct service staff needed to be knowledgeable about multiple programs and how to allocate costs across these programs. For states that have not consolidated their workforce and welfare agencies, not knowing what actions are allowable under the law may present a challenge to consolidation.
According to the article on service integration, states face some legal barriers to fully integrating TANF and WIA services, but if they do not know what is allowable under the law, they may not always exercise the full range of options available to them.
To the extent that colocation and consolidation would reduce administrative costs, funds could potentially be available to serve more clients or for other purposes. States spend a part of each program's federal appropriation on administration. For the TANF program, we estimate that states spent about $160 million to administer employment and training services in fiscal year 2009. As defined in regulation, TANF administrative costs include costs for general program administration and coordination, such as salaries and benefits for staff performing administrative and coordination activities, and indirect administrative costs that support these activities. Administrative costs do not include salaries and benefits for staff providing program services or the direct administrative costs associated with providing these services, such as supplies, equipment, travel, postage, utilities, and rental and maintenance of office space. According to a Labor official, the administrative costs for the WIA Adult program—defined in regulations to include costs for general program administration and coordination, including related oversight and monitoring, and excluding costs related to the direct provision of workforce investment services—were at least $56 million in program year 2009. (Program year 2009 ran from July 1, 2009, through June 30, 2010. These administrative cost figures do not include Recovery Act funds, which also could have been used for administrative costs.) Data on the administrative costs associated with the ES program are not available, as they are not a separately identifiable cost in the legislation. Labor officials said that, on average, the agency spends about $4,000 for each WIA Adult participant who receives training services. Depending on the reduction in administrative costs associated with colocation and consolidation, these funds could be used to train potentially hundreds or thousands of additional individuals. This is particularly important for programs like the WIA Adult program, where federal funding has decreased overall from fiscal years 1999 to 2008.
Even in the one-stop service delivery environment set forth in WIA, states and localities have substantial flexibility in determining the administrative structures they use to deliver employment and training services. The TANF block grant similarly gives states and localities considerable flexibility in delivering services, including employment and training services. This administrative flexibility allows programs to deliver services in a way that best meets local needs. However, in the face of increasingly constrained budgets at both the federal and state levels, this is an opportune time to explore options for administrative cost savings. Our work on the WIA Adult, ES, and TANF programs has shown that there is some duplication with regard to their administrative structures—they maintain the means to provide some of the same services to the same population. However, the flexibility afforded these programs under the law allows them to take steps to integrate services that may increase administrative efficiencies. In taking such steps, it is important to recognize that improvements in administrative efficiency may not necessarily result in improvements in program effectiveness.
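To give a rough sense of scale for the point above that reduced administrative costs could pay for training many more people, the sketch below combines the approximately $4,000 average cost per WIA Adult training participant cited by Labor officials with a range of hypothetical savings amounts; the savings figures are illustrative assumptions, not estimates from this report.

```python
# Illustrative only: translates hypothetical administrative savings into the
# approximate number of additional WIA Adult participants whose training they
# could fund, using the roughly $4,000 average per-participant training cost
# cited by Labor officials. The savings amounts are assumptions, not report data.

avg_training_cost = 4_000                                   # dollars per participant

hypothetical_savings = [1_000_000, 10_000_000, 50_000_000]  # assumed savings levels

for savings in hypothetical_savings:
    additional_trained = savings // avg_training_cost
    print(f"${savings:,} in administrative savings -> about "
          f"{additional_trained:,} additional participants trained")

# Even $1 million in savings corresponds to roughly 250 additional participants,
# which is why modest administrative savings could translate into hundreds or
# thousands of additional individuals trained.
```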
Given that the ES and WIA Adult programs are already colocated in most one-stop centers, colocating TANF employment and training services with these programs provides the most immediate opportunity for efficiency improvements. However, achieving the potential benefits of colocation may require states and localities to address a variety of challenges: how to serve additional clients given the limited capacity of one-stop centers and potential lease restrictions; how to navigate philosophical differences between programs and address the multiple needs of TANF clients in the one-stop center setting; how to ensure that services are geographically accessible; whether the potential benefits of colocating TANF in one-stop centers outweigh the potential costs of no longer colocating these services with other services for low-income families, in some cases; and whether, and to what extent, TANF will contribute to one-stop center operating costs. Still, these challenges are not insurmountable, given that over half of the states offer TANF services on site at a typical one-stop center. Similarly, consolidating the administrative structures of these programs would potentially conserve resources and better serve customers by providing the one-stop convenience established by WIA. Florida, Texas, and Utah have taken the initiative to consolidate their state workforce and welfare agencies, and report that they reduced administrative costs and improved services for job seekers. However, consolidation is not without challenges. In particular, states that have not yet consolidated their workforce and welfare agencies may not know how to integrate services in a way that is allowable under the law. While states and localities have undertaken some potentially promising initiatives to achieve greater administrative efficiencies, a major obstacle to further progress on this front is that little information is available about the strategies and results of these initiatives, including improvements to services and reductions in costs. Thus, it is unclear to what extent practices in these states could serve as models for others. In addition, little is known about the incentives states and localities have to undertake such initiatives and whether additional incentives may be needed.
To facilitate further progress by states and localities in increasing administrative efficiencies in employment and training programs, we recommend that the Secretaries of Labor and HHS work together to develop and disseminate information that could inform such efforts. This should include information about state initiatives to consolidate program administrative structures and about state and local efforts to colocate new partners, such as TANF, at one-stop centers. Information on these topics could address challenges faced, strategies employed, results achieved, and remaining issues. As a part of this effort, Labor and HHS should examine the incentives for states and localities to undertake such initiatives and, as warranted, identify options for increasing such incentives.
We provided the Departments of Agriculture, Defense, Education, HHS, the Interior, Justice, Labor, Veterans Affairs, and the Environmental Protection Agency (EPA) with the opportunity to comment on a draft of this report. Written comments from Education, HHS, and Labor appear in appendixes XII, XIII, and XIV. In addition to the comments discussed below, Education, HHS, Interior, Labor, and VA provided technical comments that we incorporated where appropriate.
Agriculture, Defense, EPA, and Justice officials stated that they had no comments. Labor concurred with our recommendation and said that while it continues to work with its federal partners to ensure access to services, more can be done to disseminate information to the workforce and social service communities. It highlighted the uniqueness of its programs and noted that WIA provides flexibility to states and local areas. HHS agreed that states would benefit from the department developing and disseminating information in accordance with our recommendation and said it shared the view that it is important to minimize duplication, maximize administrative efficiency, and develop service structures that ensure that individuals in need receive appropriate and effective employment services. HHS noted that it lacks legal authority to mandate increased TANF-WIA coordination or to create incentives for such efforts, cautioned against the assumption that doing so would necessarily result in cost savings, and noted that some overlap is necessary and appropriate in order to provide coordinated and more comprehensive services. It also said that while there is much to learn from the experience of Florida, Texas, and Utah, there is no evidentiary basis from which it can confidently state that the performance of these states is either better or worse than states with less integration. We revised the report to add references to HHS's limited legal authority and noted the Department's perspective on the success of states' integration efforts. HHS recommended that we clearly distinguish between employment and training programs and broad, multipurpose block grants that have multiple allowable uses, including employment and training, and said that it is not accurate to count multipurpose block grants as employment and training programs. While we agree that multipurpose block grant programs have uses other than employment and training, each program we included in our study had an important component related to employment and training and met our definition of an employment and training program. To clarify the report, we modified it to say that multipurpose block grants with broader missions are included in our list of programs. HHS also recommended that the report provide data on total spending for employment and training for a set of years, rather than only comparing 2002 to 2009, because Recovery Act spending made 2009 a year with exceptional circumstances in terms of funding. While we did not collect spending data for fiscal years 2003 through 2007, our report provides spending data for another year prior to passage of the Recovery Act—fiscal year 2008 (see figure 2). We also attributed the increase in funding for these programs since our 2003 report to the temporary funding provided by the Recovery Act.
In its comments, Education recommended that we exclude from the report all programs authorized by the Perkins Act (a total of five programs) because the primary purpose of these programs is increasing students' academic, career, and technical skill levels. Education disagreed with our rationale for including these programs and stated that the statutory amendments that Congress made in 2006 during the last reauthorization broadened the educational purposes of the Perkins Act to emphasize placing students in further education.
During the course of our data collection, Education officials had informed us that these programs met our definition of an employment and training program, but later asked us to remove the programs when they reviewed the draft report. While we agree that these programs have an educational purpose, we maintain that each of these programs meets our definition of an employment and training program, based on information provided to us by Education. For example, Education officials reported that the five programs provide various types of employment and training services, including some that were categorized as primary services, such as occupational or vocational training, or on-the-job training (see appendix IX). Education officials also reported that three of these five programs track entered employment and all five programs track credential attainment as outcome measures (see appendix V). Education also recommended that for programs authorized by the Perkins Act we delete from our report all estimates of funds used on employment and training activities and of the number of participants who received employment and training services. We revised the report to delete this information because Education said the data it reported to us were not accurate and could not be reliably estimated.
As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies of this report to the Secretary of Agriculture, Secretary of Defense, Secretary of Education, Administrator of the Environmental Protection Agency, Secretary of Health and Human Services, Secretary of the Interior, Attorney General, Secretary of Labor, and Secretary of Veterans Affairs, as well as to appropriate congressional committees. This report will be made available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7215 or Sherrilla@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs can be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix XV.
We identified federally funded employment and training programs by reviewing the Catalog of Federal Domestic Assistance (CFDA) and the American Recovery and Reinvestment Act of 2009 (Recovery Act) and by interviewing agency officials. Using keywords related to employment and training, we conducted a systematic search in the CFDA to identify potential employment and training programs. In addition, to identify potential employment and training programs that were expanded under the Recovery Act, we searched the CFDA to identify programs that received Recovery Act funding. We reviewed the Recovery Act and interviewed agency officials to identify any other potential employment and training programs that were not included in the CFDA. From this search, we identified 100 potential employment and training programs. We did not conduct a legal analysis in order to identify the programs or to determine their objectives, requirements, or goals. We gathered additional information about the programs identified in our search to determine whether they should be included in our review. Using the CFDA program listings, we gathered information about program objectives, restrictions on the use of program funding, and program funding levels.
To gather further information to assist us in making a determination, we reviewed program fact sheets and other relevant information available on agency Web sites. When necessary, we also met with agency officials to discuss programs in more detail. We limited our initial list of 100 programs to those that are specifically designed to enhance the job skills of individuals in order to increase their employability, identify job opportunities, and/or help job seekers obtain employment. We included programs with broader missions if a primary purpose of the program was to provide employment and training assistance.
We excluded any programs that met one or more of the following criteria:
Program objectives do not explicitly include helping job seekers enhance their job skills, find job opportunities, or obtain employment.
Program does not provide employment and training services itself.
Program is small or is a component of a larger employment and training program, such as a pilot or demonstration program.
Many of the excluded programs can be grouped into the following categories:
Economic development programs that aim to increase job opportunities but do not provide services to individuals to enhance their job skills, identify job opportunities, or find employment.
Programs that aim to achieve broad workforce-related goals, such as increasing educational opportunities for minority individuals in particular fields or improving the status of and working conditions for wage-earning women, but do not provide employment or training services themselves.
Education programs that fund student loans for educational expenses, initiatives for student recruitment and retention, or other student support services.
Programs that support training for training providers, such as vocational rehabilitation specialists who assist disabled individuals seeking employment, or other programs that support job-specific training for individuals who are already employed rather than provide training for the general public.
This process led to 52 programs being initially included in our review. Forty-nine of these programs were operational in fiscal year 2009, while 3 of them were created by the Recovery Act and were not operational in fiscal year 2009. As a result, we removed those three programs from our list. Once our determinations were made, we sent e-mails to agency liaisons asking them to confirm the list of programs to be included in and excluded from our review and the names and contact information for the officials who would be responsible for completing the questionnaire. When requesting confirmation, we asked that the list be reviewed by the agency office that would ultimately comment on our draft report. Agencies confirmed our final inclusion and exclusion decisions. After deploying our questionnaire, officials provided us with new information on two programs—the Refugee and Entrant Assistance—Wilson/Fish program and the Indian Job Placement—United Sioux Tribes Development Corporation program. After reviewing this information, we determined that these programs did not meet our definition of an employment and training program, and we excluded them from our review. In addition, Department of Education officials said that five of their programs should be excluded from our list, even though they had confirmed the list at the outset and completed the questionnaire. They said the programs focused on education and training and had broader goals than employment.
We did not exclude these programs, because each one has an important component related to employment and training and meets our definition. See table 4 for a full list of excluded programs. At the end of this process, we had confirmed that 47 programs met our definition and should be included in our review.
We developed a Web-based questionnaire to collect information on federal employment and training programs. The questionnaire included questions on objectives, eligibility requirements, appropriations levels, the amount of funds used to provide employment and training services, program services, population groups served, and outcome measures. In addition, to gauge whether the Recovery Act modified programs, we developed questions that asked respondents to identify the amount of appropriations that the Recovery Act provided and whether the Recovery Act modified program objectives, target populations, program activities, and outcome measures. To minimize errors arising from differences in how questions might be interpreted and to reduce variability in responses that should be qualitatively the same, we conducted pretests with six federal officials over the telephone. To ensure that we obtained a variety of perspectives on our questionnaire, we selected officials from multiple agencies within the Departments of Education and Labor, the two departments with the largest number of programs. Based on feedback from these pretests, we revised the questionnaire in order to improve question clarity. For instance, in response to a Department of Education official's comment that it was unclear whether our budget-related questions pertained to federal or state funding, we modified the budget-related questions to clarify that we were asking for information on federal funding only. We conducted an additional pretest with budget staff from the Department of Labor to ensure that the budget-related terms used in the questionnaire were understandable. After completing the pretests, we administered the survey. On June 18, 2010, we sent an e-mail announcement of the questionnaire to the agency officials responsible for the programs selected for our review, notifying them that our online questionnaire would be activated within a week. On June 23, 2010, we sent a second e-mail message to officials in which we informed them that the questionnaire was available online and provided them with unique passwords and usernames. We made telephone calls to officials and sent them follow-up e-mail messages, as necessary, to clarify and gain a contextual understanding of their responses. We received completed questionnaires from 47 programs, for a 100 percent response rate. For the three programs that were created by the Recovery Act but were not operational in fiscal year 2009, we sent a list of questions to officials responsible for these programs in which we asked them to provide information on the program objectives, the population groups that would be served, and the types of services that would be provided. We used standard descriptive statistics to analyze responses to the questionnaire. Because this was not a sample survey, there are no sampling errors. To minimize other types of errors, commonly referred to as nonsampling errors, and to enhance data quality, we employed recognized survey design practices in the development of the questionnaire and in the collection, processing, and analysis of the survey data.
For instance, as previously mentioned, we pretested the questionnaire with federal officials to minimize errors arising from differences in how questions might be interpreted and to reduce variability in responses that should be qualitatively the same. We further reviewed the survey to ensure that the ordering of survey sections was appropriate and that the questions within each section were clearly stated and easy to comprehend. To reduce nonresponse, another source of nonsampling error, we sent out e-mail reminder messages to encourage officials to complete the survey. In reviewing the survey data, we performed automated checks to identify inappropriate answers. We further reviewed the data for missing or ambiguous responses and followed up with agency officials when necessary to clarify their responses. For selected large programs, we reviewed information on agency Web sites, prior GAO reports, and pertinent regulations and laws to corroborate the budgetary and program services information reported in the questionnaire. On the basis of our application of recognized survey design practices and follow-up procedures, we determined that the data were of sufficient quality for our purposes.
To identify areas of overlap among employment and training programs, we reviewed prior GAO reports and information reported by federal agency officials in our survey. Based on our prior work, we determined that overlap occurs when programs provide at least one similar service to a similar population. After reviewing survey responses regarding the primary population groups served by programs and the services they provide, we categorized programs according to the primary population group served and identified programs within each category that provide similar services. In order to report the survey results in a logical and consistent manner, we combined or expanded some of the population group categories used in the survey and also revised the primary population group assigned to some programs.
To identify areas of potential duplication across programs, we applied a multiphase selection process to identify a few programs for more in-depth analysis. The starting point of the selection process was the assumption that the potential for duplication is greatest when programs have similar eligibility requirements and provide similar services to the same population groups to achieve similar objectives. First, we categorized programs according to the primary population group served and consulted program descriptions from the CFDA to select those programs from each category that have similar eligibility requirements. Next, we evaluated the services provided by programs, based on the findings of our 2003 review, to select those programs from each primary population group category that provide similar services. Third, based on the assumption that duplication is more likely to occur among programs administered across different agencies, we selected the primary population group categories that contained programs administered by more than one federal agency. The programs within these categories were selected for the next step of our selection process. Using the CFDA program descriptions, we reviewed the objectives of the remaining programs to select those programs with similar objectives.
Finally, we reviewed program financial data from our 2001 review to select three programs that were among the largest programs in terms of the amount spent on employment and training services—the Department of Labor's WIA Adult Program, Labor's Employment Service/Wagner-Peyser Funded Activities Program, and the Department of Health and Human Services' Temporary Assistance for Needy Families Program. Each of these programs spent between $750 million and about $1 billion on employment and training services in fiscal year 1999, the time period assessed in our 2001 review. (A simplified illustration of this screening logic appears at the end of this appendix.) To determine the extent of duplication across these programs, we interviewed federal agency officials, state officials, and officials from other organizations, and we obtained additional information. When meeting with agency officials, we discussed each program's structure, including service locations, staffing levels and staff responsibilities, and coordination efforts with agencies that provide similar programs. In addition, we obtained documentation regarding the administrative costs associated with providing employment and training services. We reviewed relevant reports and interviewed officials from three organizations familiar with these programs—the Center for Law and Social Policy, the American Public Human Services Association, and the National Governors Association—to obtain their perspectives on the extent of duplication across the three selected programs. We also reviewed documentation and conducted interviews with officials in Florida, Texas, and Utah, three of the states that are considered to be the furthest along in their efforts to consolidate the administrative structures for these and other programs. To analyze the studies identified by survey respondents as impact and performance evaluations of the 47 surveyed employment and training programs they managed, we reviewed each study cited to determine whether the criteria for each evaluation type, as specified in the questionnaire, were met. Our questionnaire asked respondents whether their program had been evaluated by OMB's PART since fiscal year 2004. For respondents who indicated that their programs had undergone a PART review, we searched OMB's PART Web site (www.expectmore.gov) in order to verify that a review had been completed. Of the 47 surveyed programs, 23 respondents answered that they had undergone a PART review since 2004. The process of verifying these answers on OMB's PART Web site clarified that 17 of the 23 programs' responses were correct. The other 6 programs' responses were inaccurate by 2 years or less: all 23 of the programs answering positively to this question have undergone a PART review since 2002, but only 17 of those reviews took place during or since 2004. In the course of our work, we found that one additional program was assessed using OMB's PART in 2004, but this review was not identified by the program official who completed our survey. The questionnaire asked respondents whether an impact study had been completed since 2004 to evaluate program performance with regard to employment and training activities and, if so, to provide a citation for at least one of these studies. An impact study assesses the net effect of a program by comparing program outcomes with an estimate of what would have happened in the absence of the program. This type of study is conducted when external factors are known to influence the program outcomes, in order to isolate the program's contribution to the achievement of its objectives.
Of the survey's 47 respondents, 8 provided at least one citation of what they believed to be an impact study. Of the 8 cited studies, we determined that 5 can accurately be described as completed impact studies. To make this assessment, we reviewed the methodology section of each study, to the extent it had one. Two of the studies cited were deemed to be too methodologically limited to be classified as an impact study based on the description contained in the studies, and one of the studies was not yet completed at the time of our review. Our questionnaire also asked respondents whether any studies other than impact studies had been completed since 2004 to evaluate the program's performance with regard to employment and training activities and, if so, to provide a citation for at least one of them. Of the survey's 47 respondents, 13 provided at least one citation of a study that has evaluated program performance with regard to employment and training activities. In addition, one study cited by a program official as an impact study that was determined not to be an impact study was considered in this step. We determined that 13 of these 14 studies cited were based on research designs that allowed for the measurement of program performance with regard to employment and training activities and had been completed since 2004. One study cited in the questionnaire by a program official was not made available for review upon follow-up because agency officials said it had not yet been cleared for distribution. To make this assessment, we focused on the methodology section of the reports, to the extent they had one. We conducted this performance audit from November 2009 through January 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Notes accompanying the appendix tables indicate the following: Tribal Work Grants is also known as the Native Employment Works program; for some programs, agency officials were unable to estimate the amount spent on employment and training activities or the amount that will be used for such activities; and for the following programs, officials were unable to provide an estimate of the number of individuals who received employment and training services: Indian Vocational Training—United Tribes Technical College, Brownfield Job Training Cooperative Agreements, Migrant and Seasonal Farmworkers Program, Career and Technical Education—Basic Grants to States, Career and Technical Education—Indian Set-aside, Native Hawaiian Career and Technical Education, Refugee and Entrant Assistance—Targeted Assistance Discretionary Program, Refugee and Entrant Assistance—Targeted Assistance Grants, Second Chance Act Prisoner Reentry Initiative, and Tribally Controlled Postsecondary Career and Technical Institutions program grantees.
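To make the overlap and potential-duplication screening described in this appendix concrete, the following sketch expresses the same logic in code. It is purely illustrative: the program records, similarity judgments, and spending figures are hypothetical placeholders, and the sketch is a simplified reading of our screening steps rather than code used in the review.

```python
# Illustrative sketch of the overlap and potential-duplication screens.
# All program records below are hypothetical, not GAO survey data.
from itertools import combinations

programs = [
    {"name": "Program A", "agency": "DOL", "group": "low-income adults",
     "services": {"job search", "occupational training"},
     "objectives": {"employment"}, "spending_millions": 900},
    {"name": "Program B", "agency": "HHS", "group": "low-income adults",
     "services": {"job search", "case management"},
     "objectives": {"employment", "self-sufficiency"}, "spending_millions": 800},
    {"name": "Program C", "agency": "ED", "group": "youth",
     "services": {"occupational training"},
     "objectives": {"education"}, "spending_millions": 300},
]

def overlaps(p, q):
    # Overlap: same primary population group and at least one similar service.
    return p["group"] == q["group"] and bool(p["services"] & q["services"])

def duplication_candidate(p, q):
    # Tighter screen: overlapping programs administered by different agencies
    # that share at least one objective.
    return (overlaps(p, q) and p["agency"] != q["agency"]
            and bool(p["objectives"] & q["objectives"]))

overlapping_pairs = [(p["name"], q["name"])
                     for p, q in combinations(programs, 2) if overlaps(p, q)]

candidate_names = {name
                   for p, q in combinations(programs, 2)
                   if duplication_candidate(p, q)
                   for name in (p["name"], q["name"])}

# Final step: keep the largest remaining candidates, by spending, for in-depth review.
selected = sorted((p for p in programs if p["name"] in candidate_names),
                  key=lambda p: p["spending_millions"], reverse=True)[:3]

print("Overlapping pairs:", overlapping_pairs)
print("Selected for in-depth review:", [p["name"] for p in selected])
```

In the review itself, the comparable judgments were made from survey responses, CFDA program descriptions, and prior GAO work rather than from coded data fields.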
Programs identified in the accompanying figure: Temporary Assistance for Needy Families (HHS); National Guard Youth Challenge Program (DOD); Disabled Veterans' Outreach Program (DOL); Homeless Veterans Reintegration Project (DOL); Local Veterans' Employment Representative Program (DOL); Registered Apprenticeship and Other Training (DOL); American Indian Vocational Rehabilitation Services (ED); Migrant and Seasonal Farmworkers Program (ED); State Supported Employment Services Program (ED); Refugee and Entrant Assistance—Voluntary Agency Matching Grant Program (HHS); Employment Service/Wagner-Peyser Funded Activities (DOL); Vocational Rehabilitation for Disabled Veterans (VA); and Community-Based Job Training Grants (DOL).

Notes to the figure state the following. Eight programs provided citations of what they believed to be impact studies. We evaluated the methodology of each study and determined that 5 of them met the definition of an impact study provided in our questionnaire—a study that assessed the net effect of a program by comparing program outcomes with an estimate of what would have happened in the absence of the program—and had been completed since 2004; the other studies either did not meet our definition or were not completed. Twenty-three programs reported that their programs have been reviewed by OMB's PART since 2004. We verified this against OMB's PART Web site (www.expectmore.gov) and determined that 17 of the 23 programs have been reviewed by PART since 2004; the other 6 were reviewed by PART in 2002 or 2003. Thirteen programs provided citations for at least one other study that evaluated program performance with respect to employment and training activities since 2004. We reviewed these 13 studies, as well as another study that was identified as an impact study but did not meet our definition, and determined that 13 of the 14 studies evaluated program performance with respect to employment and training activities and had been completed since 2004. One study was cited but not made available for our review because it had not been cleared by the agency for distribution at the time of our survey.

Programs identified in a second figure include: Career and Technical Education—Basic Grants to States (ED); Employment Service/Wagner-Peyser Funded Activities (DOL); H-1B Job Training Grants (DOL); Registered Apprenticeship and Other Training (DOL); American Indian Vocational Rehabilitation Services (ED); Career and Technical Education—Indian Set-aside (ED); Indian Vocational Training—United Tribes Technical College (DOI); Native Hawaiian Career and Technical Education (ED); Tribal Work Grants (HHS); Rehabilitation Services—Vocational Rehabilitation Grants to States (ED); State Supported Employment Services Program (ED); Grants to States for Workplace and Community Transition Training for Incarcerated Individuals (ED); Refugee and Entrant Assistance—Social Services Program (HHS); Refugee and Entrant Assistance—Targeted Assistance Discretionary Program (HHS); Refugee and Entrant Assistance—Targeted Assistance Grants (HHS); Refugee and Entrant Assistance—Voluntary Agency Matching Grant Program (HHS); Local Veterans' Employment Representative Program (DOL); Conservation Activities by Youth Service Organizations (DOI); and, under "Other," Brownfield Job Training Cooperative Agreements (EPA), Senior Community Service Employment Program (DOL), and WANTO (DOL). Notes to this figure state that, for certain of these programs, the populations served are solely people with disabilities, and in most cases, those with significant disabilities, and that the primary population group served by the Brownfield Job Training Cooperative Agreements program is residents of Brownfield-impacted communities.

Nearly all of the 47 programs offered participants a wide range of employment and training services in fiscal year 2009. For example, 43 programs offered participants at least 6 services, and 28 programs offered participants 10 or more services.
Across all programs, the most commonly provided services were employment counseling and assessment, job readiness skills training, occupational or vocational training, and job search or job placement activities (see fig. 8). Agency officials also indicated whether each service provided by their programs was a primary or secondary service. The two most commonly provided primary services were occupational or vocational training and job search or job placement activities. Program provides career guidance and academic counseling that includes information regarding career awareness and planning, career options, financial aid, and postsecondary options, including baccalaureate degree programs. To provide adjustment assistance to qualified workers adversely affected by foreign trade which will assist them to obtain suitable employment. Under the Recovery Act, group eligibility was significantly expanded, and benefits were enhanced to focus more on retraining opportunities. A TAA beneficiary must: (1) be found by the Labor Department to have been adversely affected by increased imports or a shift in production to all countries, (2) be certified by the Secretary of Labor as eligible to apply for adjustment assistance, and (3) meet the following individual requirements: (a) his or her unemployment or underemployment must have begun on or after the impact date specified in the Secretary’s certification as the beginning of the import-impacted unemployment or underemployment; (b) his or her unemployment must begin before the expiration of the 2- year period beginning on the date on which the Secretary issued the certification for his or her group or before the termination date, if any, specified in the certification. In addition, to be eligible for weekly trade readjustment allowance (TRA) payments he or she must: (1) have been employed with wages at a minimum of $30 per week by the import-affected firm for at least 26 of the previous 52 weeks including the week of total layoff (up to 7 weeks of employer-authorized leave may be counted as qualifying weeks of employment or up to 26 weeks of disability compensation); and (2) be enrolled in or have completed a TAA- approved job training program, unless the determination is made that training is either not feasible or not appropriate, in which case a waiver of the training requirement may be issued. To receive TRA, the claimant must be enrolled in an approved training program within 26 weeks of the Secretary’s issuance of the certification or within 26 weeks of the worker’s most recent qualifying separation, whichever is later. Under the Recovery Act, group eligibility requirements now allow for certification of service workers as well as those who produce an article. In addition, government employees can now be certified when tasks are shifted abroad. Finally, workers who produce component parts of a product are also eligible. The purpose of the WIA Dislocated Workers program is to reemploy dislocated workers, improve the quality of the workforce, and enhance the productivity and competitiveness of the nation’s economy by providing workforce investment activities that increase the employment, retention, and earnings of participants, and increase occupational skill attainment by the participants. This program is designed to increase employment, as measured by entry into unsubsidized employment, retention in unsubsidized employment after entry into employment, and extent of recovery of prior earnings. 
Individuals eligible for assistance through the applicants receiving the funds include workers who have lost their jobs, including those dislocated as a result of plant closings or mass layoffs, and are unlikely to return to their previous industry or occupation; formerly self-employed individuals; and displaced homemakers who have been dependent on income of another family member, but are no longer supported by that income. Priority of Service is given to veterans and other covered persons. The NEGs have identical eligibility to the above and also includes certain military personnel and defense employees. Services through NEGs are targeted on individuals affected by mass layoffs, natural disasters, federal government actions, and other circumstances specified by the Secretary. WIA National Emergency Grants (Labor) The purpose of the National Emergency Grants program is to temporarily expand service capacity at the state and local levels by providing time-limited funding assistance in response to significant dislocation events. Significant events are those that create a sudden need for assistance that cannot reasonably be expected to be accommodated within the ongoing operations of the formula-funded Dislocated Worker program, including the discretionary resources reserved at the state level. Individuals who are eligible for assistance vary by type of National Emergency Grant project, however they must meet the criteria provided in the Workforce Investment Act: National Emergency Grants - Application Procedures, 69 Federal Register 23052 at 23057 (Apr. 27, 2004). To provide discretionary grant funds to state vocational rehabilitation agencies and public nonprofit organizations for special projects and demonstrations which hold promise of expanding and otherwise improving services to individuals with disabilities over and above those provided by the Basic Support Program administered by states. Individuals with disabilities and individuals with significant disabilities as defined in Sections 7(9)(A)(B) and 7(20)(A), respectively, of the Rehabilitation Act of 1973, as amended. National Farmworker Jobs Program (Labor) To provide job training and other employability development services and related assistance for those individuals, including their dependents, who are primarily employed in agricultural labor that is characterized by chronic unemployment and underemployment. The ultimate beneficiaries are low income individuals and their dependents who have, during any consecutive 12-month period in the 24 months preceding their application for enrollment, been primarily employed in agricultural labor that is characterized by chronic unemployment or underemployment due to the seasonal or migratory nature of the work. Individuals must also be legally available for work and males must not have violated the Selective Service Act registration requirement. Multiple groups (no specific target group) Career and Technical Education—Basic Grants to States (Education) To develop more fully the academic, career, and technical skills of secondary and postsecondary students who elect to enroll in career and technical education programs. A wide range of students pursuing academic and career and technical education will benefit. Community-Based Job Training Grants (Labor) Workers must have the skills needed to secure good jobs and pursue careers in high-growth, high-demand industries. 
Community colleges are important training providers for workers needing to develop, retool, refine, and broaden their skills in high-growth, high-demand occupations because of their close connection to local labor markets. Community-Based Job Training Grants strengthen the role of community colleges in promoting the U.S. workforce's full potential. Community-Based Job Training Grants are awarded through a competitive process to support workforce training for workers to prepare them for careers in high-growth industries through the national system of community and technical colleges. Eligible individuals include workers whose skills for their current job have changed; untapped labor pools, such as immigrant workers, individuals with disabilities, veterans, older workers, and youth; and entry-level workers who need basic skills and/or specific occupational skill training. Community Services Block Grant (Health and Human Services) States make grants to qualified locally based nonprofit community antipoverty agencies and other eligible entities that provide services to low-income individuals and families. Among the program's other purposes are to make more effective use of other related programs and to provide, on an emergency basis, for services to low-income individuals. The official poverty line, as established by the Secretary of Health and Human Services, is used as a criterion of eligibility in the Community Services Block Grant program. When a state determines that it serves the objectives of the block grant, it may revise the income limit, not to exceed 125 percent of the official poverty line. Under the Recovery Act, states were able to revise the income limit to not exceed 200 percent of the official poverty line for fiscal years 2009 and 2010. Employment Service/Wagner-Peyser Funded Activities (Labor) To assist persons to secure employment and workforce information by providing a variety of job search assistance and information services without charge to job seekers and to employers seeking qualified individuals to fill job openings. All employers seeking workers, persons seeking employment, and associated groups. Priority of service is given to veterans and other covered persons. Veterans receive priority referral to jobs, as well as specialized employment services and assistance. The Wagner-Peyser program also administers the work test for state unemployment compensation systems and provides job search and placement services for unemployment insurance claimants. The H-1B Job Training Grants Program funds projects that provide training and related activities to workers to assist them in gaining the skills and competencies needed to obtain or upgrade employment in high-growth industries or economic sectors. Generally, the scope of potential trainees under these programs can be very broad. Please review the Solicitation for Grant Application for specific requirements. Training may be targeted to a wide variety of populations, including unemployed individuals and incumbent workers. Registered Apprenticeship and Other Training (Labor) To stimulate and assist industry in the development, expansion, and improvement of registered apprenticeship and other training programs designed to provide the skilled workers required by U.S. employers, ensure equal employment opportunities in registered apprenticeship, and ensure the quality of all new and existing registered apprenticeship programs.
Individuals applying for acceptance into an apprenticeship training program must be at least 16 years old and must satisfy the apprenticeship program sponsor that they have sufficient ability, aptitude, and education to master the rudiments of the trade/occupation and to satisfactorily complete the related theoretical instruction required in the program. To assist members of households participating in SNAP in gaining skills, training, work, or experience that will increase their ability to obtain regular employment. Households may have no more than $2,000 in countable resources, such as a bank account ($3,000 if at least one person in the household is age 60 or older, or is disabled). Certain resources are not counted, such as a home and lot. Special rules are used to determine the resource value of vehicles owned by household members. The gross monthly income of most households must be 130 % or less of the federal poverty guidelines ($2,389 per month for a family of four in most places, effective Oct. 1, 2009 through Sept. 30, 2010). Gross income includes all cash payments to the household, with a few exceptions specified in the law or the program regulations. Net monthly income must be 100 % or less of federal poverty guidelines ($1,838 per month for a household of four in most places, effective Oct. 1, 2009 through Sept. 30, 2010). Net income is figured by adding all of a household’s gross income, and then taking a number of approved deductions for child care, some shelter costs, and other expenses. Households with an elderly or disabled member are subject only to the net income test. Most able- bodied adult applicants must meet certain work requirements. All household members must provide a Social Security number or apply for one. SNAP participants who are not exempt from work requirements must participate in an Employment and Training (E&T) Program if referred. SNAP participants may also volunteer for the E&T Program, but the state agency decides who it will serve. This program provides assistance to state eligible agencies to award grants to consortia of local agencies and postsecondary education institutions for the development and operation of programs consisting of at least 2 years of secondary education and at least 2 years of postsecondary education or an apprenticeship program that follows secondary education. These programs provide Tech-Prep education to students, leading to a technical skills proficiency, an industry- recognized credential, a certificate, or a degree in a specific career field. Students desiring to participate in a combined secondary/postsecondary program leading to a technical skill proficiency, postsecondary degree, or 2-year certificate with technical preparation in at least one field of engineering, applied science, mechanical, industrial, or practical art or trade, or agriculture, health, or business will benefit. Temporary Assistance for Needy Families (HHS) To provide grants to states, territories, the District of Columbia, and federally recognized Indian Tribes operating their own Tribal TANF programs to assist needy families with children so that children can be cared for in their own homes; to reduce dependency by promoting job preparation, work, and marriage; to reduce and prevent out-of- wedlock pregnancies; and to encourage the formation and maintenance of two-parent families. Needy families with children, as determined eligible by the state, territory, or tribe in accordance with the state or tribal plan submitted to HHS. 
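As an arithmetic illustration of the SNAP gross and net monthly income tests summarized above under the SNAP Employment and Training (E&T) entry, the sketch below applies the limits cited for a four-person household (effective October 1, 2009, through September 30, 2010). The household income and deduction figures are hypothetical, and the sketch omits the resource, work-requirement, and other eligibility rules.

```python
# Hypothetical illustration of the SNAP income tests for a household of four.
GROSS_LIMIT = 2389  # 130 percent of the federal poverty guidelines, monthly
NET_LIMIT = 1838    # 100 percent of the federal poverty guidelines, monthly

def passes_income_tests(gross_monthly_income, approved_deductions,
                        has_elderly_or_disabled_member=False):
    """Households with an elderly or disabled member face only the net test;
    other households must pass both the gross and net tests."""
    net_monthly_income = gross_monthly_income - approved_deductions
    passes_net = net_monthly_income <= NET_LIMIT
    if has_elderly_or_disabled_member:
        return passes_net
    return gross_monthly_income <= GROSS_LIMIT and passes_net

# A hypothetical household of four with $2,300 gross income and $500 in
# approved deductions passes both tests (2,300 <= 2,389 and 1,800 <= 1,838).
print(passes_income_tests(2300, 500))  # True
```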
The purpose of this program is to improve the quality of the workforce, reduce welfare dependency, and enhance the productivity and competitiveness of the nation’s economy by providing workforce investment activities that increase the employment, retention, and earnings of participants, and increase occupational skill attainment by the participants. This program is designed to increase employment, as measured by entry into unsubsidized employment, retention in unsubsidized employment after entry into employment, and earnings. All adults 18 years and older are eligible for core services. Priority for intensive and training services must be given to recipients of public assistance and other low-income individuals where funds are limited. States and local areas are responsible for establishing procedures for applying the priority requirements. Priority of service is given to veterans and other covered persons. American Indian Vocational Rehabilitation Services (Education) To provide vocational rehabilitation services to American Indians with disabilities who reside on federal or state reservations in order to prepare them for suitable employment. American Indians with disabilities residing on or near a federal or state reservation (including Native Alaskans) who meet the definition of an individual with a disability in Section 7 (8)(A) of the Rehabilitation Act. Career and Technical Education—Indian Set-aside (Education) To make grants to or enter into contracts with Indian tribes, tribal organizations, and Alaska Native entities to plan, conduct, and administer programs or portions of programs authorized by and consistent with the Carl D. Perkins Career and Technical Education Act of 2006. Members of federally-recognized Indian tribes, tribal organizations, Alaska Native entities, and certain schools funded by the Department of the Interior’s Bureau of Indian Education. To provide vocational training and employment opportunities to eligible American Indians and reduce federal dependence. Members of federally recognized Indian Tribes who are unemployed, underemployed, or in need of training to obtain reasonable and satisfactory employment. Complete information on beneficiary eligibility is found in 25 CFR, Parts 26 and 27. Indian Vocational Training—United Tribes Technical College (Interior) To provide vocational training to individual American Indians through the United Tribes Technical College, located in Bismarck, North Dakota. Individual American Indians who are members of a federally recognized Indian Tribe and reside on or near an Indian reservation under the jurisdiction of the Bureau of Indian Affairs. Complete information on beneficiary eligibility is found in 25 CFR, Parts 26 and 27. To support employment and training activities for Indian, Alaska Native, and Native Hawaiian individuals in order: to develop more fully the academic, occupational, and literacy skills of such individuals; to make such individuals more competitive in the workforce; and to promote the economic and social development of Indian, Alaska Native, and Native Hawaiian communities in accordance with the goals and values of such communities. All programs assisted under this section shall be administered in a manner consistent with the principles of the Indian Self- Determination and Education Assistance Act (25 U.S.C. 450 et seq.) and the government-to- government relationship between the federal government and Indian tribal governments. 
Supplemental youth funding is also awarded to help low-income Native American youth and Native Hawaiian youth, between the ages of 14 and 21, acquire the educational skills, training, and the support needed to achieve academic and employment success and successfully transition to careers and productive adulthood. American Indians (members of federally recognized and state Indian tribes, bands, and groups); other individuals of Native American descent, such as, but not limited to, the Klamaths in Oregon, Micmac and Maliseet in Maine, the Lumbees in North Carolina and South Carolina; Indians variously described as terminated or landless, Eskimos and Aleuts in Alaska, and Hawaiian Natives. (“Hawaiian Native” means an individual any of whose ancestors were natives prior to 1778 of the area which now comprises the State of Hawaii.) Applicants must also be economically disadvantaged, unemployed, or underemployed. A Native American grantee may in some cases enroll participants who are not economically disadvantaged, unemployed, or underemployed in upgrading and retraining programs. See 20 CFR 668.300(b)(4) and (5). Native American youth between the ages of 14 and 21 who live on or near a reservation or in the States of Oklahoma, Alaska, and Hawaii and are low-income, are eligible to receive supplemental youth services. Under the Recovery Act, the program’s supplemental youth eligibility age requirements were extended to 24. Native Hawaiian Career and Technical Education (Education) To make grants to organizations primarily serving and representing Native Hawaiians for programs or portions of programs authorized by, and consistent with, the Carl D. Perkins Career and Technical Education Act. Native Hawaiians served by eligible entities will benefit. Eligible entities are community-based organizations primarily serving and representing Native Hawaiians. For purposes of this program, a community-based organization means a public or private nonprofit organization that provides career and technical education, or related services to individuals in the Native Hawaiian community. Any eligible community-based organization may apply individually or with one or more eligible community-based organizations or as a member of a consortium. To allow eligible Indian Tribes and Alaska Native organizations to operate a program to make work activities available. Service areas and populations as designated by the eligible Indian Tribe or Alaska Native organization. Tribally Controlled Postsecondary Career and Technical Institutions (Education) To make grants to tribally controlled postsecondary vocational and technical institutions to provide career and technical education services and basic support for the education and training of Indian students. American Indians served by eligible entities will benefit. Eligible entities are Tribally Controlled Postsecondary Career and Technical Institutions that receive no funds from either the Tribally Controlled College or University Assistance Act of 1978 (25 U.S.C. 1801 et seq.) or the Navajo Community College Act (25 U.S.C. 640a et seq.). To create and expand job and career opportunities for individuals with disabilities in the competitive labor market by partnering with private industry to provide job training and placement and career advancement services. 
An individual is eligible for services under this program if the individual to be served is an individual with a disability or an individual with a significant disability, as defined in Sections 7(20)(A) and 7(21)(A), respectively, of the Rehabilitation Act of 1973, as amended. In making this determination, the state vocational rehabilitation unit shall rely on the determination made by the recipient of the grant under which the services are provided, to the extent that the determination is appropriate, available, and consistent with the requirements of the Act. Rehabilitation Services—Vocational Rehabilitation Grants to States (Education) To assist states in operating comprehensive, coordinated, effective, efficient, and accountable programs of vocational rehabilitation; to assess, plan, develop, and provide vocational rehabilitation services for individuals with disabilities, consistent with their strengths, resources, priorities, concerns, abilities, and capabilities so they may prepare for and engage in competitive employment. Eligibility for vocational rehabilitation services is based on the presence of a physical and/or mental impairment, which for such an individual constitutes or results in a substantial impediment to employment, and the need for vocational rehabilitation services that may be expected to benefit the individual in terms of an employment outcome. To provide grants for time-limited services leading to supported employment for individuals with the most severe disabilities to enable such individuals to achieve the employment outcome of supported employment. Individuals with the most severe disabilities whose ability or potential to engage in a training program leading to supported employment has been determined by evaluating rehabilitation potential. In addition, individuals must need extended services in order to perform competitive work and have the ability to work in a supported employment setting. Grants to States for Workplace and Community Transition Training for Incarcerated Individuals (Education) To assist and encourage incarcerated individuals who have obtained a secondary school diploma or its recognized equivalent to acquire educational and job skills through coursework to prepare such individuals to pursue a postsecondary education certificate, an associate's degree, or a bachelor's degree while in prison, or employment counseling and other related services which start during incarceration and end not later than 2 years after release from incarceration. An incarcerated individual who has obtained a secondary school diploma or its recognized equivalent shall be eligible for participation if such individual (1) is eligible to be released within 7 years (including an incarcerated individual who is eligible for parole within such time); (2) is 35 years of age or younger; and (3) has not been convicted of (A) a 'criminal offense against a victim who is a minor' or a 'sexually violent offense,' as such terms are defined in the Jacob Wetterling Crimes Against Children and Sexually Violent Offender Registration Act (42 U.S.C. 14071 et seq.); or (B) murder, as described in section 1111 of title 18, United States Code. This program includes both Prisoner Reentry Initiative (PRI) grants to serve adult returning offenders and Youthful Offender grants aimed at youth involved or at risk of involvement in crime and violence. The objectives of the PRI grants include increasing the employment rate, employment retention rate, and earnings of released prisoners, and decreasing their recidivism.
The objectives of the Youthful Offender grants include preventing in-school youth from dropping out of school, increasing the employment rate of out-of-school youth, increasing the reading and math skills of youth, reducing the involvement of youth in crime and violence, and reducing the recidivism rate of youth. PRI grants serve individuals, 18 years old and older, who have been convicted as an adult and have been imprisoned for violating a state or federal law, and who have never been committed a sex-related offense. Depending on the solicitation, enrollment may be limited based on whether the presenting offense was violent or whether the individual has previously committed a violent crime. Individuals eligible for Youthful Offender grants vary depending on the solicitation. To facilitate inmates’ successful reintegration into society. This initiative is a comprehensive effort that addresses both juvenile and adult populations of serious, high- risk offenders. Phase 1 programs are designed to prepare offenders to reenter society and the services provided include job training. Phase 2 programs work with offenders prior to and immediately following their release from correctional institutions and the services provided include job-skills development. Phase 3 programs connect individuals who have left the supervision of the justice system with a network of social service agencies and community- based organizations to provide ongoing services and mentoring relationships. The target population for the initiative must be a specific subset of the population of individuals aged 18 and older convicted as an adult and imprisoned in a state, local, or tribal prison or jail. Refugee and Entrant Assistance—Social Services Program (HHS) The Refugee Social Services Program is part of the Division of Refugee Assistance and allocates formula funds to states to serve refugees who have been in the U.S. less than 60 months (5 years). This program supports employability services and other services that address participants’ barriers to employment such as social adjustment services, interpretation and translation services, day care for children, citizenship and naturalization services, etc. Employability services are designed to enable refugees to obtain jobs within 1 year of becoming enrolled in services. Refugees who have been in the U.S. less than 60 months (5 years). Service priorities are (a) all newly arriving refugees during their first year in the U.S. who apply for services; (b) refugees who are receiving cash assistance; (c) unemployed refugees who are not receiving cash assistance; and (d) employed refugees in need of services to retain employment or to attain economic independence. The Targeted Assistance Discretionary Program is part of the Division of Refugee Assistance and provides grants to states and state-alternative programs to address the employment needs of refugees that cannot be met with the Formula Social Services or Formula Targeted Assistance Grant programs. Activities under this program are for the purpose of supplementing and/or complementing existing employment services to help refugees achieve economic self- sufficiency. Services funded through the targeted assistance program are required to focus primarily on those refugees who, either because of their protracted use of public assistance or difficulty in securing employment, continue to need services beyond the initial years of resettlement. 
This funding requirement also promotes the provision of services to refugees who are "hard to reach" and who thus face greater difficulty integrating. Refugees residing in the U.S. longer than 5 years, refugee women who are not literate in their native language, as well as the elderly are some of the special populations served by this discretionary grant program. Refugee and Entrant Assistance—Targeted Assistance Grants (HHS) To provide funding for employment-related and other social services for refugees, asylees, certain Amerasians, victims of a severe form of trafficking, entrants, and Iraqi and Afghan special immigrants in areas of high refugee concentration and high welfare utilization. Persons admitted to the U.S. within the last 5 years as refugees under Section 207 of the Immigration and Nationality Act; granted asylum under Section 208 of the Act; Cuban and Haitian entrants, as defined in Section 501 of the Refugee Education Assistance Act; and certain Amerasians from Vietnam and their accompanying family members, as defined by Section 584(c) of the Foreign Operations, Export Financing, and Related Programs Appropriations Act, 1988. Victims of a severe form of trafficking who have received a certification or letter of eligibility from ORR. Refugee and Entrant Assistance—Voluntary Agency Matching Grant Program (HHS) To assist refugees in becoming self-supporting and independent members of American society, by providing grant funds to private nonprofit organizations to support case management, transitional assistance, and social services for new arrivals. Refugees must be enrolled within 31 days of arrival. Entrants/asylees must be enrolled within 31 days of granting of parole or asylum. To provide intensive services to meet the employment needs of disabled and other eligible veterans with maximum emphasis on meeting the employment needs of those who are economically or educationally disadvantaged, including homeless veterans and veterans with barriers to employment. Eligible veterans and eligible persons with emphasis on Special Disabled veterans, disabled veterans, economically or educationally disadvantaged veterans, and veterans with other barriers to employment. Homeless Veterans Reintegration Project (Labor) To provide services to assist in reintegrating homeless veterans into meaningful employment within the labor force and to stimulate the development of effective service delivery systems that will address the complex problems facing homeless veterans. Individuals who are homeless veterans. The term "homeless" or "homeless individual" includes: (1) an individual who lacks a fixed, regular, and adequate nighttime residence; and (2) an individual who has a primary nighttime residence that is: (a) a supervised publicly or privately operated shelter designed to provide temporary living accommodations, including welfare hotels, congregate shelters, and transitional housing for the mentally ill; (b) an institution that provides a temporary residence for individuals intended to be institutionalized; or (c) a public or private place not designed for, or ordinarily used as, a regular sleeping accommodation for human beings (Reference: 42 U.S.C. 11302). A "veteran" is an individual who served in the active military, naval, or air service, and who was discharged or released therefrom under conditions other than dishonorable. (Reference: 38 U.S.C. 101(2)).
Local Veterans’ Employment Representative Program (Labor) Conduct outreach and provide seminars to employers which advocates hiring of veterans; facilitate Transition Assistance Program (TAP) employment workshops to transitioning service members; establish and conduct job search workshops; facilitate employment, training, and placement services furnished to veterans in a state under the applicable state employment service or one-stop career center delivery systems whose sole purpose is to assist veterans in gaining and retaining employment. Eligible veterans and eligible persons. To provide employment instruction, information, and assistance to separating and retiring military personnel and their spouses through domestic and overseas installations and/or facilities by offering job search and other related services. Service members within 2 years of retirement or 1 year of separation and their spouses. Veterans’ Workforce Investment Program (Labor) To provide services to assist in reintegrating eligible veterans into meaningful employment within the labor force and to stimulate the development of effective service delivery systems that will address the complex problems facing eligible veterans. Service-connected disabled veterans, veterans who have significant barriers to employment, veterans who served on active duty in the armed forces during a war or in a campaign or expedition for which a campaign badge has been authorized, and veterans who are recently separated from military service (48 months). Vocational Rehabilitation for Disabled Veterans (VA) To provide all services and assistance necessary to enable service-disabled veterans and service persons hospitalized or receiving outpatient medical care services or treatment for a service- connected disability pending discharge to gain and maintain suitable employment. When employment is not reasonably feasible, the program can provide the needed services and assistance to help the individual achieve maximum independence in daily living. Veterans of World War II and later service with a service-connected disability or disabilities rated at least 20 % compensable and certain service-disabled servicepersons pending discharge or release from service if VA determines the servicepersons will likely receive at least a 20 % rating and they need vocational rehabilitation because of an employment handicap. Veterans with compensable ratings of 10 % may also be eligible if they are found to have a serious employment handicap. To receive an evaluation for vocational rehabilitation services, a veteran must have received, or eventually receive, an honorable or other than dishonorable discharge, have a VA service-connected disability rating of 10 % or more, and apply for vocational rehabilitation services. To utilize qualified youth or conservation corps to carry out appropriate conservation projects which the Secretary is authorized to carry out under other authority of law on public lands. Work cooperatively with the National Park Service (NPS) on cultural- and natural resource-related conservation projects such as trail development and maintenance; historic, cultural, forest and timber management; minor construction work; archaeological conservation; and native plant habitat restoration and rehabilitation. 
Promote and stimulate public purposes such as education, job training, development of responsible citizenship, productive community involvement, and further the understanding and appreciation of natural and cultural resources through the involvement of youth and young adults in care and enhancement of public resources. Continue the longstanding efforts of the NPS to provide opportunities for public service, youth employment, minority youth development and training, and participation of young adults in accomplishing conservation- related work. Private nonprofit institutions and organizations, state and local government agencies, and quasi- public nonprofit institutions and organizations that support youth career training and development in the areas of resource management, conservation, and cultural resources; individuals/families; graduate students; youth or corps located in a specific area that have a substantial portion of members who are economically physically, or educationally disadvantaged (Public Land Corps Act of 1993); general public, specifically, young people, minority groups, social and economically disadvantaged individuals will benefit from the education and skill development in the area of conservation as well as instilling conservation ethics. Job Corps is the nation’s largest federally funded training program that provides at-risk youth, ages 16-24, with academic instruction, toward the achievement of a High School Diploma or GED, and career training in high-growth, high-demand industries. Upon exit from the program, participants receive transition assistance to employment, higher education, or the military. The program is primarily residential, serving more than 60,000 students at 123 centers nationwide. To be eligible to become an enrollee, an individual shall be: (1) not less than age 16 and not more than age 21 on the date of enrollment, except that (A) not more than 20 % of the individuals enrolled in the Job Corps may be not less than age 22 and not more than age 24 on the date of enrollment; and (B) either such maximum age limitation may be waived by the Secretary, in accordance with regulations of the Secretary, in the case of an individual with a disability; (2) a low-income individual; and (3) an individual who is one or more of the following: (A) basic skills deficient; (B) a school dropout; (C) homeless, a runaway, or a foster child; (D) a parent; (E) an individual who requires additional education, vocational training, or intensive counseling and related assistance, in order to participate successfully in regular schoolwork or to secure and hold employment. The Secretary of Defense may use the National Guard to conduct a civilian youth opportunities program, to be known as the “National Guard Youth Challenge Program, which shall consist of at least a 22-week residential program and a 12-month postresidential mentoring period. The program shall seek to improve life skills and employment potential of participants by providing military-based training and supervised work experience, together with the core components of assisting participants to receive a high school diploma or its equivalent, leadership development, promoting fellowship and community service, developing life coping skills and job skills, and improving physical fitness and health and hygiene. A school dropout from secondary school shall be eligible to participate in the program. 
The Secretary of Defense shall prescribe the standards and procedures for selecting participants from among school dropouts. Selection of participants for the program established by the Secretary of Defense shall be from applicants who meet the following eligibility standards: (a) 16-18 years of age at time of entry into the program; (b) a school dropout from secondary school; (c) a citizen or legal resident of the United States; (d) unemployed or underemployed; (e) not currently on parole or probation for other than juvenile status offenses, not awaiting sentencing, and not under indictment, accused, or convicted of a felony; (f) free from use of illegal drugs or substances; (g) physically and mentally capable to participate in the program in which enrolled with reasonable accommodation for physical and other disabilities; and (h) application procedures shall, to the fullest extent possible, attempt to reach and include economically and educationally disadvantaged groups. To help low-income youth, between the ages of 14 and 21, acquire the educational and occupational skills, training, and support needed to achieve academic and employment success and successfully transition to careers and productive adulthood. Under the Recovery Act, any youth activities under WIA were allowable activities. While the Act did not limit the use of Recovery Act funds to summer employment, the congressional intent was to offer expanded summer employment opportunities for youth. ETA strongly encouraged states and local areas to use as much of the Recovery Act funds as possible to operate expanded summer youth employment opportunities during the summer of 2009, and to provide as many youth as possible with summer employment opportunities and work experiences throughout the year, ensuring that these summer employment opportunities and work experiences were high quality. ETA also expressed an interest in and encouraged states and local areas to develop work experiences and other activities that exposed youth to opportunities in “green” educational and career pathways. An eligible youth is an individual who: (1) is 14 to 21 years of age; and (2) is an individual who received an income or is a member of a family that received a total family income that, in relation to family size, does not exceed the higher of (a) the poverty line; or (b) 70 % of the lower living standard income; and (3) meets one or more of the following criteria: is an individual who is deficient in basic literacy skills; a school dropout; homeless; a runaway; a foster child; pregnant or a parent; an offender; or requires additional assistance to complete their education or secure and hold employment. There is an exception to permit youth who are not low-income individuals to receive youth services. Up to 5 % of youth participants served by youth programs in a local area may be individuals who do not meet the income criterion for eligible youth, provided that they are within one or more of the following categories: school dropout; basic skills deficient; are one or more grade levels below the grade level appropriate to the individual’s age; pregnant or parenting; possess one or more disabilities, including learning disabilities; homeless or runaway; offender; or face serious barriers to employment as identified by the local board. Under the Recovery Act, age eligibility for youth services funded by the Recovery Act increased from 21 to 24. 
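The WIA youth age, income, and barrier criteria just described can be restated as a simple screening illustration. In the sketch below, the poverty line and lower living standard income level figures are hypothetical placeholders that in practice vary by family size and locality, the exception permitting a limited share of youth who are not low-income is not modeled, and the code is not part of any actual intake process.

```python
# Hypothetical illustration of the WIA youth eligibility screen.
BARRIERS = {"basic skills deficient", "school dropout", "homeless", "runaway",
            "foster child", "pregnant or parenting", "offender",
            "requires additional assistance"}

def income_limit(poverty_line, lower_living_standard):
    # Family income may not exceed the higher of the poverty line
    # or 70 percent of the lower living standard income level.
    return max(poverty_line, 0.70 * lower_living_standard)

def is_eligible_youth(age, family_income, barriers,
                      poverty_line, lower_living_standard, recovery_act=False):
    max_age = 24 if recovery_act else 21  # Recovery Act raised the ceiling to 24
    if not 14 <= age <= max_age:
        return False
    if family_income > income_limit(poverty_line, lower_living_standard):
        return False
    return bool(barriers & BARRIERS)  # at least one qualifying barrier

# A hypothetical 17-year-old school dropout in a family of four with $21,000
# in family income, assuming a $22,050 poverty line and a $30,000 lower living
# standard income level for that family size and area.
print(is_eligible_youth(17, 21000, {"school dropout"}, 22050, 30000))  # True
```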
Grant funds will be used to provide disadvantaged youth with: the education and employment skills necessary to achieve economic self-sufficiency in occupations in high demand and postsecondary education and training opportunities; opportunities for meaningful work and service to their communities; and opportunities to develop employment and leadership skills and a commitment to community development among youth in low-income communities. As part of their programming, YouthBuild grantees will tap the energies and talents of disadvantaged youth to increase the supply of permanent affordable housing for homeless individuals and low-income families and to assist youth in developing the leadership, learning, and high-demand occupational skills needed to succeed in today's global economy. An eligible youth is an individual who is: (1) between the ages of 16 and 24 on the date of enrollment; and (2) a member of a disadvantaged youth population such as a member of a low-income family, a youth in foster care (including youth aging out of foster care), a youth offender, a youth who is an individual with a disability, a child of an incarcerated parent, or a migrant youth; and (3) an individual who has dropped out of high school and re-enrolled in an alternative school, if that re-enrollment is part of a sequential service strategy. Up to (but not more than) 25 % of the participants in the program may be youth who do not meet the education and disadvantaged criteria above but who: (1) are basic skills deficient, despite attainment of a secondary school diploma, General Education Development (GED) credential, or other state-recognized equivalent (including recognized alternative standards for individuals with disabilities); or (2) have been referred by a local secondary school for participation in a YouthBuild program leading to the attainment of a secondary school diploma. For this purpose, an alternative school includes a school that provides a year or more of educational services prior to entry into the formal YouthBuild program supported by Labor's Employment and Training Administration; this definition is intended to encompass a charter school that is connected to a YouthBuild program. Brownfield Job Training Cooperative Agreements (Environmental Protection) The objective of the Brownfield Job Training Program is to recruit, train, and place unemployed and underemployed, predominantly low-income and minority, residents of Brownfield-impacted communities with the skills needed to obtain full-time, sustainable employment in Brownfield assessment and cleanup activities and the environmental field. The Brownfield Job Training Program promotes the facilitation of assessment, remediation, or preparation of Brownfield sites. Job training grants will provide environmental job training and help individuals of Brownfield neighborhoods take advantage of job opportunities created as a result of the assessment and cleanup of Brownfield properties. In addition, this program benefits industry by increasing the supply of skilled labor for firms that engage in environmental assessment and cleanup.
Senior Community Service Employment Program (SCSEP) (Labor) To foster individual economic self-sufficiency; provide training in meaningful part-time opportunities in community service activities for unemployed low-income persons who are 55 years of age or older, particularly persons who have poor employment prospects; and to increase the number of older persons who may enjoy the benefits of unsubsidized employment in both the public and private sectors. Adults 55 years or older with a family income at or below 125 % of the HHS poverty level. Prospective participants must provide documentation relative to age and personal financial status, which is required to determine whether the individual is program eligible. With certain exceptions, the Census Bureau's Current Population Survey definition of income governs the determination of SCSEP applicant income eligibility. Section 518(a)(3)(A) of the Older Americans Act, as amended in 2006, specifies that any income that is unemployment compensation; a benefit received under title XVI of the Social Security Act; a payment made to or on behalf of veterans or former members of the armed forces under the laws administered by the Secretary of Veterans Affairs; or 25 % of a benefit received under title II of the Social Security Act is excluded from SCSEP income eligibility determinations. To promote the recruitment, training, employment, and retention of women in apprenticeship and nontraditional occupations; help women obtain soft skills and industry-specific training; and help employers and labor unions recruit, place, and retain women in registered apprenticeship programs that lead to nontraditional occupations. Women who are seeking to enroll in a preapprenticeship program, an apprenticeship training program, or a nontraditional occupation must be at least 16 years old and must satisfy the apprenticeship program sponsor that they have sufficient ability, aptitude, and education to master the rudiments of the trade/occupation and to satisfactorily complete the related theoretical instruction required in the program. This program's eligibility criteria were modified by the Recovery Act.

Patrick Dibattista (Assistant Director) and Paul Schearf (Analyst-in-Charge) managed all aspects of the assignment. Sherwin Chapman, Caitlin Croake, and Chad Williams made significant contributions to all aspects of this report. In addition, Pamela Davidson provided technical support in design and methodology; Jill Lacey provided technical support in survey design and survey research; Joanna Chan and Julia Kennon provided data analysis; Alex Galuten provided legal support; Mimi Nguyen provided graphic design assistance; and Kathleen van Gelder assisted in message and report development.

Multiple Employment and Training Programs: Funding and Performance Measures for Major Programs. GAO-03-589. Washington, D.C.: April 18, 2003.
Multiple Employment and Training Programs: Overlapping Programs Indicate Need for Closer Examination of Structure. GAO-01-71. Washington, D.C.: October 13, 2000.
Multiple Employment Training Programs: Information Crosswalk on 163 Employment Training Programs. GAO/HEHS-95-85FS. Washington, D.C.: February 14, 1995.
Multiple Employment Training Programs: Major Overhaul Needed to Reduce Costs, Streamline the Bureaucracy, and Improve Results. GAO/T-HEHS-95-53. Washington, D.C.: January 10, 1995.
Multiple Employment Training Programs: Overlap Among Programs Raises Questions About Efficiency. GAO/HEHS-94-193. Washington, D.C.: July 11, 1994.
Multiple Employment Training Programs: Conflicting Requirements Underscore Need for Change. GAO/T-HEHS-94-120. Washington, D.C.: March 10, 1994.
Multiple Employment and Training Programs: Major Overhaul is Needed. GAO/T-HEHS-94-109. Washington, D.C.: March 3, 1994.
Multiple Employment Training Programs: Overlapping Programs Can Add Unnecessary Administrative Costs. GAO/HEHS-94-80. Washington, D.C.: January 28, 1994.
Multiple Employment Training Programs: Conflicting Requirements Hamper Delivery of Services. GAO/HEHS-94-78. Washington, D.C.: January 28, 1994.

Federally funded employment and training programs play an important role in helping job seekers obtain employment. The Departments of Labor, Education, and Health and Human Services (HHS) largely administer these programs. GAO's objectives were to determine: (1) whether the number of federal employment and training programs and funding for them have changed since our 2003 report, (2) what kinds of outcome measures the programs use and what is known about program effectiveness, (3) the extent to which the programs provide similar services to similar populations, (4) the extent to which duplication may exist among selected large programs, and (5) what options exist for increasing efficiencies among these programs. To address these objectives, GAO searched federal program lists, surveyed federal agency officials, reviewed relevant reports and studies, and interviewed officials in selected states. Due to the American Recovery and Reinvestment Act of 2009 (Recovery Act), both the number of federal employment and training programs and the funding for them have increased since our 2003 report, but little is known about the effectiveness of most programs. In fiscal year 2009, 9 federal agencies spent approximately $18 billion to administer 47 programs, an increase of 3 programs and roughly $5 billion since our 2003 report. This increase is due to temporary Recovery Act funding. Nearly all programs track multiple outcome measures, but only five programs have had an impact study completed since 2004 to assess whether outcomes resulted from the program and not some other cause. Almost all federal employment and training programs, including those with broader missions such as multipurpose block grants, overlap with at least one other program in that they provide similar services to similar populations. These programs most commonly target Native Americans, veterans, and youth, and some require participants to be economically disadvantaged. Although the extent to which individuals receive the same employment and training services from the Temporary Assistance for Needy Families (TANF), Employment Service (ES), and Workforce Investment Act Adult (WIA Adult) programs is unknown, the programs maintain separate administrative structures to provide some of the same services, such as job search assistance, to low-income individuals. Agency officials acknowledged that greater administrative efficiencies could be achieved in delivering these services, but said that factors such as the number of clients that any one-stop center can serve and one-stops' proximity to clients, particularly in rural areas, could warrant having multiple entities provide the same services. Options that may increase efficiencies include colocating services and consolidating administrative structures, but implementation may pose challenges.
While WIA Adult and ES services are generally colocated in one-stop centers, TANF employment services are colocated in one-stops to a lesser extent. Florida, Texas, and Utah have consolidated their welfare and workforce agencies, and state officials said this reduced costs and improved services, but they could not provide a dollar figure for cost savings. An obstacle to further progress in achieving greater administrative efficiencies is that little information is available about the strategies and results of such initiatives. In addition, little is known about the incentives states and localities have to undertake such initiatives and whether additional incentives may be needed. Labor and HHS should disseminate information about state efforts to consolidate administrative structures and colocate services and, as warranted, identify options for increasing incentives to undertake these initiatives. In their comments, Labor and HHS agreed that they should disseminate this information. |
FMCSA was established within DOT in January 2000 and was tasked with promoting safe commercial motor vehicle operations and preventing large truck and bus crashes, injuries, and fatalities. The commercial motor carrier industry is a vital part of the U.S. economy and, as of December 2015, FMCSA estimated that there were 551,150 active carriers and approximately 6 million commercial drivers operating in the United States. The domestic commercial motor carrier industry covers a range of businesses, including private and for-hire freight transportation, passenger carriers, and specialized transporters of hazardous materials. These carriers also range from small carriers with only one vehicle that is owned and operated by a single individual, to large corporations that own thousands of vehicles. In carrying out its mission, FMCSA is responsible for four key safety service areas. Registration Services: Motor carriers are required to register with FMCSA; have insurance; and attest that they are fit, willing, and able to follow safety standards. Vehicles must be properly registered and insured with the state of domicile and are subject to random and scheduled inspections by both state and FMCSA agents. Drivers must have a valid commercial driver’s license issued by their state of residence and pass a physical examination as evidenced by a current valid medical card every 2 years. In calendar year 2015, there were 57,358 active interstate new entrant carriers that registered with FMCSA. Inspection Services: Conducting roadside inspections is central to FMCSA’s mission. States and, to a lesser extent, FMCSA staff, perform roadside inspections of vehicles to check for driver and maintenance violations and then provide the data from those inspections to the agency for analysis and determinations about a carrier’s safety performance. FMCSA also obtains data from the reports filed by state and local law enforcement officers when investigating commercial motor vehicle accidents or regulatory violations. The agency provides grants to states that may be used to offset the costs of conducting roadside inspections and improve the quality of the crash data the states report to it. In addition, the field offices in each state, known as divisions, have investigators who conduct compliance reviews of carriers identified by state inspection and other data as unsafe or at risk of being unsafe. FMCSA and its state partners conduct about 3.4 million inspections a year. Compliance Services: FMCSA monitors and ensures compliance with regulations governing both safety and commerce. The compliance review process is performed by safety auditors and investigators who collect safety compliance data by visiting a motor carrier’s location to review safety and personnel records. In the instances of new carriers entering the commercial market, FMCSA audits these carriers within 12 months of service. In 2015, FMCSA conducted 14,656 investigations and 30,000 new entrant safety audits, and sent about 21,000 warning letters. FMCSA uses data collected from motor carriers, federal and state agencies, and other sources to monitor motor carrier compliance with the Federal Motor Carrier Safety Regulations and Hazardous Materials Regulations. These data are also used to evaluate the safety performance of motor carriers, drivers, and vehicle fleets. 
The agency uses the data to characterize and evaluate the safety experience of motor carrier operations to help federal safety investigators focus their enforcement resources by identifying the highest-risk carriers, drivers, and vehicles. Enforcement Services: FMCSA is responsible for bringing legal action against companies that are not in compliance with motor carrier safety policies. In fiscal year 2015, FMCSA closed 4,766 enforcement cases. FMCSA’s estimated budget for fiscal year 2017 is approximately $794.2 million. The agency employs more than 1,000 staff members who are located in its Washington, D.C., headquarters, 4 regional service centers, and 52 division offices. FMCSA’s Chief Information Officer (CIO) oversees the development, implementation, and maintenance of the IT systems and infrastructure that serve as the key enabler in executing FMCSA’s mission. The CIO heads FMCSA’s Office of Information Technology and reports directly to the Chief Safety Officer. This office supports a highly mobile workforce by operating the agency’s field IT network of regional and state service centers, and ensuring that inspectors have the tools and mobile infrastructure necessary to perform their roadside duties. In addition, the office supports FMCSA headquarters, regional, and state service centers, which depend on the agency’s IT infrastructure including servers, laptops, desktops, printers, and mobile devices. Currently, the Office of Information Technology is undergoing a reorganization to establish an Office of the CIO. While a revised structure has been proposed, it has not yet been approved. Of its total budget for fiscal year 2017, FMCSA expects to spend approximately $58 million on IT, of which approximately 60 percent ($34.4 million) is to be spent on the O&M of existing systems. In fiscal year 2013, the Office of Information Technology led an effort to establish a new IT portfolio that was intended to provide FMCSA with the ability to look across the investments in its portfolios and identify the linkages of business processes and strategic improvement opportunities to enhance mission effectiveness. To do so, the office implemented a product development team to integrate activities within and across the portfolio, interacting with business and program stakeholders. Specifically, it established four key safety process areas—registration, inspection, compliance, and enforcement—and two operations process areas—mission support systems and infrastructure. The registration portfolio includes systems that process and review applications for operating authority. The inspection portfolio includes systems that aid inspectors in conducting roadside inspections of large trucks and buses and ensure inspection data are available and usable. The compliance portfolio includes systems that help investigators identify and investigate carriers to ensure they operate safely and maintain the high safety standards needed to remain in the industry. The enforcement portfolio includes systems to assist the agency in ensuring that carriers and drivers are operating in compliance with regulations. The mission support portfolio includes systems and services that crosscut multiple portfolios. The infrastructure portfolio includes those systems that provide support services, hardware, software, licenses, and tools. As of August 2016, FMCSA had identified and categorized 40 investments in its IT portfolio, as described in table 1. 
According to the Acting CIO, by creating the IT portfolio, the agency determined that the functionality of these investments was not redundant, but that the aging legacy systems were in need of modernization. Further, the Acting CIO stated that the agency is planning to consolidate many of the systems that are in O&M, which, as of fiscal year 2016, had a combined cost of $2.9 million. FMCSA has acknowledged the need to upgrade its aging systems to improve data processing and data quality, and reduce system maintenance costs. Accordingly, in 2013, it began a modernization effort that includes both developing new systems and retiring legacy systems for each of its four key safety process areas—registration, inspection, compliance, and enforcement. To modernize its registration systems, in 2013, the agency began developing the URS system to streamline and strengthen the registration process. When fully implemented, URS is intended to replace the current registration systems with a single, online federal system. Program officials stated that the Licensing and Insurance system, Operations Authority Management system, and the registration function in MCMIS are to be retired upon URS’s deployment. The Acting CIO stated that the agency has not determined when URS will be fully deployed. To modernize its inspection systems, FMCSA began planning efforts in 2014 to develop Integrated Inspection Management System (IIMS), which is intended to provide inspectors with a single system to perform checks. As of May 2017, the agency was still in the planning stage of this effort, as it was assessing the current state of its inspection processes and data management systems, and planning to issue a report detailing actions the agency needs to take. According to officials from the Office of Information Technology, subsequent to this report, a detailed analysis will be conducted, including development of acquisition and development plans. According to agency officials, its six operational inspection systems—Query Central, Safety and Fitness Electronic Records, SAFETYNET, Aspen, Inspection Selection System, and Commercial Driver’s License Information System Access—are intended to be retired upon deployment of IIMS. To modernize its compliance systems, FMCSA began developing Sentri 2.1. According to the Acting CIO, the agency’s three legacy compliance systems—ProVu, National Registry of Certified Medical Examiners, and Compliance Analysis and Performance Review Information—are to be retired upon deployment of Sentri 2.1. As of May 2017, agency officials from the Office of Information Technology stated they have stopped the development of Sentri 2.1. To modernize its enforcement systems, FMCSA intends to migrate the functionality of its current enforcement systems into an existing mission support system. Specifically, the functionality of FMCSA’s three operational enforcement systems—CaseRite, Electronic Management Information System, and Uniform Fine Assessment—is to be migrated into its Portal system, which is a website that provides users a single sign-on to access applications. The agency did not provide a date for when this effort is expected to be completed. A federal agency’s ability to effectively and efficiently maintain and modernize its existing IT environment depends, in large part, on how well it employs certain IT management controls, including strategic planning. 
Strategic planning is essential for an agency to define what it seeks to accomplish, identify strategies to efficiently achieve the desired results, and effectively guide modernization efforts. Key elements of IT strategic planning include establishing a plan with well-defined goals, strategies, measures, and timelines to guide these efforts. Our prior work stressed that an IT strategic plan should define the agency’s vision and provide a road map to help align information resources with business strategies and investment decisions. Additionally, as we have previously reported, effective modernization planning is essential. Such planning includes defining the scope of the modernization effort, an implementation strategy, and a schedule, as well as establishing results-oriented goals and measures. However, FMCSA lacks complete plans to guide its systems modernization efforts. Specifically, the agency’s IT strategic plan lacks key elements. While the agency has an IT strategic plan that describes the technical strategy, vision, mission, and direction for managing its IT modernization programs, and defines the strategic goals and objectives to support its mission, the plan lacks timelines to guide its goals and strategies related to integrated project planning and execution, IT security, and innovative IT business solutions, among others. For example, there were no identified milestones for achieving efficient, consolidated, and reliable IT solutions for IT modernization that meet the changing business needs of users and improve safety. The Acting CIO acknowledged that the strategic plan is not complete and that a date by which a revised plan will be completed has not been established. The official further acknowledged that updating the current strategic plan has not been a priority. However, until the agency establishes a complete strategic plan, it is likely to face challenges in aligning its information resources with its business strategies and investment decisions. In addition, FMCSA has not yet developed an effective modernization plan that defines the overall scope, implementation strategy, and schedule for its efforts. According to the Acting CIO, the agency has recognized the need for such a plan and has recently awarded a contract to develop one by June 2017. If FMCSA develops an effective modernization plan and uses it to guide its efforts, it should be better positioned to successfully modernize its aging legacy systems. GAO’s IT investment management framework comprises five progressive stages of maturity that mark an agency’s level of sophistication with regard to its IT investment management capabilities. Such capabilities are essential to the governance of an agency’s IT investments. At the Stage 2 level of maturity, an agency lays the foundation for sound IT investment management to help it attain successful, predictable, and repeatable investment governance processes at the project level. These processes focus on the agency’s ability to select, oversee, and review IT projects by defining and developing its IT governance board(s) and documented processes for directing the governance boards’ operations. 
According to the framework, Stage 2 includes the following three processes: Instituting the investment board: As part of this process, an agency is to establish an investment review board comprised of senior executives, including the agency’s head or a designee, the CIO or other senior executive representing the CIO’s interests, and heads of business units that are responsible for defining and implementing the department’s IT investment governance process. The agency’s IT investment process guidance should lay out the roles of investment review boards, working groups, and individuals involved in the agency’s IT investment processes. Selecting investments that meet business needs: As part of the process for selecting and reselecting investments, an agency is to establish and implement policies and procedures made by senior executives that meet the agency’s needs. This includes selecting projects by identifying and analyzing projects’ risks and returns before committing any significant funds to them and selecting those that will best support the agency’s mission needs. Providing investment oversight: This process includes establishing and implementing policies and procedures for overseeing IT projects by reviewing the performance of projects against expectations and taking corrective action when these expectations are not being met. FMCSA has partially addressed the three processes associated with having a sound governance structure to manage its modernization efforts. Table 2 provides a summary of the extent to which the agency’s IT investment management structure implemented the key processes. With regard to establishing an IT investment review board, FMCSA recently restructured its governance boards. Specifically, in January 2017, FMCSA finalized its IT governance order to have three major governance boards that are to serve as the decision-making structure for how IT investment decisions are made and escalated—the Executive Management Team, the Technical Review Board, and the Change Control Board. At the highest level, the Executive Management Team is to provide strategic direction and decision making for major IT investments. The team, which is to meet at least quarterly, is chaired by the FMCSA Deputy Administrator. Below this team, the Technical Review Board is to provide oversight for all IT investments and is chaired by the Director of the Office of Information Technology Policy, Plans, and Oversight. According to the governance order, this team is to meet monthly. Further, underneath the Technical Review Board is the Change Control Board that has responsibility for reviewing and approving system change requests associated with a new system, a major release or modification to an existing system, a change in contract funding, or a change in contract scope. This board, which also is to meet monthly, is chaired by the Enterprise Architect of the Office of Information Technology Policy, Plans, and Oversight. Figure 1 depicts the agency’s governance structure. Nevertheless, FMCSA has not yet clearly defined roles and responsibilities of all working groups and individuals involved in the agency’s IT governance process. For example, FMCSA’s governance order calls for the Office of Information Technology Policy, Plans, and Oversight to adopt specific IT performance measures, but does not define the manner in which these measures should be tracked. 
Moreover, in August 2016, the agency finalized an order that established 10 integrated functional areas of IT management and provided for the development of an Office of the CIO. However, FMCSA has not yet finalized a new structure for the Office of the CIO or clearly defined how this office and the CIO will manage, direct, and oversee the implementation of these areas as they relate to the agency’s IT governance process. Further, FMCSA officials have not identified time frames for doing so. Without clearly defined roles and responsibilities for the agency’s working groups and individuals involved in the governance process, FMCSA has less assurance that its modernization investments will be reviewed by those with the appropriate authority and aligned with agency goals. With regard to selecting and reselecting IT investments, FMCSA’s January 2017 governance order requires participation and collaboration of the IT system owner, business owner, IT planning staff, and governance boards during the select phases for all investments. However, the agency lacks procedures for selecting new modernization investments and for reselecting investments that are already operational (which make up the majority of the agency’s IT portfolio) for continued funding. For example, the order calls for the Executive Management Team, composed of senior executives, to make decisions regarding the funding of the IT portfolio, among other things, and for the Technical Review Board to provide recommendations to the team on the prioritization of IT investments, including the allocation of funds. However, the order does not specify the procedures for approving the movement of funds within the IT and capital planning and investment control portfolio. According to the Acting CIO, FMCSA is currently drafting procedures for selecting new investments and reselecting investments that are already operational and intends to finalize the procedures by the end of May 2017. Upon establishing and implementing such procedures, FMCSA’s decision makers should have a common understanding of the process and the cost, benefit, schedule, and risk criteria that will be used to reselect IT projects. With regard to IT investment oversight, the agency’s order established policies and procedures to ensure that governance bodies review investments and track corrective actions to closure. However, the policies and procedures for reviewing and tracking actions have not yet been fully implemented by the three governance bodies. For example, the boards have not met regularly to review the performance of IT investments, including those investments that are part of its modernization efforts, against expectations. In particular, in calendar year 2016, the Executive Management Team met once and the Technical Review Board met four times. The Change Control Board was not formally approved until January 2017 and, thus, has held no meetings. Also, while the Technical Review Board met four times in calendar year 2016, none of the meetings discussed the cost, schedule, performance, and risks for FMCSA’s major IT modernization investment, systems in development, or existing systems. For example, in February 2016, the IT Director presented to the board members an overview of the statutory provisions commonly referred to as the Federal Information Technology Acquisition Reform Act and their implications for FMCSA. In April 2016, the board members were provided with an overview of OMB’s regulatory guidance for the budget process. 
In addition, in August 2016, the Technical Review Board met to discuss the planned fiscal year 2017 budget for its IT investments and, in November 2016, the Director of the Office of Information Technology discussed with board members the status of the planning efforts for the IIMS project. The Acting CIO did not attend any of the four meetings. Further, neither the Executive Management Team nor the Technical Review Board discussed with its members the transition of FMCSA’s investments into the cloud environment, including identifying any key risks. For example, in November 2016, over 70 issues regarding the migration effort were identified by the contractor and an FMCSA official, but none were discussed at the Technical Review Board or Executive Management Team meetings. As a result, program officials stated that there were delays to the program’s transition to the cloud environment because additional time was needed to securely migrate data from multiple legacy platforms into a new central database and conduct further testing. Action items have been noted in meeting minutes, but have not been fully addressed or updated to closure. For example, in August 2016, the Capital Planning and Investment Control Coordinator, within the Office of Information Technology, provided an overview of the fiscal year 2017 budget to the Technical Review Board members. As part of this discussion, the Director of the Office of Information Technology stated that, during the next board meeting, additional details would be provided on the planned budget for fiscal year 2018. However, the meeting minutes from November 2016 did not include any evidence that this subject was discussed at the next meeting. These weaknesses were due, in part, to the agency not adhering to its IT orders and governance board charters, which establish FMCSA’s governance structure, as described above. As a result, the agency lacks adequate visibility into and oversight of IT investment decisions and activities, and cannot ensure that its investments are meeting cost and schedule expectations and that appropriate actions are taken if these expectations are not being met. According to OMB guidance, the O&M phase is often the longest phase of an investment and can consume more than 80 percent of the total life-cycle costs. Thus, it is essential that agencies effectively manage this phase to ensure that the investments continue to meet agency needs. As such, OMB and DOT direct agencies to monitor all O&M investments through operational analyses, which should be performed annually. These analyses should include assessments of four key factors: costs, schedules, investment performance (i.e., structured assessments of performance goals), and customer and business needs (i.e., whether the investment is still meeting customer and business needs, including identifying any areas for innovation in the area of customer satisfaction). FMCSA had not fully ensured that the selected systems—Aspen, MCMIS, Sentri 2.0, and URS—were effectively meeting the needs of the agency. Specifically, none of the program offices conducted the required operational analyses for the four systems. The program offices stated that, in lieu of conducting these analyses, they assessed the key factors of costs, schedules, investment performance, and customer and business needs as part of the capital planning and investment control process. Nonetheless, only one program office (URS) partially met the four key factors. 
Table 3 provides a summary of the extent to which the four selected systems implemented the key operational analysis factors. Aspen: The Aspen program office had partially implemented one of the required operational analysis factors and had not implemented the three other factors. Specifically, as part of its plans to modernize this system, FMCSA had taken steps to assess customer and business needs. For example, it reached out to users and found that 33 states use Aspen and the remaining states use their own in-house developed programs or third-party vendor-based systems. However, while the agency collected feedback from users via phone calls and meetings, it had not yet assessed this feedback, including identifying any opportunities for innovation in the areas of customer satisfaction, strategic and business results, and financial performance. In addition, the program office did not assess current costs against life-cycle costs, perform a structured schedule assessment, or compare current performance against cost baseline and estimates developed when the investment was being planned. MCMIS: The MCMIS program office had not implemented any of the required operational analysis factors. Specifically, program officials did not assess current costs against life-cycle costs, perform structured assessments of schedule and performance goals, or identify whether the investment supports business and customer needs and is delivering the services it was designed to, including identifying whether the system overlaps with other systems. This is particularly concerning given that all seven users we interviewed stated that the system does not interact well with other systems and users have to access other systems to gather information that they cannot obtain in MCMIS. Sentri 2.0: Sentri’s program office partially implemented one of the required operational analysis factors and did not implement the three other factors for the component that has been operational since May 2010, also known as Sentri 2.0. Specifically, the program had partially implemented assessments of customer and business needs by reviewing Sentri 2.0 user needs as it develops the business and user requirements for development of Sentri 2.1. However, while all five users we interviewed stated that their feedback regarding Sentri was provided to FMCSA, they were not sure whether the feedback was being implemented. Moreover, the program office had not identified whether the investment supports customer processes, as designed, and is delivering the goods and services it was intended to deliver. In addition, the program did not assess current costs against life-cycle costs or perform structured schedule and performance goal assessments. URS: The URS program office partially implemented four of the required operational analysis factors for functionality of the system that was delivered in December 2015. Specifically, the program office developed a business case that outlines costs, schedules, investment performance goals, and customer and business needs. Additionally, the program office communicated with stakeholders through meetings, conferences, webinars, and call centers. For example, it has hosted over 30 webinars to better understand how the system is working for the users. 
Nevertheless, the program office had not yet conducted an analysis to assess current costs against life-cycle costs, performed a structured assessment of the schedule or performance goals, or ensured the functionality delivered is operating as intended and is meeting user needs. The need for conducting an analysis is particularly pressing for this program since all four system users we interviewed stated that URS is difficult to use and does not work as intended: they stated that they are unable to complete filings, carrier registration, and request changes to DOT numbers. With regard to the deficiencies we identified, the Acting CIO stated that the agency does not yet have FMCSA-specific guidance to assist programs to conduct operational analyses on an annual basis. The Acting CIO stated that FMCSA has drafted guidance, including templates, to assist programs in conducting these analyses and officials in the Office of Information Technology stated that the agency planned to have the guidance finalized by end of June 2017. While finalizing this guidance is a positive step to assist programs in conducting operational analyses, FMCSA does not adequately ensure its systems are effective at meeting user needs. Until FMCSA fully reviews its O&M investments as part of its annual operational analyses, the agency will lack assurance that these systems meet mission needs, and the associated spending could be wasteful. While FMCSA has recognized the need to develop an effective modernization plan and has awarded a contract to do so, it has not completed an IT strategic plan needed for modernizing its existing legacy systems. In addition, while the agency has established governance boards for overseeing IT systems, these boards do not exhibit key processes of a sound governance approach, such as ensuring corrective actions are executed and tracked to closure. Further, FMCSA does not have the processes in place for ensuring that systems currently in use are meeting agency needs or for overseeing its IT portfolio. The four systems we reviewed did not have completed operational analyses that show if a system is, among other things, effective at meeting users’ needs. Until the agency addresses shortcomings in strategic planning, IT governance, and oversight, its progress in modernizing its systems will likely be limited and the agency will be unable to ensure that the systems are working effectively. To help improve the modernization of FMCSA’s IT systems, we are recommending that the Secretary of Transportation direct the FMCSA Administrator to take the following five actions: Update FMCSA’s IT strategic plan to include well-defined goals, strategies, measures, and timelines for modernizing its systems. Ensure that the IT investment process guidance lays out the roles and responsibilities of all working groups and individuals involved in the agency’s governance process. Finalize the restructure of the Office of Information Technology, including fully defining the roles and responsibilities of the CIO. Ensure that appropriate governance bodies review all IT investments and track corrective actions to closure. Ensure that required operational analyses are performed for Aspen, MCMIS, Sentri 2.0, and URS on an annual basis. We provided a draft of this report to the Department of Transportation for review and comment. In its written comments, reproduced in appendix II, the department concurred with our five recommendations. 
The department also described actions that FMCSA has completed or is finalizing to improve its IT strategic planning and investment governance processes. These actions include updating the FMCSA IT strategic plan and finalizing investment review board charters to better define all stakeholders’ roles and responsibilities. Effective implementation of these actions should help FMCSA improve the modernization of its IT systems. In addition to the written comments, the department provided technical comments on the draft report, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Transportation, the Administrator of FMCSA, and other interested parties. This report also is available at no charge on the GAO website at http://www.gao.gov. Should you or your staff have any questions on information discussed in this report, please contact me at (202) 512-4456 or Harriscc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. The Fixing America’s Surface Transportation Act included a provision for us to conduct a comprehensive analysis of the information technology (IT) and data collection management systems of the Federal Motor Carrier Safety Administration (FMCSA) by June 4, 2017. Our objectives were to (1) assess the extent to which the agency has plans to modernize its existing systems, (2) assess the extent to which FMCSA has implemented an IT governance structure, and (3) determine the extent to which FMCSA has ensured selected IT systems are effective. To address the first objective, we obtained and evaluated FMCSA IT systems modernization documentation that discusses future changes to ensure user needs are met, including its IT strategic plan for fiscal years 2014 to 2016 and systems modernization plans. We analyzed whether these plans complied with best practices that we have previously identified. These practices call for developing a strategic plan that includes defining the agency’s vision and providing a road map to help align information resources with business strategies and investment decisions. We also interviewed agency officials, including those from the Office of Information Technology and the Enforcement and Compliance, Information Security, and Privacy divisions, to discuss the agency’s plans to modernize existing systems, including any actions the agency is taking to identify redundancies among the systems and explore the feasibility of consolidating data collection and processing systems. To corroborate this information, we reviewed FMCSA’s budgetary data (i.e., its fiscal year 2016 IT portfolio summary) submitted to the Office of Management and Budget (OMB), which identifies all of the agency’s IT investments, to determine whether it included any potentially redundant systems. Specifically, we reviewed the name and narrative description of each investment’s purpose to identify any similarities among related investments and discussed any potential redundancies with the Acting Chief Information Officer (CIO). For the second objective, we compared agency documentation, including executive board meeting minutes and briefings from fiscal years 2015 and 2016, FMCSA IT governance orders, and charters, against critical processes associated with Stage 2 of GAO’s IT investment management framework. 
In particular, Stage 2 of the framework includes the following key processes for effective governance: instituting the investment board; selecting and reselecting investments that meet business needs; and providing investment oversight. We also interviewed agency officials to better understand FMCSA’s governance structure, which included identifying whether the agency is taking appropriate steps with respect to IT governance. To address the third objective, we selected four existing IT systems to review. In selecting these investments, we analyzed FMCSA’s fiscal year 2016 IT portfolio summary submitted to OMB which included the agency’s existing IT, data collection, processing systems, data correction procedures, and data management systems and programs. To assess the reliability of the OMB budget data, we reviewed related documentation, such as OMB guidance on budget preparation and capital planning. In addition, we corroborated with FMCSA that the data was accurate and reflected the data it had reported to OMB. We determined that the budget data was reliable for our purposes of selecting these systems. Specifically, we used the following criteria to select four systems to review: At least one investment must have been identified as a major IT investment, as defined by OMB. FMCSA had only identified one major IT investment in fiscal year 2016. The remaining non-major systems must have had planned operations and maintenance (O&M) spending in fiscal year 2017. The system is mission critical. The program must not have been included in a recent GAO or inspector general review that examined the program’s effectiveness. Using the above criteria, we selected the following four systems: 1. Aspen: A non-major desktop application that collects commercial driver/vehicle inspection details, performs some immediate data analysis, creates and prints a vehicle inspection report, and transfers inspection data into the FMCSA information systems. 2. Motor Carrier Management Information System (MCMIS): A non- major information system that captures FMCSA inspection, crash, compliance review, safety audit, and registration data. It is FMCSA’s authoritative source for the safety performance records for all commercial motor carriers and hazardous materials shippers. 3. Safety Enforcement Tracking and Investigation System (Sentri): A non-major application used to facilitate safety audits and interventions by FMCSA and state users. It is intended to combine roadside inspection, investigative, and enforcement functions into a single interface. 4. Unified Registration System (URS): A major system that is intended to replace the existing registration systems with a single comprehensive, online system and provide FMCSA-regulated entities a more efficient means of submission and management of data pertaining to registration applications. We then assessed the agency’s efforts to determine the effectiveness of these systems in meeting the needs of the agency by reviewing documentation from the four selected systems and compared it to key factors identified in OMB’s guidance on conducting annual operational analysis, which are a key method for examining the performance of investments with O&M funding. More specifically, we assessed whether FMCSA had conducted an operational analysis on each of the systems. 
For those systems that did not have an analysis performed, we reviewed FMCSA’s IT documentation on the performance of these systems (i.e., business cases and performance management reviews) to determine whether key factors of an operational analysis were conducted. For example, we assessed whether the agency assessed cost, schedule, and investment performance, including its interaction with other systems; and customer and business needs, including adaptability of the system in order to make necessary future changes to ensure user needs are met and areas for innovation in the areas of customer satisfaction. We also conducted interviews with 22 selected system users to obtain insight into whether the identified systems are meeting their needs and any challenges users face in using these systems, including whether the systems are adaptable to future needs and methods to improve user interface. We selected these users based on recommendations from FMCSA program officials and industry stakeholder representatives. Based on these recommendations, we then selected users based on the type of users, including FMCSA users, state agencies, law enforcement officials, and private sector individuals involved in the motor carrier industry. While these user interviews are illustrative, they cannot be used to make generalizable statements about users’ experience as a whole. Based on our work to determine selected programs’ effectiveness, we made recommendations regarding deficiencies identified in the report. We did not make recommendations regarding methods to improve user interfaces since two of the selected systems (Aspen and MCMIS) are planned to be modernized and the remaining two systems (Sentri and URS) have components still under development, as discussed in our report. We conducted this performance audit from April 2016 to July 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact name above, the following staff also made key contributions to this report: Eric Winter (Assistant Director), Niti Tandon (Analyst in Charge), Rebecca Eyler, Lisa Maine, and Tyler Mountjoy. | FMCSA, established within the Department of Transportation in January 2000, is charged with reducing crashes involving commercial motor carriers (i.e., large trucks and buses) and saving lives. IT systems and infrastructure serve as a key enabler for FMCSA to achieve its mission. The agency reported spending about $46 million for its IT investments in fiscal year 2016. In December 2015, the Fixing America's Surface Transportation Act was enacted and required GAO to review the agency's IT, data collection, and management systems. GAO's objectives were to (1) assess the extent to which the agency has plans to modernize its existing systems, (2) assess the extent to which FMCSA has implemented an IT governance structure, and (3) determine the extent to which FMCSA has ensured selected IT systems are effective. 
To do so, GAO analyzed FMCSA's strategic plan and modernization plans; compared governance documentation to best practices; selected four investments based on operations and maintenance spending for fiscal year 2016, among other factors, and compared assessments for the investments against OMB criteria; and interviewed officials. The Federal Motor Carrier Safety Administration (FMCSA) initiated a modernization effort in 2011 and developed an information technology (IT) strategic plan that describes the technical strategy, vision, mission, direction, and goals and objectives to support the agency's mission; however, the plan lacks timelines to guide FMCSA's goals and strategies. In addition, the agency has not completed a modernization plan for its existing IT systems that includes scope, an implementation strategy, schedule, results-oriented goals, and measures, although it has recently awarded a contract to develop such a plan. The Acting Chief Information Officer (CIO) said that updating FMCSA's IT strategic plan had not been a priority for the agency. However, without a complete IT strategic plan, FMCSA will be less likely to move toward its ultimate goal of modernizing its aging legacy systems. FMCSA has begun to address leading practices of IT governance, but its investment governance framework does not adequately establish an investment board, select and reselect investments, and provide investment oversight. Specifically, regarding the practice of establishing an IT investment review board, FMCSA has not yet clearly defined roles and responsibilities for key working groups and individuals, including the Office of the CIO. Regarding selecting and reselecting IT investments, FMCSA requires participation and collaboration during the select phases for all IT investments; however, it lacks procedures for selecting new investments and reselecting investments that are already operational for continued funding. According to the Acting CIO, the agency is currently drafting these procedures and intends to finalize them by the end of May 2017. Regarding the practice of IT investment oversight, the agency has policies and procedures to ensure that corrective actions and related efforts are executed and tracked, but they have not yet been fully implemented by the three boards. These weaknesses are due to the agency not adhering to its IT orders that establish its governance structure. As a result, FMCSA lacks adequate visibility into and oversight of IT investment decisions and activities, which could ultimately hinder its modernization efforts. FMCSA had not fully ensured that the four systems GAO selected to review are effectively meeting the needs of the agency because none of the program offices completed operational analyses as required by the Office of Management and Budget (OMB). However, as part of its capital planning and investment control process, FMCSA assessed the four key factors of an operational analysis—costs, schedules, investment performance, and customer and business needs. One of the selected programs had partially implemented all four of these factors; two programs had partially implemented one factor, and one program had not addressed any of these factors. This was due to FMCSA not having guidance for conducting operational analyses for investments in operations and maintenance. Until FMCSA fully reviews its operational investments, the agency will lack assurance that these systems meet mission needs. 
GAO is making five recommendations to FMCSA to improve its IT strategic planning, oversight, and operational analyses. The Department of Transportation concurred with all of the recommendations. |
In seeking to provide their hospital customers with medical-surgical products at favorable prices, GPOs engage with manufacturers in certain contracting processes and sometimes use certain strategies to obtain price discounts. Many manufacturers bid for GPO contracts because hospital purchases with these contracts may increase manufacturers’ market share. GPOs are subject to federal antitrust laws. A statement developed by enforcement agencies helps GPOs determine whether their business practices are likely to be challenged under the antitrust laws. Many manufacturers use GPO contracts to sell their medical-surgical products. These products include two types—commodities and medical devices. Commodities such as cotton balls and bandages are examples of items for which physicians and other clinicians generally do not have strong preferences. Manufacturers commonly use GPO contracts to sell hospitals these non-preference products because hospitals purchase these items in large quantities. In contrast, medical devices can be “clinical preference” items—that is, those for which physicians and other practitioners are likely to express a preference. High-technology medical devices such as pacemakers and stents are examples of clinical preference items. Some manufacturers prefer to sell these items directly to hospitals. The GPO industry that purchases products for hospitals is large and moderately concentrated. Experts have not determined a precise number of GPOs currently in business, but some estimate that there are hundreds of GPOs. While some GPOs operate regionally, this study focused on seven national GPOs with purchasing volumes over $1 billion that account for more than 85 percent of all hospital purchases nationwide made through GPO contracts. In 2002, the combined purchasing volume of these GPOs totaled about $43 billion, excluding distribution dollars. (See table 1.) Among the GPOs in our study, the two largest GPOs account for about 66 percent of total GPO purchasing volume for all medical products (including, among other things, medical-surgical products, pharmaceuticals, capital equipment, and food). These two GPOs also account for 70 percent of the seven GPOs’ total medical-surgical product volume. One of the two largest GPOs has as members 1,569 of the nation’s approximately 6,900 hospitals; the other has 1,469 hospital members. One of the two largest GPOs permits its members to belong to other national GPOs, whereas the other largest GPO does not. A GPO’s contracting process for manufacturers’ medical-surgical products generally includes several phases—namely, product identification and selection, requests for proposals or invitations to bid, review of submitted proposals and applications, assessment of product quality, contract negotiation, and contract award. The contract negotiation phase may include the negotiation of a contract administrative fee. This fee is designed to cover a GPO’s operating expenses and serves as its main source of revenue. Contract administrative fees are calculated as a percentage of each customer’s purchases of the particular product included in a GPO contract. In negotiating contracts, GPOs use certain contracting strategies as incentives for manufacturers to provide deeper discounts and for hospital members to concentrate purchasing volume to obtain better prices. These strategies are not limited to use by GPOs, as some manufacturers also use them in negotiating contracts with GPOs to increase market share. 
Key contracting strategies include the following: Sole-source contracts give one of several manufacturers of comparable products an exclusive right to sell a particular product through a GPO. Commitment refers to a specified percentage of purchasing volume that, when met by the GPO’s customer (such as a hospital), will result in a deeper price discount. Commitment levels can be set either by the GPO or the manufacturer. For example, a manufacturer might offer greater discounts to GPO customers that purchase at least 80 percent of a certain group of products from that manufacturer. Commitment requirements can also be tiered, resulting in the opportunity for the customer to commit to different percentages of purchasing volume: the higher the percentage, the lower the price. Bundling links price discounts to purchases of a specified group of products. GPOs award several types of bundling arrangements. One type bundles combinations of products from one manufacturer. A manufacturer may find this arrangement advantageous because it allows increased sales of products in the bundle that may not fare well as stand-alone products. Another type bundles products from two or more manufacturers. Also, contracts can be bundled for complementary products, such as protective hats and shoe coverings used in hospital operating rooms, while others bundle unrelated products such as patient gowns and intravenous solutions. Hospitals that purchase bundles of unrelated products receive a price discount on all products included in the bundle. Contracts of long duration—those in effect for 5 years or more—can direct business to manufacturers for an extended period. When used by GPOs with a large market share, these contracting strategies have the potential to reduce competition. For example, if a large GPO negotiates a sole-source contract with a manufacturer, the contract could cause an efficient, competing manufacturer to lose business and exit from the market and could discourage other manufacturers from entering the market. Certain aspects of GPOs’ operations are specifically addressed by federal statute, regulation, and policy. While “anti-kickback” provisions of the Social Security Act prohibit payments in return for orders or purchases of items for which payment may be made under a federal health care program, the act also contains an exception for amounts paid by vendors of goods or services to a GPO. Therefore, GPOs are allowed to collect contract administrative fees from manufacturers and other vendors that could otherwise be considered unlawful. In addition, regulations issued by the Department of Health and Human Services establishing “safe harbors” for purposes of the “anti-kickback” provisions provide that GPOs are to have written agreements with their customers either stating that fees are to be 3 percent or less of the purchase price, or specifying the amount or maximum amount that each vendor will pay. The GPOs must also disclose in writing to each customer, at least annually, the amount received from each vendor with respect to purchases made by or on behalf of the customer. The Office of Inspector General in the Department of Health and Human Services is responsible for enforcing these regulations. Recognizing that GPO arrangements may promote competition among manufacturers and yield lower prices in some cases and may reduce competition in other cases, the U.S. Department of Justice and the Federal Trade Commission issued a statement in 1993 for joint purchasing arrangements. 
This statement sets forth an “antitrust safety zone” for GPOs that meet a two-part test, under which the agencies will not generally challenge GPO business practices under the antitrust laws. Essentially, the two-part test in the context of medical-surgical products is as follows: (1) purchases through the GPO account for less than 35 percent of the total sales of the product in the relevant market, and (2) the cost of the products purchased through the GPO accounts for less than 20 percent of the total revenues from all products sold by each GPO member. In recent years, some manufacturers of medical-surgical products have contended that GPOs employ a slow product selection process and set high administrative fees that have made it difficult for some firms to obtain GPO contracts. These firms tend to be small manufacturers that may have fewer financial resources available than large manufacturers to successfully complete GPOs’ contracting processes. The GPOs we studied reported generally having contracting processes that can be modified for certain types of products. They also reported receiving from manufacturers administrative fees that were generally consistent with federal regulations established by HHS. In discussing GPOs’ selection of products and negotiation of fees, several manufacturers we contacted pointed to the paperwork and duration of these processes as burdensome. Not all manufacturers shared the same perspective. One small manufacturer commented that the process could sometimes be relatively easy but that the selection process can be more difficult if the manufacturer is selling only one product. The GPOs we studied were able to alter the duration of their process for selecting products to place on contract, particularly when they considered these products to be innovative. Based on their reported information, GPOs’ product selection processes generally took 6 months and ranged from as short as 1 month to as long as 18 months. One GPO specifically reported expediting or modifying its formal selection process when it considered a product to be innovative and wanted to award a contract quickly. Most GPOs did not have a distinctly separate process for selecting innovative technology but reported that these products were generally selected in a shorter amount of time compared with other products. Figure 1 shows, across the seven GPOs, the average minimum, most frequent, and maximum times taken for product selection. The GPOs in our study reported consulting various sources before making a decision, including the GPO’s customers requesting the product; published studies about the product; internal and external technology assessments; and different manufacturers of the product, both with and without a GPO contract. In all cases, the GPOs cited customer requests for products as the most important factor in identifying which products to place on contract. In selecting a manufacturer, six of the seven GPOs, including the two largest, solicit proposals publicly, either through requests for proposals or requests for bids, through their Web sites. The extent to which these processes are open to all manufacturers varies by GPO and by product. For example, one of the GPOs solicits proposals publicly for clinical preference products, but not for commodities. GPO-reported information on new contracts awarded in 2002 suggests that GPOs’ solicitations were not limited to manufacturers already on contract. 
Nearly one-third of all the newly negotiated contracts awarded by the seven GPOs in 2002 were awarded to manufacturers with which the GPO had not previously contracted. The percentage of such contracts ranged from 16 percent to 55 percent for the GPOs in our study. For the two largest GPOs, this share was 29 percent and 55 percent. We could not determine, from the information provided, whether these first-time contract awardees were, for example, small manufacturers or companies new to the industry or whether the products purchased through these contracts were clinical preference items or commodities. Manufacturers have expressed concerns that contract administrative fees, which are typically calculated as a percentage of each customer's purchase of products under contract, can be too high for some manufacturers. These fees, combined with the lower prices negotiated by the GPO, may decrease revenue for manufacturers and may make it more difficult for newer and smaller manufacturers with fewer financial resources to obtain a GPO contract than for larger, more established companies. Five of the seven GPOs reported that the maximum contract administrative fee received from manufacturers in 2002 did not exceed the 3-percent-of-purchase-price threshold contained in federal regulations established by HHS. The most frequent administrative fee level that four of the seven GPOs received from manufacturers in 2002 was 2 percent; the lowest fee level received by each GPO was 1 percent or less. Except for one of the two largest GPOs, the GPOs reported that they had not negotiated any new or renewed contracts in 2003 that included administrative fees from medical-surgical product manufacturers exceeding 3 percent. In 2002, fee levels for private-label products—products sold under a GPO's brand name—were an exception: the typical contract administrative fee paid by private-label manufacturers was 5 percent. For one of the two GPOs in our study with private-label products, the maximum administrative fee was nearly 18 percent. In addition to an administrative fee, the other GPO charged a separate "licensing" fee for private-label products. GPOs use certain contracting strategies—which include sole-source contracts, product bundling, and extended contract duration—to obtain discounts from manufacturers in exchange for providing the manufacturer with increased sales from an established customer base. Manufacturers and other industry observers have expressed concerns that use of these strategies by the two largest GPOs can reduce competition. For example, when GPOs with substantial market shares award long-term sole-source contracts to large, well-established manufacturers, some newer, single-product manufacturers—left to compete with other manufacturers for a significantly reduced share of the market—may lose business and be forced to exit the market altogether. The seven GPOs we studied, including the two with the largest market shares, used these contracting strategies to varying degrees. For example, while all study GPOs reported using sole-source contracts, some GPOs, including one of the two largest, used this strategy extensively, whereas others used it on a more limited basis. GPOs also varied in their approach to requiring commitment levels from their customers. With respect to bundling, most GPOs used some form of bundling, and the two largest GPOs used either contracts or programs that bundled multiple products for a notable portion of their business.
With respect to contract duration, the two largest GPOs typically negotiated longer contract terms than the other five GPOs. The use of sole-source contracting by the study GPOs varied widely, both in the relative amount of sole-source contracting they did and in the types of products included in the contracts. For five of the GPOs, sole-source contracts accounted for between 2 percent and 46 percent of their medical-surgical product dollar purchasing volume. For the rest—the two largest GPOs—the shares of dollar purchasing volume accounted for by sole-source contracts were 19 percent and 42 percent. Such levels of sole-sourcing are worth noting, given the sizable market shares of these two GPOs. GPOs also varied in their use of sole-source contracts for commodity products as compared with medical devices for which providers may desire a choice of products. Six of the seven GPOs in our study reported on their use of sole-source contracts for commodity products as compared with clinical preference products. For one of the two largest GPOs, clinical preference products accounted for the bulk—82 percent—of its sole-source dollar purchasing volume. Two GPOs reported cases in which manufacturers refused to contract with the GPO unless they were awarded a sole-source contract. In contrast, commodities accounted for the bulk—between 62 percent and 91 percent—of the dollar purchasing volume that the five smaller GPOs purchased through sole-source contracts. GPO-reported data indicate that the proportion of contracts that were sole-source, as a share of all contracts for medical-surgical products, remained relatively consistent for the GPOs over the past 3 years. The seven GPOs in our study reported that hospital customers' commitment to purchase a certain percentage of their products through GPO contracts was an important factor in obtaining favorable prices from manufacturers, and all reported establishing commitment-level requirements to some degree. Most of the five smaller GPOs reported that customer adherence to commitment levels and contracts was the most important factor in obtaining favorable pricing from manufacturers. In principle, for GPOs with a smaller customer base, the assurance of customer commitment to purchasing helps enable them to achieve the higher volumes needed to leverage favorable prices from manufacturers. The two largest GPOs reported that volume was the most important factor in obtaining favorable prices and that customer compliance with commitment levels and contracts was next in importance. For the two largest GPOs, a sizable customer base may provide the volume levels needed to obtain favorable prices. GPOs varied in their approach to requiring purchasing commitment levels. One GPO requires customers to commit to an overall average dollar purchasing level of 80 percent for those products available through the GPO, although the percentage can vary for individual products. The GPO reported terminating the membership of at least one customer that did not meet this target. Other GPOs reported establishing customer commitment levels in certain contracts in order to obtain a certain price level, but customers were not required to buy under the contract or buy at the commitment level in order to retain GPO membership. Some GPOs' contracts include multiple, or tiered, commitment levels so that customers can choose from a range of commitment levels and obtain price discounts accordingly.
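The tiered-commitment pricing just described amounts to a simple lookup: the higher the share a customer commits to buy through the contract, the lower the negotiated price. The short Python sketch below illustrates that arithmetic only; the tier percentages and unit prices are invented for illustration and are not taken from any GPO's actual contract.

    TIERS = [           # (minimum commitment share, hypothetical price per case)
        (0.95, 9.00),   # commit to 95 percent or more -> deepest discount
        (0.80, 9.50),
        (0.50, 9.80),
        (0.00, 10.00),  # no commitment -> list price
    ]

    def contract_price(commitment_share):
        # Return the price for the highest tier the customer's commitment meets;
        # tiers are ordered from the largest commitment down.
        for minimum, price in TIERS:
            if commitment_share >= minimum:
                return price
        return TIERS[-1][1]

    print(contract_price(0.80))  # 9.5 -- higher commitment, lower price
    print(contract_price(0.40))  # 10.0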
All but one of the GPOs in our study reported using some form of bundling, including the bundling of complementary products, bundling several unrelated products from one manufacturer, and bundling several products for which there are commitment-level requirements. One bundling arrangement that GPOs reported using gave customers a discount when they purchased a bundle of complementary products, such as protective hats and shoe coverings. Four GPOs reported bundling complementary products. These bundles were included in a small percentage of the GPOs' contracts; each of the four GPOs reported having no more than three contracts that bundled complementary products. One GPO reported awarding only one bundling arrangement for two complementary products—the only bundling arrangement the GPO had in effect at the time it reported to us. A second type of bundling, reported by three GPOs, including the two largest, gave customers a discount if they purchased a group of unrelated products from one manufacturer. We defined this type of bundling as a corporate agreement. One of the two largest GPOs reported that corporate agreements for medical-surgical products accounted for about 40 percent of its dollar purchasing volume for medical-surgical products under contracts in effect on January 1, 2003. Four GPOs, including one of the two largest, used a third type of arrangement that typically bundled products from different manufacturers and required customers that chose this arrangement to purchase a certain minimum percentage from the product categories specified in the bundle in order to obtain the discount. We defined this type of bundling as a structured commitment program. A structured commitment program available through one GPO bundled brand-name and GPO private-label items for 12 product categories and had a 95-percent commitment-level requirement. In 2002, one of the two largest GPOs reported receiving about 20 percent of its medical-surgical dollar purchasing volume from its structured commitment programs. The use of bundling arrangements may be declining. For example, data reported by one GPO showed a decline from 2001 to 2003 in the percentage of its contracts that were corporate agreements. This trend was consistent with comments made by one manufacturer and two medical-surgical product distributors. The manufacturer told us that GPOs are less interested in bundling products from different manufacturers together. Two distributors' representatives told us that since the summer of 2002, GPOs have had fewer bundling arrangements and that some bundles were "pulled apart." Our analysis of data reported by the study GPOs showed that, in 2002, the two largest GPOs typically awarded 5-year contracts, whereas the other five GPOs typically awarded 3-year contracts. For some of these contracts, potential renewal periods constitute a portion of the contract duration. Those contract terms remained fairly consistent between 2001 and 2003, although two of the five GPOs reported that their most frequent contract term declined by about 1 year. Some GPOs reported implementing policies that may lead to a future reduction in contract terms. One of the two largest GPOs began in the first quarter of 2003 to exclude from new contracts the option for two 1-year contract extensions, so that when a contract expires, this GPO will solicit proposals for a new contract.
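A structured commitment program of the kind described above implies a straightforward eligibility check: across the bundled product categories, did the customer buy at least the required share (95 percent in the example above) through the program? The Python sketch below illustrates that check with hypothetical categories and spending figures; it is not drawn from any GPO's actual contract terms.

    REQUIRED_SHARE = 0.95   # the 95-percent commitment level cited above

    def meets_structured_commitment(program_spend, total_spend):
        # True only if, in every bundled category, purchases made through the
        # program are at least REQUIRED_SHARE of the customer's total purchases
        # in that category.
        for category, total in total_spend.items():
            bought_in_program = program_spend.get(category, 0.0)
            if total > 0 and bought_in_program / total < REQUIRED_SHARE:
                return False
        return True

    total_spend = {"exam gloves": 100_000, "IV solutions": 250_000}
    program_spend = {"exam gloves": 98_000, "IV solutions": 230_000}
    print(meets_structured_commitment(program_spend, total_spend))
    # False: the IV solutions share (92 percent) falls below the 95-percent requirement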
In response to congressional concerns raised in 2002 about GPOs' potentially anticompetitive business practices, the group purchasing industry's trade association established a code of conduct that directs member GPOs to, among other things, address their contracting processes. The conduct code also includes reporting and education responsibilities for the trade association. The seven GPOs we studied drafted or revised their own codes of conduct, but the conduct codes are not uniform in how they address GPO business practices. Moreover, some GPOs' conduct codes include exceptions and qualified language that can limit the potential of the conduct codes to effect change. It is too soon to evaluate the effectiveness of these codes of conduct in addressing concerns about potentially anticompetitive practices, as many conduct codes were only recently adopted and sufficient time has not elapsed for GPOs to demonstrate results. On July 24, 2002, the Health Industry Group Purchasing Association (HIGPA) adopted a code of conduct providing principles for GPO business practices. HIGPA represents 28 U.S.-based GPOs—including five of the seven major GPOs that we studied. HIGPA members also include health care systems and alliances, manufacturers, and other vendors. The HIGPA code of conduct principles address GPO business practices and actual, potential, or perceived conflicts of interest. Among other things, the HIGPA code of conduct provides that GPOs
- allow hospital and other provider members to purchase clinical preference items directly from all vendors, regardless of whether the vendors have a GPO contract;
- implement an open contract solicitation process that allows any interested vendor to seek contracts with the GPO;
- participate in processes to evaluate and make available innovative products;
- address conflicts of interest, such as disallowing staff in positions of influence over contracting to hold equity interest in, or accept gifts or entertainment from, "participating vendors"; and
- establish accountability measures, such as appointing a compliance officer and certifying annually that the GPO is in compliance with the HIGPA code.
The HIGPA code also includes several provisions regarding the trade association's education and reporting responsibilities, including
- assessing and updating the code of conduct to be consistent with new developments and best business practices;
- implementing industrywide educational programs on clinical innovations, contracting strategies, patient safety, public policy, legal requirements, and best practices;
- making available a Web-based directory that posts manufacturers' and other vendors' product information; and
- publishing an annual report listing GPOs that have certified their compliance for the year with the HIGPA code of conduct.
As of May 19, 2003, HIGPA's 28 U.S.-based GPO members had certified that they were in compliance with the HIGPA code of conduct principles. Although the HIGPA code of conduct laid the groundwork for many GPOs to change their business practices, its guidelines do not comprehensively address certain business practices. Specifically, the HIGPA code of conduct requires GPOs to address business practices associated with contracting, conflicts of interest, and accountability, and it grants GPOs discretion in using contracting strategies. It recommends that GPOs consider factors such as vendor market share, GPO size, and product innovation when using multiple contracting strategies.
However, the HIGPA code of conduct does not directly address levels of contract administrative fees or the offering of private-label products. Since August 2002, the seven GPOs we studied, even those that were not HIGPA members, have drafted and adopted their own codes of conduct or revised their existing conduct codes. One GPO stated that its revised code, while consistent with the HIGPA code, was more specific than HIGPA's principles, particularly in the GPO's rules on stock ownership, travel, and entertainment. Another GPO reported expanding on HIGPA's code by including provisions to cap administrative fees and prohibit bundling. Similarly, GPOs that were not HIGPA members said they had revised their existing codes of conduct and that their conduct codes were in some respects stronger than HIGPA's. Nevertheless, GPOs' individual codes of conduct varied in the extent to which they addressed GPOs' business practices, such as contracting processes and strategies. Figure 2 provides an overview of the seven GPOs' conduct codes with respect to their business practices. The figure indicates whether a business practice was identified in a code of conduct, but not how the practice was to be addressed. As figure 2 shows, the conduct codes of all the study GPOs explicitly mentioned conflict-of-interest issues—such as those dealing with equity holdings and the receipt of gifts and entertainment—and the need for internal accountability. In addition, the conduct codes of most GPOs, including the two largest, included provisions dealing with contracting strategies, such as sole-source contracting and bundling. For GPOs that are HIGPA members, the lack of additional provisions in their individual conduct codes for certain business practices such as contracting processes may not be significant, as provisions covering these areas are included in the HIGPA code. However, for one of our study GPOs that is not a HIGPA member, the conduct code lacked any provisions pertaining to contracting processes, product selection, administrative fees, sole-source contracting, commitment-level requirements, contract duration, and private labeling. The code of conduct provisions for the GPOs in our study were not uniform in how they addressed business practices. For example:
- Four GPOs, including one of the two largest, had unqualified provisions for capping administrative fees at the 3-percent threshold contained in federal regulations established by HHS. The other of the two largest GPOs had a provision for capping administrative fees at 3 percent only for clinical preference items and only for contracts awarded after the establishment of the GPO's conduct code.
- Four conduct codes had provisions limiting the use of sole-source contracts for clinical preference items specifically. Another conduct code limited the use of sole-sourcing to contracts meeting certain criteria, such as approval for use by a 75-percent majority of the GPO's contracting committee. The language of one of the remaining GPOs' conduct codes was vague with respect to sole-sourcing, stating that the GPO will provide customers with choices for each product or service, without explicitly mentioning the use of sole-source contracts.
- In their conduct codes, two GPOs had provisions prohibiting the bundling of unrelated products, two GPOs prohibited and two limited bundling for clinical preference items, and three GPOs prohibited the practice of bundling products from different manufacturers.
One GPO’s conduct code stated that the GPO would not obligate its customers to purchase bundles of unrelated products, allowing the possibility for bundles to be available to customers on a voluntary basis. Exceptions and qualified language in the provisions have the potential to weaken the codes of conduct. Table 2 shows examples of exceptions and qualified language that can limit the potential of the individual GPOs’ conduct codes to effect change. Given the individual GPOs’ relatively recent adoption of codes of conduct—since August 2002—sufficient time has not yet elapsed for GPOs to develop a history of compliance with certain conduct code provisions. Two of the manufacturers and two distributors we interviewed reported noticing improvements, stating that some GPOs are no longer using certain contracting strategies. This observation is consistent with the suggestion that the use of bundling may be declining. One manufacturer that had difficulty in obtaining a contract with a large national GPO prior to 2002 said it has since been awarded a contract for a clinical preference item. The manufacturer also noted that, since September 2002, it has been awarded several new contracts. However, two other manufacturers told us they are skeptical that improvements have been made with regard to business practices. Notwithstanding such anecdotal evidence, because of the recency of GPOs’ actions taken, the ability to assess the impact of the conduct codes systematically remains limited. One year is not sufficient time for the codes of conduct to produce measurable trends that could demonstrate an impact on the industry. For more information regarding this statement, please contact Marjorie Kanof at (202) 512-7101. Hannah Fein, Mary Giffin, Kelly Klemstine, Emily Rowe, and Merrile Sing made key contributions to this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Hospitals have increasingly relied on purchasing intermediaries--GPOs--to keep the cost of medical-surgical products in check. By pooling purchases for their hospital customers, GPOs'in awarding contracts to medical-surgical product manufacturers--may negotiate lower prices for these products. Some manufacturers contend that GPOs are slow to select products to place on contract and establish high administrative fees that make it difficult for some firms to obtain a GPO contract. The manufacturers also express concern that certain contracting strategies to obtain better prices have the potential to limit competition when practiced by GPOs with a large share of the market. GAO was asked to examine certain GPO business practices. It focused on seven large GPOs serving hospitals nationwide regarding (1) their processes to select manufacturers' products for their hospital customers and the level of administrative fees they receive from manufacturers, (2) their use of contracting strategies to obtain favorable prices from manufacturers, and (3) recent initiatives taken to respond to concerns about GPO business practices. The seven GPOs we studied varied in how they carried out their contracting processes. 
The GPOs were able to expedite their processes for selecting products to place on contract, particularly when they considered these products to be innovative. The GPOs also reported receiving from manufacturers administrative fees in 2002 that were generally consistent with the 3-percent-of-purchase-price threshold in regulations established by the Department of Health and Human Services. However, for certain products, they reported receiving higher fees--in one case, nearly 18 percent. The seven GPOs also varied in the extent to which they used certain contracting strategies as leverage to obtain better prices. For example, some GPOs, including one of the two largest, used sole-source contracting (giving one of several manufacturers of comparable products an exclusive right to sell a particular product through the GPO) extensively, whereas others used it on a more limited basis. Most GPOs used some form of product bundling (linking price discounts to purchases of a specified group of products), and the two largest GPOs used bundling for a notable portion of their business. In response to congressional concerns raised in 2002 about GPOs' potentially anticompetitive business practices, the Health Industry Group Purchasing Association (HIGPA) and GPOs individually established codes of conduct. The conduct codes are not uniform in how they address GPO business practices. In addition, some GPOs' conduct codes include exceptions and qualified language that could limit their potential to effect change. |
Between 1972 and 1990, the presence of foreign banks in the United States increased rapidly—from 105 offices and subsidiary banks with $95 billion in assets in 1972 (measured in 1995 dollars) to 737 offices and subsidiary banks with $933 billion in assets (measured in 1995 dollars) at the end of 1990. Since then, their number has fallen and growth in the volume of their assets has slowed. At the end of 1995, there were 656 foreign bank offices and foreign-owned subsidiary banks with $974 billion in assets in the United States. Including an additional 247 representative offices, 371 foreign banks had a presence in the United States. Branches and agencies are the most common organizational forms—accounting for about 78 percent of foreign bank assets at the end of 1995. (See table 1.) Foreign-owned U.S. bank subsidiaries held over 21 percent of foreign bank assets. Commercial lending companies and Edge Act/Agreement Corporations accounted for less than 1 percent of foreign bank assets, and representative offices held no banking assets. U.S. branches and agencies are legal and operational extensions of their parent foreign banks and as such have no capital of their own. They may conduct a wide range of banking activities, including lending, money market services, trade financing, and other activities related to the service of foreign and U.S. clients. They can also access the U.S. payments system through the Federal Reserve and obtain other Federal Reserve services. Branches and agencies of foreign banks may be either state-licensed, and therefore regulated and supervised by the respective state banking department, or federally licensed, and therefore regulated and supervised by the Office of the Comptroller of the Currency (OCC). As of December 1995, 473 branches and agencies were state-licensed and 72 were federally licensed. In addition, 41 of the branches were insured by the Federal Deposit Insurance Corporation (FDIC) and thus subject to additional supervision by FDIC. U.S. bank subsidiaries of foreign banks are U.S.-chartered banks that have all the powers of U.S.-owned banks. They are insured by FDIC and are subject to all the rules and regulations governing U.S.-owned banks. Their assets and liabilities are separate from those of their parent foreign banks, and they must maintain their own capital in accordance with U.S. laws and regulations. They may be either state or federally chartered. Branches and agencies of foreign banks were first subject to federal regulation with passage of the International Banking Act of 1978 (IBA). Adopting a policy of national treatment, IBA sought to allow foreign banks with branches and agencies to operate in the United States on an equal basis with U.S. banking organizations. Foreign banks were neither to receive significant advantages nor to incur significant disadvantages. The act also gave the Federal Reserve responsibility for overseeing the combined U.S. operations of foreign banks. Although IBA substantially equalized the treatment of the U.S. operations of foreign and U.S. banks, it did not require prior federal review of foreign bank entry into the U.S. market, nor did it permit a federal role in the termination of a state-licensed branch or agency. Cases of fraud and other criminal activity by some foreign banks in the 1980s and early 1990s convinced the Federal Reserve and Congress that both state and federal supervisors needed to increase the attention they paid to foreign banks operating in the United States.
In particular, Federal Reserve officials believed that prior federal review of foreign bank entry and expansion in the U.S. market was necessary. They also believed that a federal role in terminating a state-licensed branch or agency for unsafe and unsound banking practices was desirable. In December 1991, Congress passed FBSEA. This act, which amended IBA, increased federal supervision of all foreign bank operations, giving the Federal Reserve authority to examine all foreign bank offices in the United States. FBSEA also mandated uniform standards for foreign banks establishing operations in the United States. Finally, it prohibited U.S. branches of foreign banks from obtaining deposit insurance and gave federal supervisors greater enforcement authority over the U.S. operations of foreign banks. FBSEA also directed the Federal Reserve to levy examination fees on foreign banks with a U.S. branch, agency, or representative office. However, the Riegle-Neal Interstate Banking and Branching Efficiency Act of 1994 imposed a 3-year moratorium on this provision. FBSEA increased the Federal Reserve’s supervisory and regulatory power over foreign banks by requiring Federal Reserve approval for all foreign banks seeking to establish U.S. offices, whether licensed by state or federal authorities. This requirement was designed to give the Federal Reserve, as the agency responsible for overall supervision of foreign banks in the United States, a role in determining whether such institutions may establish a U.S. banking presence. FBSEA established uniform standards for foreign banks entering the United States, requiring them to meet financial, managerial, and operational standards similar to those of U.S. banking organizations. The act made the Federal Reserve responsible for ensuring that these standards are met. Under FBSEA, foreign banks must meet two standards in order to establish a branch or an agency, or to acquire ownership or control of a commercial lending company. First, the Federal Reserve must determine that the foreign bank applicant (and any parent foreign bank) engages directly in the business of banking outside the United States and is subject to comprehensive supervision or regulation on a consolidated basis by its home country supervisor. Second, the foreign bank must furnish to the Federal Reserve the information that the Federal Reserve requires in order to assess the application adequately. In addition to the two mandatory standards, the Federal Reserve also considers other factors. Among others, these include (1) whether the applicant’s home country authorities have consented to the establishment of the proposed office, (2) the applicant’s financial and managerial resources, including its capacity to engage in international banking, and (3) whether the applicant has provided adequate assurances that it will provide access to information sufficient to allow the Federal Reserve to determine its compliance with applicable U.S. laws. Before FBSEA, the states were responsible for licensing representative offices and, at the federal level, applicants only had to register their office with the U.S. Department of the Treasury. FBSEA gave the Federal Reserve authority to approve establishment of these offices as well. However, it did not require the Federal Reserve to apply the standards mandated to establish other banking offices to its decisions regarding applications for representative offices. 
The Federal Reserve is to take these standards into account in evaluating a foreign bank’s application to establish a representative office, but it can approve applications where the parent foreign bank does not meet all of the standards required to establish a branch or agency. Before FBSEA, foreign banks wishing to establish a branch or agency in the United States were required to obtain approval from the appropriate banking regulator—OCC—for federal branches and agencies, or the state regulator for state branches and agencies. Since FBSEA, a foreign bank must also receive approval from the Federal Reserve. To receive approval from the Federal Reserve, a foreign bank must submit an application to the reserve bank located in the district where it plans to establish an office or to its already designated “responsible” reserve bank. A copy of its OCC or state application and any additional information necessary for the Federal Reserve to determine that the bank meets the standards set out in FBSEA are to be included in the application. The application is not to be accepted (i.e., deemed informationally complete) until these criteria are met. Once the application is accepted for processing, it is reviewed by staff and submitted to the Board for action. Before March 1993, applications were reviewed solely by the reserve bank before they were accepted. If an application lacked information, the reserve bank requested the applicant bank to provide the information. After the reserve bank determined that it had all necessary information to process the application, it was accepted and forwarded to the Board for review and disposition. At this point the Board could request additional information. This process often resulted in delays and multiple requests for additional information. In March 1993, the Federal Reserve issued guidelines changing its procedures for processing applications to establish U.S. offices of foreign banks. The changes were intended to expedite processing and reduce the burden on applicants of responding to multiple requests for additional information. The guidelines require the reserve bank to send copies of the application to the Board within one business day of receiving an application. Both the reserve bank and Board staffs are then to simultaneously review the application to ensure that the information is complete. If additional information is needed, coordinated requests are to be made to the applicant bank before the application is accepted. The guidelines also established time limits for Federal Reserve staff to review applications and ask for additional information. The reserve bank and Board staffs are to review an application and request additional information from the applicant bank within 15 business days of receipt of the application by the reserve bank. The applicant bank then has 20 business days to respond to these requests. If the applicant bank does not respond within that time, the application would normally be returned due to insufficient information. If the applicant responds within the time limit, the reserve bank and Board staffs have an additional 10 business days either to accept the application as complete or to request additional information. If additional information is requested, the applicant bank similarly has 10 business days to respond. The Federal Reserve encourages all foreign bank applicants to meet with reserve bank and/or Board staffs before filing applications. 
These meetings are intended to identify relevant issues, apprise applicants of required information, and enable Federal Reserve staffs to obtain necessary information at an early stage of the process. Once the reserve bank and Board staffs determine that the application is complete and it is accepted, the Federal Reserve has an internal guideline of 60 days to analyze it, have background checks completed, and make inquiries to home country authorities. After these tasks are completed, the application is to be presented to the Board for action. If the application cannot be presented for Board action within the 60-day period, the applicant is to be informed in writing of the reasons. As of January 29, 1996, the Federal Reserve had received 96 applications from foreign banks seeking to establish offices or bank subsidiaries under FBSEA. The Federal Reserve had approved 45 applications; 23 had been returned by the Federal Reserve or withdrawn by the applicant banks; and 28 were under review. Of the 45 applications approved by the Federal Reserve, 6 were for agencies, 15 for branches, 18 for representative offices, and 8 for bank acquisitions. The approved applications represented banks from 23 countries. Taiwan accounted for the most—7 of the 45 applications. In its decisions approving the applications for branches and agencies and subsidiary banks, the Federal Reserve found that the foreign banks had met the standards required under FBSEA and its implementing regulations. The Federal Reserve's decisions indicated that the applicants had provided the necessary information, had met all conditions concerning their intended operation, and were in compliance with the requirements for approval. The Federal Reserve's policy, as required by FBSEA, is to use the standards that apply to branches and agencies as guidance when considering an application to establish a representative office. Federal Reserve regulations do not require these standards to be met in every case because representative offices differ from branches and agencies in that representative offices cannot engage in a banking business and cannot take deposits or make loans. Federal Reserve staff told us that, in general, representative office applicants have not been required to meet the supervision standards required for branches and agencies. A review of the orders indicated that the Federal Reserve examined the home country supervision of the applicant bank in every representative office case, but a determination that the applicant bank or its parent foreign bank was subject to comprehensive consolidated supervision was not always made. Similarly, the Federal Reserve has not required foreign bank applicants wishing to establish representative offices to meet the same financial standards, including the standard related to capital, that are required for the establishment of branches and agencies. In our review of 17 orders approving representative offices, we found that in 13 cases the orders did not indicate whether the capital standards were being met by the parent foreign bank. Most of the 23 applications that had not been approved by the Federal Reserve and were no longer under review had been withdrawn by the applicant banks for various reasons. (See table 2.) Of the 28 applications under review as of January 29, 1996, 3 were for agencies, 5 were for bank acquisitions, 8 were for branches, and 12 were requests to establish representative offices.
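Before turning to actual processing times, the review clock established by the March 1993 guidelines described above can be sketched as simple business-day arithmetic, as the following Python fragment illustrates. The filing date shown is hypothetical, and the sketch counts only weekends—ignoring federal holidays—so it simplifies how the Federal Reserve would actually compute its 15-, 20-, and 10-business-day deadlines.

    from datetime import date, timedelta

    def add_business_days(start, days):
        # Advance `days` business days from `start`, counting Monday-Friday only
        # (federal holidays are ignored in this simplified sketch).
        current = start
        while days > 0:
            current += timedelta(days=1)
            if current.weekday() < 5:      # Monday = 0 ... Friday = 4
                days -= 1
        return current

    filing = date(1996, 1, 29)                               # hypothetical filing date
    staff_info_request_due = add_business_days(filing, 15)   # first information request
    applicant_response_due = add_business_days(staff_info_request_due, 20)
    follow_up_request_due = add_business_days(applicant_response_due, 10)
    print(staff_info_request_due, applicant_response_due, follow_up_request_due)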
Federal Reserve staff told us that they had not received any applications to establish a commercial lending company since FBSEA was passed. Processing foreign bank applications took more than a year on average, and this length of time concerned both the Federal Reserve and applicant foreign banks. Federal Reserve staff told us that the length of time it took to process applications could be attributed to the need for additional time to complete background checks and to review issues related to comprehensive supervision, bank operations, and internal controls. They also cited difficulties in obtaining translated information from some applicant banks, a lack of understanding by some applicants about the level of detail required to review comprehensive consolidated supervision, and some applicants' unfamiliarity with FBSEA requirements as causes of delays. After the Federal Reserve issued its March 1993 guidelines, there was a decrease in the amount of time taken to process branch, agency, and representative office applications. (See fig. 1.) On average, the total time it took to process such applications (from date of initial filing to disposition) dropped from 574 days to 293 days. Of this, the average time between the date that applications were initially filed and the date they were accepted decreased from 170 days to 130 days, and the average time between acceptance and approval decreased from 404 days to 163 days. Federal Reserve staff attributed this decline to a number of factors, including commitment to meet the guidelines, experience with the process, and improvements in the name check process. FBSEA directed the Federal Reserve to coordinate the supervision of foreign banking organizations with federal and state bank supervisors to ensure an efficient and uniform approach in overseeing the operations of foreign banks in the United States. The act gave the Federal Reserve the responsibility for ensuring that branches and agencies of foreign banks are examined every 12 months and gave it the power to examine representative offices. It also broadened the enforcement powers of the Federal Reserve and OCC. Specifically, the act permitted the Federal Reserve to terminate the activities of a state-licensed branch, agency, commercial lending company, or representative office for violations of law or for unsafe or unsound banking practices; the Federal Reserve may recommend to OCC similar action for federally licensed offices. The act also modified and broadened the Federal Reserve's and OCC's authorities to assess, on specific grounds, civil money penalties of up to $25,000 for each day that a violation continues against any foreign bank, any office or subsidiary of a foreign bank, or certain individuals. To meet the requirements set out in FBSEA, Federal Reserve staff told us that they develop, in cooperation with OCC, FDIC, and state bank supervisors, an annual examination plan to supervise the U.S. operations of foreign banking organizations. This plan includes branches, agencies, commercial lending companies, Edge Act/Agreement Corporations, and significant nonbank subsidiaries. They said the supervisors discuss the focus of the year's examinations and when they will be conducted. Their goal is to ensure that each branch and agency is examined every 12 months without undue burden imposed on the entity and that all supervisory issues are addressed in the examination process.
To meet this goal, the Federal Reserve may conduct an independent examination, rely on the other agencies to conduct the examination, or participate in a joint examination. Federal Reserve staff told us that, in order to form a baseline understanding of foreign bank operations, in 1992, they examined either independently or jointly all foreign bank branches and agencies in the United States. In 1993, the Federal Reserve, OCC, FDIC, and state bank supervisors developed a joint examination manual for branches and agencies. The purpose of the manual is to ensure to the extent possible that each regulatory agency examines branches and agencies of foreign banks in a consistent manner. Federal Reserve staff told us that in the future they intend to examine fewer foreign branches and agencies and rely more on the examinations conducted by OCC and the states. Table 3 shows the number of independent and joint examinations conducted by each agency for 1993 through 1995. Federal Reserve examination data indicated that federal and state banking supervisors have substantially been meeting the requirement that all branches and agencies be examined annually. For 1993, 1994, and 1995, we found that, on average, 97 percent of branches and agencies had been examined at least annually. In 1995, 542 of the 549 branches and agencies operating in the United States at the beginning of the year were examined. Federal Reserve staff reported that enhanced monitoring tools have been developed to quickly identify cases where the mandate appears to have been missed. FBSEA did not establish a required frequency for examinations of representative offices. It is currently Federal Reserve policy to examine all representative offices at least once every 24 months. Examinations of representative offices differ from those of foreign branches and agencies in that they are intended primarily to verify that the type of business being conducted by an office is limited to that customarily viewed as a representative office function and to ensure that the office is operating in conformance with sound operating policies. The Federal Reserve conducted a survey in 1992 to determine the number of representative offices operating in the United States. Federal Reserve staff told us that between 1993 and 1994 examiners visited all representative offices in the United States to verify that they were engaging only in activities appropriate for representative offices. From our review of Federal Reserve data, we found that for 1993 through 1995, 93 percent of representative offices, net of closures and new entrants, were examined at least once. The examination rates were 87 percent, 54 percent, and 66 percent for 1993, 1994, and 1995, respectively. Examinations by federal and state supervisors are intended to determine the safety and soundness of foreign branches and agencies. They result in a composite examination rating for the entity. These ratings range from 1 (fundamentally sound) to 5 (unsatisfactory). As table 4 shows, of the foreign branches and agencies examined during 1995, 88 percent received a rating of 1 or 2 at year-end, indicating that their operations were at least satisfactory and required only normal supervisory attention. Nine percent were rated 3 (fair). Only 3 percent received a rating of 4 or 5, meaning that they were considered to have significant weaknesses or were identified as having so many severe weaknesses that they required urgent attention by their head offices. 
These results are similar to those in 1993 and 1994, in which 79 percent and 85 percent, respectively, were found to have sound operations. Federal and state banking supervisors may issue enforcement actions against foreign banks as well as their U.S. branches and agencies in cases where a branch, agency, or other U.S. office of the parent bank is determined to be operating in an unsafe or unsound manner in violation of applicable laws, regulations, or written conditions imposed during the applications process. These actions may be either formal or informal, depending upon the severity of the problems and the bank's willingness to correct them. Although the Federal Reserve had authority to initiate enforcement actions against foreign banks and their U.S. branches and agencies under the IBA and the Federal Deposit Insurance Act, FBSEA enhanced its enforcement powers. Specifically, it gave the Federal Reserve the authority to order a foreign bank with a state-licensed branch, agency, commercial lending company, or representative office to terminate its activities in the United States and the authority to recommend such action to OCC for federally licensed branches and agencies. Federal Reserve staff stated that the Federal Reserve had the authority to levy civil money penalties for violation of IBA and for failure to make certain reports, and that FBSEA modified and broadened this authority for both the Federal Reserve and OCC. FDIC can issue formal enforcement actions against foreign banks by virtue of its authority under the Federal Deposit Insurance Act. Between 1993 and 1995, federal banking supervisors issued 40 formal enforcement actions against foreign banks operating in the United States. In the most serious case, the Federal Reserve, in conjunction with FDIC, the New York State Banking Department, and several other state bank supervisors, used its termination authority to order Daiwa Bank to cease its U.S. banking operations. During this period, the Federal Reserve also issued three civil money penalties for failures to file regulatory reports and one for inadequate Bank Secrecy Act policies and procedures. Neither OCC nor FDIC issued any civil money penalties during this time. The remaining 35 formal enforcement actions issued by the Federal Reserve, OCC, and FDIC included 16 cease-and-desist orders. In practice, OCC exercises primary enforcement authority over federal branches and agencies, and the Federal Reserve takes the lead in issuing formal enforcement actions against state-licensed branches and agencies. In addition to formal enforcement actions, each of the federal and state banking supervisors may take informal enforcement actions, such as memorandums of understanding and commitment letters, in which an institution agrees to remedy specific areas of supervisory concern. These actions are taken when supervisory concerns are identified that, while not overly serious, warrant some type of remedial action. In 1995, the Federal Reserve, in conjunction with state bank supervisors, issued 50 informal enforcement actions against foreign banks, OCC issued 9, and FDIC issued 5.
Although FBSEA contained provisions restricting some activities of foreign banks and set additional reporting and approval requirements, as agreed with the subcommittee, we did not do independent work to determine that these provisions have been followed. We focused our work on branches and agencies of foreign banks because this form of organization accounts for the largest concentration of foreign bank offices and assets in the United States. We did limited work on representative offices because their activities are limited and they hold no banking assets in the United States. FBSEA also applies to commercial lending companies. However, there are only three of these companies in the United States and there have been no applications for this form of entry since FBSEA was implemented. Finally, since subsidiary banks are U.S.-chartered, they are governed by all of the laws and regulations applicable to U.S. banks and are supervised and examined in the same way as U.S. banks. Accordingly, FBSEA should have had minimal effect on the regulation and supervision of these banks. To describe the Federal Reserve’s applications process for foreign banks, we reviewed its implementing regulations and other banking correspondence and regulations. We also interviewed staff in the Federal Reserve’s Division of Banking Supervision and Regulation and its Legal Division and officials and staff at the Federal Reserve Bank of New York, which is where most foreign banks operating in the United States are located. They gave us their views on the applications process and how it corresponds to the requirements set forth in FBSEA. We also reviewed all of the Federal Reserve’s decisions approving foreign bank applications since 1992 to determine whether it addressed the statutory and regulatory requirements of FBSEA. In addition, we compared the length of time it took to process applications to the guidelines set forth by the Federal Reserve to determine whether the Federal Reserve was in compliance with its own policies. We did not attempt to subjectively evaluate the Federal Reserve’s decisions on foreign bank applications. To do so, we would have had to analyze and judge the merits of the facts presented by the foreign bank applicants and the reasoning in each application. To describe the examination process and the results of examinations, we reviewed examination data for foreign branches, agencies, and representative offices provided by the Federal Reserve for 1993, 1994, and 1995. Because the Federal Reserve has overall responsibility for ensuring that foreign branches, agencies, and representative offices are examined in a timely manner, it maintains examination data for all such offices operating in the United States. The Federal Reserve did not maintain such data in a summary format prior to 1993. We also interviewed staff and officials from the Federal Reserve, OCC, and FDIC, in both Washington, D.C., and New York, and officials from the New York State Banking Department to determine how the Federal Reserve coordinates with other bank supervisors. To determine the extent to which federal supervisors have used enforcement actions against foreign banks operating in the United States, we collected data on enforcement actions from the Federal Reserve, OCC, and FDIC. Our work was done in Washington, D.C., and New York, NY, between January and May 1996 in accordance with generally accepted government auditing standards. 
We received both written and oral comments on a draft of this report from the Federal Reserve. In its letter, the Federal Reserve stated that the information provided in the report accurately describes the policies and processes with respect to applications and examinations of foreign banks. The oral comments were technical in nature and have been incorporated where appropriate. We are sending copies of this report to the Chairmen and Ranking Minority Members of the House Committee on Banking and Financial Services and the Senate Committee on Banking, Housing, and Urban Affairs; the Chairman of the Federal Reserve Board; the Chairman of the Federal Deposit Insurance Corporation; the Comptroller of the Currency; and other interested parties. We will also make copies available to others on request. Major contributors to this report are listed in appendix II. If you have any questions, please call me at (202) 512-8678. Rachel DeMarcus, Assistant General Counsel. | Pursuant to a congressional request, GAO provided information on the Federal Reserve's implementation of the Foreign Bank Supervision Enhancement Act, focusing on: (1) the Federal Reserve's examination process and its process for approving foreign bank applications for U.S. entry and expansion; and (2) enforcement actions that the Federal Reserve has taken since 1993. GAO found that: (1) the act established minimum standards for foreign bank entry and expansion into the United States, strengthened federal bank supervision and regulation, and required that the Federal Reserve approve foreign banks' applications for acquiring bank subsidiaries; (2) although the Federal Reserve approved 45 applications after determining that the applicant banks met the act's standards, Federal Reserve staff believed that the application process was too lengthy; (3) new guidelines, established in 1993, reduced application processing times; (4) between 1993 and 1995, the Federal Reserve met its mandate to coordinate with the other federal and state bank supervisors to examine foreign branches and agencies once every 12 months; (5) the Federal Reserve also examined over half of the representative offices of foreign banks operating in the United States even though it did not establish a time frame for such examinations; (6) the foreign banks examined were generally in satisfactory condition and only 3 percent of the foreign branches and agencies received low safety and soundness ratings in 1995; and (7) of the 40 formal enforcement actions the Federal Reserve issued against U.S.
offices of foreign banks between 1993 and 1995, 6 required voluntary terminations of deposit insurance, 4 required the use of civil money penalty authority, and one foreign bank was ordered to terminate its U.S. banking operations. |
According to TSA, the majority of air carrier passengers present a low risk to aviation security, but until recently, with the exception of passengers matched to watchlists, TSA had used the same security screening procedures for all passengers without regard to the risk they posed. TSA, through the Secure Flight system, utilizes watchlists derived from the Terrorist Screening Database—the U.S. government's consolidated watchlist of known or suspected terrorists—and other sources to identify potentially high-risk passengers. For example, the No Fly List contains information on individuals who are prohibited from boarding an aircraft, and the Selectee List contains information on individuals who must undergo additional security screening before being permitted to board an aircraft. As part of TSA Pre™, TSA conducts terrorism-related checks using lists derived from the database as part of the security threat assessment process. In 2011, TSA began to explore the benefits of using risk-based, intelligence-driven security screening procedures that would allow TSA to learn more about passengers prior to travel because, according to TSA, it can better assess passenger risk when it knows more about passengers. In addition, TSA worked to develop procedures to ensure that screening resources are focused on passengers determined to be high risk and passengers about whom TSA has less information, while expediting the screening of passengers TSA has assessed as lower risk based on the information it has on such passengers. Using this risk-based approach, TSA began to identify and define lower-risk passenger populations, or "known travelers"—that is, those who have volunteered personal information to TSA so that TSA can confirm these known travelers are lower risk. When TSA began offering expedited airport screening in the summer of 2011, TSA initially provided such screening in standard lanes to passengers aged 12 and younger, and it subsequently extended expedited screening to certain flight crew members and then to passengers aged 75 and older. In October 2011, TSA began to expand the concept of expedited airport screening to more of the flying public by piloting the TSA Pre™ program. This pilot program allowed certain frequent fliers of two air carriers to experience expedited screening at four airports. These frequent fliers became eligible for screening in dedicated expedited screening lanes, which became known as TSA Pre™ lanes, because they had opted into the TSA Pre™ program through the air carrier with which they had attained frequent flier status. TSA also allowed certain members of U.S. Customs and Border Protection's (CBP) Trusted Traveler programs to experience expedited screening as part of the TSA Pre™ pilot. TSA provided expedited screening in dedicated screening lanes to these frequent fliers and CBP's Trusted Travelers during the TSA Pre™ pilot program because TSA had information about these passengers, and TSA used this information to determine that passengers in these groups were lower risk. When traveling on one of the air carriers and departing from one of the airports participating in the pilot, these passengers were eligible to be screened in dedicated TSA Pre™ screening lanes, where they were not required to remove their shoes; divest light outerwear, jackets, and belts; or remove liquids, gels, and laptops from carry-on baggage.
After the pilot concluded, in February 2012, and transitioned into a formal program, TSA began to add additional air carriers and passenger groups to the TSA PreTM program. For an air carrier to participate in TSA PreTM, the air carrier must have the technological capability to send the necessary passenger information to Secure Flight for vetting against federal government watchlists, and print the low-risk designation in the encrypted boarding pass bar code and the TSA PreTM designation on the boarding pass. As each air carrier joined TSA PreTM, the carrier’s frequent fliers became eligible to opt in for expedited screening only when traveling on that air carrier. Since October 2011, TSA further expanded the known traveler populations eligible for expedited screening. After TSA piloted TSA PreTM with certain passengers who are frequent fliers and members of CBP’s Trusted Traveler programs, TSA established separate TSA PreTM lists for additional low-risk passenger populations, including members of the U.S. armed forces, Congressional Medal of Honor Society Members, members of the Homeland Security Advisory Council, and Members of Congress, among others. In addition to TSA PreTM lists sponsored by other agencies or entities, TSA created its own TSA PreTM list composed of individuals who apply to be preapproved as low-risk travelers through the TSA PreTM Application Program, an initiative launched in December 2013. To apply, individuals must visit an enrollment center where they provide biographic information (i.e., name, date of birth, and address), valid identity and citizenship documentation, and fingerprints to undergo a TSA Security Threat Assessment. TSA leveraged existing federal capabilities to both enroll and conduct threat assessments for program applicants using enrollment centers previously established for the Transportation Worker Identification Credential Program, and existing transportation vetting systems to conduct applicant threat assessments. Applicants must be U.S. citizens, U.S. nationals or lawful permanent residents, and cannot have been convicted of certain crimes. As of April 2014, there were about 5.6 million individuals who, through TSA PreTM lists, were eligible for expedited screening. Figure 1 shows the populations for each TSA PreTM list. In addition to passengers who are included on one of the TSA PreTM lists, in October 2013, TSA began implementing the TSA PreTM Risk Assessment program, which evaluates passenger risk using data available to TSA to determine a certain likelihood that passengers will be designated as eligible to receive expedited screening through TSA PreTM. According to TSA officials, in February 2013, TSA established a policy to notify passengers of their eligibility for expedited screening using the air carrier reservation systems in order to improve passenger movement through airports for TSA PreTM eligible passengers. For every passenger, TSA uses the Secure Flight system to automatically match passenger information collected by the air carriers against the various watchlists (e.g., the No Fly and Selectee Lists) up to 72 hours before passengers’ scheduled air travel. 
After checking for matches to the watchlists, Secure Flight directs the air carrier to mark a passenger’s boarding pass for enhanced screening or expedited screening, or to identify a passenger as being prohibited from boarding an aircraft, or to identify a passenger for standard screening. TSA uses a similar process to identify passengers who are eligible for expedited screening at the airport by using the same information provided by the air carriers to the Secure Flight system to match against lists of individuals who have been designated as low risk. TSA informs passengers of this eligibility by directing the air carriers to mark the boarding pass with the TSA PreTM designation. Figure 2 shows examples of boarding passes with the TSA PreTM designation. At airports, the mechanism TSA uses to screen passengers who have the TSA PreTM designation on their boarding pass depends on the configuration of the airport. At some airports, TSA has dedicated TSA PreTM expedited screening lanes where passengers with the TSA PreTM designation on their boarding passes are not required to divest shoes, light outerwear, laptops, liquids, and gels. Because TSA PreTM expedited screening is voluntary, a passenger designated as eligible for TSA PreTM expedited screening may choose not to use a TSA PreTM dedicated screening lane. Also, some airports do not have dedicated TSA PreTM expedited screening lanes, either because of space restrictions that preclude the airport from installing a dedicated screening lane in a security checkpoint or because the number of passengers with TSA PreTM boarding pass designations is low and therefore does not warrant a separate dedicated lane. According to TSA officials, at these airports, passengers with a TSA PreTM boarding pass can still experience expedited screening of “their persons” (i.e., passengers are not required to divest shoes, light jackets, and belts) and use a walk-through metal detector in the standard screening lane; however, they must divest their liquids, gels, and laptops from baggage because the screening process used in the standard screening lanes should result in the transportation security officer (TSO) identifying these items and searching the baggage—slowing throughput in the standard screening lane—if these items are not removed. For airports at which there are dedicated screening lanes, security checkpoints may not consistently have a volume of TSA PreTM passengers that warrants operating such lanes because the TSO working in the lanes would be underutilized. Also, an underutilized expedited screening lane can result in longer wait times in standard screening lanes because airports that offer expedited screening in dedicated screening lanes generally do so at the expense of standard screening lane availability, according to airport and TSA officials. To ensure the utility of expedited screening lanes, TSA implemented the Managed Inclusion process for non-TSA PreTM passengers in November 2012. The Managed Inclusion process involves using real-time threat assessment methods, including randomization procedures and behavior detection officers (BDO), as well as either canine teams or explosives trace detection (ETD) devices, to screen non-TSA PreTM passengers in lanes that are otherwise dedicated to TSA PreTM passengers. TSA operates Managed Inclusion at the discretion of the airport’s federal security director (FSD); the process is available at airports that have dedicated TSA PreTM expedited screening lanes and either canine teams, ETD devices, or both.
The Managed Inclusion process will be more fully described later in this report. As of April 2014, TSA officials stated that they provided expedited screening at essentially all of the approximately 450 airports at which TSA performs, or oversees the performance of, security screening, including 118 airports where TSA offers expedited screening in dedicated TSA PreTM screening lanes. The 118 airports where expedited screening is offered in dedicated TSA PreTM screening lanes represent about 95 percent of total air carrier enplanements, based on Federal Aviation Administration calendar year 2012 data. Figure 3 shows the locations of the airports with dedicated TSA PreTM screening lanes. Also, appendix I provides a list of the airports included on the map. TSA’s implementation of the TSA PreTM Risk Assessments, expansion of Managed Inclusion, and increase in the number of TSA PreTM airports accounted for a significant increase in the overall number of passengers designated as eligible for expedited screening, as well as in the number of passengers who actually underwent such screening. Specifically, according to TSA data and as shown in figure 4, the number of TSA PreTM boarding passes issued each month grew slowly from October 2011, when TSA PreTM was launched, through September 2013, increasing from about 673,000 to about 3 million. In October 2013, when TSA began the TSA PreTM Risk Assessment process, TSA issued almost 9 million TSA PreTM boarding passes. Furthermore, since October 2011, air carrier participation has expanded from two air carriers to nine as of April 2014. Figure 4 also shows the number of passengers receiving TSA PreTM expedited screening, the dates when air carriers began participating in the TSA PreTM program, and the dates when various programs intended to expand the use of expedited screening were implemented, as well as the difference between the number of TSA PreTM boarding passes issued and the number of passengers who receive expedited screening at the airport. According to TSA officials, this difference occurs because TSA PreTM is a voluntary program and not all passengers who are eligible necessarily use expedited screening. For example, a passenger may be traveling with a group in which not all passengers are eligible for expedited screening, so the passenger may choose to forgo expedited screening. Prior to October 2013, TSA designated passengers as eligible for expedited screening because they were members of one of the populations TSA had designated as low risk. These populations included individuals who opted to participate in the TSA PreTM lists shown in figure 1, as well as almost 1.5 million frequent fliers who opted in to participate in the TSA PreTM program, and passengers aged 12 and under and 75 and older. In October 2013, TSA began to provide expedited screening to a much larger population of travelers using the TSA PreTM Risk Assessment program and Managed Inclusion. After TSA implemented the TSA PreTM Risk Assessment program, any passenger flying on a participating air carrier could be designated as low risk and provided a TSA PreTM boarding pass designation. At the same time that the number of TSA PreTM boarding pass designations increased, the expedited screening throughput in TSA PreTM lanes also increased. For example, according to TSA data and as shown in figure 4, the number of passengers receiving expedited screening in September 2013 was about 2 million.
In October 2013, about 8 million passengers received expedited screening, about a 300 percent increase. According to TSA officials, the increased throughput in TSA PreTM lanes was due to the implementation of the TSA PreTM Risk Assessments and the expansion at that time from 40 to 100 airports with dedicated expedited screening lanes. In addition, TSA increased the number of passengers with the opportunity to experience expedited screening in October and November 2013 by expanding Managed Inclusion, in which TSA uses real-time threat assessment methods to screen standard passengers in lanes that are otherwise dedicated to TSA PreTM passengers. Whereas TSA first implemented Managed Inclusion in November 2012 using canine teams to screen passengers for explosives as part of a real-time threat assessment, TSA determined it could operate Managed Inclusion at more airports by screening for explosives using ETD devices. TSA piloted Managed Inclusion with ETD devices in Boston and Seattle in July 2013, and implemented the program nationwide in October and November 2013 when it increased the number of airports operating Managed Inclusion with ETD devices. Figure 5 shows a snapshot from May 11, 2014, through May 18, 2014, of the percentage of weekly passengers receiving non-expedited screening and expedited screening, and further shows whether known crew members experienced expedited screening, and whether expedited screening occurred in TSA Pre™ lanes (for passengers designated as known travelers or through the TSA Pre™ Risk Assessment program, or passengers chosen for expedited screening using Managed Inclusion) or in standard lanes. As noted in figure 5, out of the 41 percent of passengers nationwide receiving expedited screening during the week ending May 18, 2014, nearly 40 percent were issued TSA Pre™ boarding passes but were provided expedited screening in a standard screening lane, meaning that they were provided expedited screening of their persons and did not have to remove their shoes, belts, and light outerwear, but they had to divest their liquids, gels, and laptops. TSA provides expedited screening to TSA PreTM-eligible passengers in standard lanes when airports do not have dedicated TSA PreTM screening lanes because of airport space constraints and limited TSA PreTM throughput. TSA collaborated with stakeholders in various ways regarding how TSA PreTM works, including presenting information about TSA PreTM at industry conferences and events, holding monthly meetings with air carriers and airport authorities, and conducting stakeholder briefings with air carriers and airport authorities when implementing TSA PreTM at airports. In addition, TSA officials stated that they have provided PowerPoint presentations, reference materials, and promotional materials such as advertisements to air carrier and industry stakeholders. TSA officials said that they have also worked with a number of airports to set up tables and information booths where passengers can obtain information about TSA PreTM and expedited screening. Further, officials stated that TSA has updated its website and provided TSA PreTM information via the MyTSA app to inform passengers about how to enroll in the TSA PreTM Application Program, which air carriers are participating in TSA PreTM, and which airports have dedicated TSA PreTM screening lanes.
TSOs informally communicate information about TSA PreTM at airports by explaining to passengers waiting in the screening queue how they could enroll in the TSA PreTM Application Program to be eligible for expedited screening. Stakeholders told us that TSA coordinated with them at the local level to implement dedicated TSA PreTM lanes. For example, one airport authority we spoke with stated that TSA held regular meetings with airport management and with the air carriers and included them in the implementation process for opening expedited screening lanes. In addition, representatives from two of the six industry associations that represent passenger groups and that we interviewed noted that TSA’s coordination with them was effective and that TSA gave the associations the opportunity to provide input to TSA on how the program was first implemented. According to these industry associations, TSA met with them to encourage the associations to advertise expedited screening and TSA PreTM to members. Further, three of five air carriers said that they worked closely with TSA officials at the local airport level and provided input to TSA on issues like where to place TSA PreTM lanes within the airport. Eight of the 16 stakeholders that we interviewed stated that TSA could do a better job of communicating to passengers details about expedited screening in dedicated TSA PreTM lanes. For example, representatives from air carriers and industry associations stated that TSA could better inform passengers about (1) the TSA PreTM expedited screening eligibility requirements, including the fact that expedited screening is not always guaranteed even when a passenger is on one of the TSA PreTM lists; (2) how to know if they are eligible for expedited screening on a flight on a given day; and (3) the divestiture requirements and procedures in TSA PreTM screening lanes, among other things. In addition, officials from one airport authority noted that they observe passengers who are confused and improperly using expedited screening and stated that passengers who travel infrequently could be better educated about TSA PreTM. Furthermore, at two of the six airports we visited we observed customers divesting liquids, laptops, shoes, and outerwear in TSA PreTM screening lanes, which we noted caused the throughput to decrease and wait times to increase for other passengers in the TSA PreTM expedited screening lanes. TSA officials stated that it takes time to train passengers about the expedited screening process and that some confusion on the screening procedures is to be expected as passengers are retrained on this process after becoming accustomed to the security measures instituted since the September 11, 2001, terrorist attacks. TSA tracks customers’ experiences using the TSA Contact Center, where passengers can call or e-mail TSA officials about passenger screening experiences or ask TSA staff for information about security screening. Contact Center staff record details of each call and label the calls regarding TSA PreTM as compliments, complaints, or information requests about the program. TSA has collected the number of calls received regarding TSA PreTM since October of 2011, reviews these data weekly, and produces a weekly report on the call data. Our analysis of Contact Center data shows that each month since October 2011, passengers have submitted between 303 and 4,211 information requests about TSA PreTM screening lanes. 
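The weekly and monthly tallies described above amount to grouping call records by time period and by category. The following minimal sketch uses invented records; the record layout and category labels are our assumptions, not the Contact Center’s actual data structure, and it is meant only to illustrate the kind of tabulation that underlies such a trend analysis.

```python
from collections import Counter

# Invented call records: (year-month, category). The real Contact Center data
# layout is not described in this report; these values are stand-ins.
calls = [
    ("2013-10", "information request"),
    ("2013-10", "complaint"),
    ("2013-10", "information request"),
    ("2013-11", "compliment"),
    ("2013-11", "information request"),
]

# Tally calls by month and category, the basic tabulation behind a
# month-over-month trend analysis of compliments, complaints, and
# information requests.
by_month_and_category = Counter(calls)
for (month, category), count in sorted(by_month_and_category.items()):
    print(month, category, count)
```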
As shown in table 1, as of April 2014 TSA had received over 97,000 calls regarding TSA PreTM since October of 2011. As of July 2014, TSA officials stated they are beginning to analyze trends and patterns from the Contact Center data in order to determine the effectiveness of their advertising and messaging to passengers, and to identify potential ways to make improvements. The Office of the Chief Risk Officer has developed a trend analysis to record the number of TSA PreTM-related calls the Contact Center received per month since October of 2012. The trend analysis also tracks the types of calls received over time to identify changes in the number of compliments, complaints, or information requests and highlights trends relative to events such as the start up of TSA PreTM Risk Assessments in October 2013 and TSA’s roll-out of the TSA PreTM Application Program in December 2013. While these efforts could address concerns raised by stakeholders, they are still in the early stages, so it is too soon to determine whether TSA’s planned actions will effectively address these concerns. TSA determines a passenger’s eligibility for or opportunity to experience expedited screening at the airport using one of three risk assessment methods. These include (1) inclusion on a TSA PreTM list of known travelers, (2) identification of passengers as low risk by TSA’s Risk Assessment algorithm, or (3) a real-time threat assessment at the airport using the Managed Inclusion process. TSA has determined that the individuals included on the TSA PreTM lists of known travelers are low risk by virtue of their membership in a specific group or based on group vetting requirements. For example, TSA determined that members of the Congressional Medal of Honor Society, a group whose members have been awarded the highest U.S. award for valor in action against enemy forces, present a low risk to transportation security and are good candidates to receive expedited screening. In other cases, TSA determined that members of groups whose individual members have undergone a security threat assessment by the federal government, such as individuals working for agencies in the intelligence community and who hold active Top Secret/Sensitive Compartmentalized Information clearances, are low risk and can be provided expedited screening. Similarly, TSA designated all active and reserve service members of the United States armed forces, whose combined members total over 2 million people, as a low risk group of trusted travelers. TSA determined that active duty military members were low risk and good candidates to receive expedited screening because the Department of Defense administers common background checks of its members. Except for those who joined through the TSA Pre™ Application program, the TSA Pre™ lists include populations for which TSA coordinated with a lead agency or outside entity willing to compile and maintain the lists. TSA has entered into separate agreements with the various agencies and entities to administer these lists. Generally, according to these agreements, Secure Flight has responsibility for receiving and processing the lists, but the originating agencies or entities are to maintain them by ensuring that individuals continue to meet the criteria for inclusion and to update the lists as needed. TSA also continues to provide expedited screening on a per-flight basis to the almost 1.5 million frequent fliers who opted to participate in the TSA Pre™ program pilot. 
According to TSA, this group of eligible frequent fliers met the standards set for the pilot based on their frequent flier status as of October 1, 2011. TSA determined that these frequent fliers were an appropriate population to include in the program for several reasons, including the fact that frequent fliers are vetted against various watchlists each time they travel to ensure that they are not listed as known or suspected terrorists and are screened at the checkpoint. The TSA PreTM Risk Assessment program evaluates passenger risk based on certain information available for a specific flight and determines the likelihood that passengers will be designated as eligible to receive expedited screening through TSA PreTM. Beginning in 2011, TSA piloted the process of using the Secure Flight system to obtain Secure Flight Passenger Data from air carriers and other data to assess whether the passenger is low risk on a per-flight basis and thus eligible to receive a TSA PreTM designation on his or her boarding pass, which gives the flier access to expedited screening. In September 2013, after completing this pilot, TSA decided to explore expanding this risk assessment approach to every traveler. In order to develop the set of low-risk rules to determine passengers’ relative risk, TSA formed an Integrated Project Team consisting of officials from the Offices of Security Operations, Intelligence and Analysis, Security Capabilities, and Risk-Based Security. The team used data from multiple sources, including passenger data from the Secure Flight system from calendar year 2012, to derive a baseline level of relative risk for the entire passenger population. Our review of TSA’s documentation showed that TSA considered the three elements of risk assessment—Threat, Vulnerability, and Consequence—in its development of the risk assessment. These three elements constitute the framework for assessing risk as called for in the Department of Homeland Security’s National Infrastructure Protection Plan. TSA worked with a contractor to evaluate the data elements and the proposed risk model rules used for the baseline level of relative risk. In its assessment of the algorithm used for the analysis, the contractor agreed with TSA’s analysis of the relationship between the data elements and the relative risk assigned to the data elements. Although TSA determined that certain combinations of data elements in its risk-based algorithm are less likely to include unknown potential terrorists, it also noted that designating passengers as low risk based solely on the algorithm carries some risk. To mitigate these risks, TSA uses a random exclusion factor that places passengers, even those who are otherwise eligible for expedited screening, into standard screening a certain percentage of the time. TSA adjusts the level of random exclusion based on the relative risk of the combinations of various data elements used in the algorithm, such that data combinations carrying more risk are randomly excluded from expedited screening more often than other data combinations. For example, TSA’s assessment indicated that combinations of certain data elements are considered relatively more risky than other data groups, and passengers who fit this profile for a given flight should seldom be eligible for expedited screening, while combinations of other data on a given flight pose relatively less risk and could therefore be made eligible for expedited screening most of the time. A simplified illustration of this scoring and random exclusion logic follows.
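The sketch below is illustrative only and is not TSA’s implementation: the tiers, rates, threshold, and names are invented for the example, because the actual data elements, rules, and exclusion rates are sensitive and are not described in this report.

```python
import random

# Hypothetical exclusion rates: combinations of data elements judged relatively
# more risky are randomly excluded from expedited screening more often.
EXCLUSION_RATE_BY_TIER = {
    "lower_risk_combination": 0.05,   # eligible most of the time
    "higher_risk_combination": 0.90,  # seldom eligible
}

def prescreen_result(on_known_traveler_list, risk_score, risk_tier,
                     score_threshold=0.8, rng=random):
    """Return 'TSA_PRE' or 'STANDARD' for one passenger on one flight.

    Watchlist matching (No Fly, Selectee) happens separately in Secure Flight
    and is omitted here. A passenger can qualify through known traveler status
    or by scoring above a (hypothetical) threshold; a random exclusion factor
    then places some otherwise-eligible passengers back into standard screening.
    """
    eligible = on_known_traveler_list or risk_score >= score_threshold
    if eligible and rng.random() >= EXCLUSION_RATE_BY_TIER[risk_tier]:
        return "TSA_PRE"
    return "STANDARD"

# Over many flights, the lower-risk combination is designated TSA Pre far more
# often than the higher-risk combination, even with the same score.
random.seed(0)
for tier in EXCLUSION_RATE_BY_TIER:
    results = [prescreen_result(False, 0.9, tier) for _ in range(10_000)]
    print(tier, round(results.count("TSA_PRE") / len(results), 2))
```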
TSA developed a risk algorithm that scores each passenger on each flight, and passengers with a high enough score receive a TSA PreTM boarding pass designation, making them eligible for expedited screening for that trip. For both the TSA PreTM known traveler lists and the TSA PreTM Risk Assessments, TSA uses the Secure Flight system to determine passengers’ risk levels and to assign the TSA PreTM designation to low-risk passengers’ boarding passes. Air carriers collect each passenger’s name and date of birth when the passenger books travel. When a passenger has a known traveler number, he or she may enter the number into the air carrier reservation system when booking travel. The air carrier sends this passenger information, along with the travel itinerary, to the Secure Flight system 72 hours before the passenger’s scheduled air travel. The Secure Flight system checks passenger data against watchlists and TSA PreTM lists, and runs the data through the low-risk rules in the TSA PreTM Risk Assessment algorithm, including applying the random exclusion rate to some passengers. Secure Flight then directs the air carrier to mark a passenger’s boarding pass for enhanced screening or expedited screening, or to identify a passenger as prohibited from boarding an aircraft or for standard screening. Figure 6 illustrates the process by which TSA uses Secure Flight to determine the boarding pass result. Managed Inclusion is designed to provide expedited screening to passengers not deemed low risk prior to arriving at the airport. TSA uses Managed Inclusion as a tool to direct passengers who are not on a TSA PreTM list or designated as eligible for expedited screening via the TSA PreTM Risk Assessments into the expedited screening lanes to increase passenger throughput in these lanes when the volume of TSA PreTM-eligible passengers is low. In addition, Managed Inclusion was developed to improve the efficiency of dedicated TSA PreTM screening lanes as well as to help TSA reach its internal goal of providing expedited screening to at least 25 percent of passengers by the end of calendar year 2013. TSA randomly selects passengers to enter the Managed Inclusion queue using a randomizer device that directs a certain percentage of passengers not previously designated that day as eligible for expedited screening to the TSA PreTM expedited screening lane. To screen passengers who have been randomly directed into the expedited screening lane, TSA uses real-time threat assessments, including combinations of BDOs, canine teams, and ETD devices, to ensure that passengers do not exhibit high-risk behaviors or otherwise present a risk at the airport. According to TSA, it designed the Managed Inclusion process using a layered approach to provide security when providing expedited screening to passengers via Managed Inclusion. Specifically, the Office of Security Capabilities’ proof of concept design noted that the Managed Inclusion process was designed to provide a more rigorous real-time threat assessment layer of security when compared to standard screening or TSA PreTM screening. According to the design concept, this real-time threat assessment, utilizing both BDOs and canine teams, allows TSA to provide expedited screening to passengers who have not been designated as low risk without decreasing overall security effectiveness.
These layers include (1) the Secure Flight vetting TSA performs to identify high-risk passengers required to undergo enhanced screening at the checkpoint and to ensure these passengers are not directed to TSA PreTM expedited screening lanes, (2) a randomization process that TSA uses to include passengers who otherwise were not eligible for expedited screening into TSA Pre™ screening lanes, (3) BDOs who observe passengers and look for certain high-risk behaviors, (4) canine teams and ETD devices that help ensure that passengers have not handled explosive materials prior to travel, and (5) an unpredictable screening process in which walk-through metal detectors in expedited screening lanes randomly select a percentage of passengers for additional screening. When passengers approach a security checkpoint that is operating Managed Inclusion, they approach a TSO who is holding a randomizer device, typically an iPad, that directs the passenger to the expedited or standard screening lane. TSA officials stated that the randomization layer of security is intended to ensure that passengers cannot count on being screened in the expedited screening lane even if they use a security checkpoint that is operating Managed Inclusion. FSDs can adjust the percentage of passengers randomly sent into the Managed Inclusion lane depending on specific risk factors. Figure 7 illustrates how these layers of security operate when FSDs use Managed Inclusion lanes. According to TSA, it designed the Managed Inclusion process to use BDOs stationed in the expedited screening lane, when Managed Inclusion is operational, as one of its layers of security to observe passengers’ behavior as they move through the security checkpoint queue. When BDOs observe certain behaviors that indicate a passenger may be higher risk, the BDOs are to refer the passenger to a standard screening lane so that the passenger can be screened using standard or enhanced screening procedures. We have conducted past work on TSA’s behavior detection analysis program, including the Screening of Passengers by Observation Techniques (SPOT) program, which BDOs use to identify potential high-risk passengers. In our November 2013 report, we reported that although TSA has taken several positive steps to validate the scientific basis and strengthen program management of behavior detection analysis and the SPOT program, TSA has not demonstrated that BDOs can reliably and effectively identify high-risk passengers who may pose a threat to the U.S. aviation system. In that report, we recommended that the Secretary of Homeland Security direct the TSA Administrator to limit future funding support for the agency’s behavior detection activities until TSA can provide scientifically validated evidence that demonstrates that behavioral indicators can be used to identify passengers who may pose a threat to aviation security. The Department of Homeland Security did not concur with this recommendation; however, in August 2014, TSA noted that it is taking actions to optimize the effectiveness of its behavior detection program and plans to begin testing this effort in October 2014. According to a TSA decision memorandum and its accompanying analysis, TSA uses canine teams and ETD devices at airports as an additional layer of security when Managed Inclusion is operational to determine whether passengers may have interacted with explosives prior to arriving at the airport.
In airports with canine teams, passengers must walk past a canine and its handler in an environment where the canine is trained to detect explosive odors and to alert the handler when a passenger has any trace of explosives on his or her person. For example, passengers in the Managed Inclusion lane may be directed to walk from the travel document checker through the passageway and past the canine teams to reach the X-ray belt and the walk-through metal detector. According to TSA documents, the canines, when combined with the other layers of security in the Managed Inclusion process provide effective security. According to TSA, it made this determination by considering the probability of canines detecting explosives on passengers, and then designed the Managed Inclusion process to ensure that passengers would encounter a canine a certain percentage of the time. Our prior work examined data TSA had on its canine program, what these data show, and to what extent TSA analyzed these data to identify program trends. Further we analyzed the extent to which TSA deployed canine teams using a risk-based approach and determined their effectiveness prior to deployment. As a result of this work, we recommended in January 2013, among other things, that TSA take actions to comprehensively assess the effectiveness of canine teams. The Department of Homeland Security concurred with this recommendation and has taken steps to address it. Specifically, according to TSA canine test results, TSA has conducted work to assess canine teams and to ensure they meet the security effectiveness thresholds TSA established for working in the Managed Inclusion lane; and the canines met these thresholds as a requirement to screen passengers in managed inclusion lanes. In those airports where canines are unavailable, TSA uses ETD devices as a layer of security when operating Managed Inclusion. TSOs stationed at the ETD device are to select passengers to have their hands swabbed as they move through the expedited screening lane. TSOs are to wait for a passenger to proceed through the Managed Inclusion queue and approach the device, where the TSO is to swab the passenger’s hands with an ETD pad and place the pad in the ETD device to determine whether any explosive residue is detected on the pad. Once the passenger who was swabbed is cleared, the passenger then proceeds through the lane to the X-ray belt and walk-through metal detector for screening. TSA procedures require FSDs to meet certain performance requirements when ETD devices are operating, and TSA data from January 1, 2014, through April 1, 2014, show that these requirements were not always met. Beginning in May 2014, TSA’s Office of Security Operations began tracking compliance with the ETD swab requirements and developed and implemented a process to ensure that the requirements are met. According to TSA, it uses unpredictable screening procedures as an additional layer of security after passengers who are using expedited screening pass through the walk-through metal detector. This random selection of passengers for enhanced screening after they have passed all security layers TSA uses for Managed Inclusion provides one more chance for TSA to detect explosives on a passenger. TSA officials stated that they tested the security effectiveness of the individual components of the Managed Inclusion process before implementing Managed Inclusion, and determined that each layer alone provides an effective level of security. 
For example, TSA tested the threat detection ability of its canines using a variety of variables, such as concealment location and the length of time the item was concealed prior to the encounter with the canine team. We did not evaluate the security effectiveness testing TSA conducted on the individual layers of the Managed Inclusion process. However, we have previously conducted work on several of the layers used in the Managed Inclusion process, including BDOs, ETD devices, and canine teams, and raised concerns regarding their effectiveness and recommended actions to address those concerns. As discussed earlier in this report, TSA has made progress in addressing those recommendations. TSA determined through the initial testing of the Managed Inclusion layers that Managed Inclusion provides a higher level of security than TSA baseline security levels. In addition, according to TSA standard operating procedures, Managed Inclusion passengers are more likely than other passengers to be screened for explosives. TSA officials stated that they have not yet tested the security effectiveness of the Managed Inclusion process as it functions as a whole, although TSA has been planning for such testing over the course of the last year. TSA documentation shows that the Office of Security Capabilities recommended in January 2013 that TSA test the security effectiveness of Managed Inclusion as a system. According to officials, TSA anticipates that testing will begin in October 2014 and estimates that testing could take 12 to 18 months to complete. However, TSA could not provide us with specifics or a plan or documentation showing how the testing is to be conducted, the locations where it is to occur, how those locations are to be selected, or the timeframes for conducting testing at each location. Testing the security effectiveness of the Managed Inclusion process is consistent with federal policy, as laid out in Executive Order 13450—Improving Government Program Performance. We have previously reported on challenges TSA has faced in designing studies and protocols to test the effectiveness of security systems and programs in accordance with established methodological practices. For example, in our March 2014 assessment of TSA’s acquisition of Advanced Imaging Technology, we found that TSA conducted operational and laboratory tests but did not evaluate the performance of the entire system, which is necessary to ensure that mission needs are met. A key element of evaluation design is to define purpose and scope, that is, to establish what questions the evaluation will and will not address. Further, in November 2013, we identified methodological weaknesses in the overall design and data collection of TSA’s April 2011 validation comparison study to determine the effectiveness of the SPOT program. For example, we found that TSA did not randomly select airports to participate in the study, so the results were not generalizable across airports. In addition, we found that TSA collected the validation study data unevenly and experienced challenges in collecting an adequate sample size for the randomly selected passengers, facts that might have further affected the representativeness of the findings. According to established evaluation design practices, data collection should be sufficiently free of bias or other significant errors that could lead to inaccurate conclusions. Two such practices, randomly selecting study locations and ensuring an adequate sample size, are illustrated below.
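The following sketch applies these two practices to hypothetical inputs. It is a simplified illustration under our own assumptions, not a prescription for how TSA should design its Managed Inclusion testing; the airport codes, margin of error, and confidence level are invented for the example.

```python
import math
import random

def sample_size_for_proportion(margin_of_error, z=1.96, p=0.5):
    """Minimum sample size for estimating a proportion at about 95 percent
    confidence: n = z^2 * p * (1 - p) / e^2. Using p = 0.5 gives the most
    conservative (largest) requirement."""
    return math.ceil(z ** 2 * p * (1 - p) / margin_of_error ** 2)

# Randomly selecting study locations (hypothetical airport codes here) supports
# generalizing results beyond the specific airports tested.
candidate_airports = ["AAA", "BBB", "CCC", "DDD", "EEE", "FFF", "GGG", "HHH"]
random.seed(42)
study_airports = random.sample(candidate_airports, k=3)

print(study_airports)
print(sample_size_for_proportion(margin_of_error=0.05))  # 385 observations
```

A complete design would also specify the research questions, the outcome measures, and the test protocol at each location, which is why the absence of a documented plan matters.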
Ensuring its planned effectiveness testing of the Managed Inclusion process adheres to established evaluation design practices will help TSA provide reasonable assurance that the effectiveness testing will yield reliable results. The specific design limitations we identified in TSA’s previous studies of Advanced Imaging Technology and SPOT may or may not be relevant design issues for an assessment of the effectiveness of the Managed Inclusion process, as evaluation design necessarily differs based on the scope and nature of the question being addressed. In general, evaluations are most likely to be successful when key steps are addressed during design, including defining research questions appropriate to the scope of the evaluation, and selecting appropriate measures and study approaches that will permit valid conclusions. TSA has two goals and one measure intended to assess the performance of its expedited screening programs, but our analysis shows that the goals and measure are not aligned. Also, TSA has estimated savings from expedited screening and has reduced its fiscal year 2015 budget request by the amount of the estimated savings. TSA has two goals intended to assess the performance of its expedited screening programs and uses one measure to track progress toward these goals. However, our analysis shows that the measure used does not align with the goals. Specifically, TSA’s program goals are to ensure: (1) that 25 percent of air passengers were eligible for expedited screening by the end of calendar year 2013; and (2) that 50 percent of passengers are eligible for expedited screening by the end of calendar year 2014. According to TSA documents, TSA uses one measure—the total number of air passengers screened daily using expedited screening as a percentage of the total number of passengers screened daily—to assess progress towards these goals. TSA collects data for this measure by reporting, not the number of passengers designated as eligible for expedited screening, but the number of passengers who actually receive such screening. As noted earlier in this report, because expedited screening is voluntary, not all passengers who are eligible necessarily use expedited screening. For example, a passenger may be traveling with a group in which not all passengers in the group are eligible for expedited screening, so the passenger may choose to forgo expedited screening. Also, TSA may not use boarding pass scanners at the airport or at a specific checkpoint, which would preclude TSA from offering expedited screening at the airport to passengers with TSA PreTM boarding passes who were not otherwise eligible for expedited screening due to the age of the passenger. As a result, the information that TSA is reporting to show that it is meeting its goal may be understated and inaccurate. TSA’s Chief Risk Officer agreed that the goals and the measure are not linked, but said that tracking actual screening data rather than eligibility data presents a more accurate picture of the expedited screening program performance. Specifically, the Chief Risk Officer noted that TSA’s definition of “eligible” is broader than passengers who receive the TSA PreTM designation on the boarding pass and that eligible passengers also include those individuals who have opted into the TSA PreTM program as members of one of the TSA PreTM lists, and passengers who fly on one of the participating TSA PreTM air carriers who could become eligible through the TSA PreTM Risk Assessment process. 
He stated that broadly measuring the number of passengers eligible, rather than those who receive a TSA PreTM boarding pass result (making them eligible for expedited screening), would overstate progress toward the eligibility goal. According to TSA, the Administrator set an internal TSA target in September 2012 that 25 percent of daily travelers screened by TSA were to receive some form of expedited screening by the end of calendar year 2013 and 50 percent by the end of calendar year 2014, and did so to encourage the development and expansion of TSA’s risk-based security initiatives and expedited screening of passengers. TSA also noted that it began achieving its goal of providing expedited screening to 25 percent of passengers in November 2013 and attributed reaching this goal to an increase in the number of participating air carriers from seven in July 2013 to nine in November 2013, an increase in the number of airports where TSA PreTM dedicated screening lanes are available to over 100 airports in October 2013, the implementation of TSA PreTM Risk Assessments on a flight-by-flight basis in October 2013, and the increased use of Managed Inclusion in October 2013. While we agree that tracking actual screening data may provide insights about expedited screening program performance, ensuring that the goals and measure are aligned is important to provide more accurate performance measurement data to guide program performance. Best practices regarding the key attributes of successful performance measurement state that performance measures should link and align with agency-wide goals and that the mission should be clearly communicated throughout the organization. Furthermore, in response to a requirement of the Department of Homeland Security Appropriations Act, 2014, TSA submitted a report to the Committees on Appropriations of the Senate and House of Representatives certifying that one in four (or 25 percent of) passengers are eligible for expedited screening without lowering security standards and outlined a strategy to increase the number of passengers eligible for expedited screening to 50 percent by the end of calendar year 2014. Aligning TSA’s measures and goals could help ensure that TSA, as well as lawmakers, has a clear and accurate picture of the program’s performance and can make improvements as needed. TSA’s fiscal year 2015 budget request includes savings noted as a budget decrease resulting from efficiencies anticipated from risk-based security initiatives, including expedited screening. In its fiscal year 2015 budget request, TSA estimated savings of $100 million as a result of risk-based security initiatives and primarily based these savings on an expected reduction in screening staff at airports. Specifically, TSA estimated that it could reduce staff costs by about $92 million (or 1,441 full-time-equivalent staff), with additional savings of almost $8 million realized from an associated reduction in indirect costs such as training, information technology support, and recruitment, among other expenses. According to TSA officials, implementing expedited screening allows for staff reductions because TSA is able to operate fewer screening lanes while maintaining throughput rates and short wait times. Also, TSA officials noted that these staff reductions are to be realized through staff attrition. Table 2 shows TSA’s estimated cost savings as a result of risk-based security initiatives; the simple arithmetic behind the estimate is sketched below.
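The sketch restates that arithmetic and also derives an implied average cost per full-time-equivalent position; the per-position figure is our own back-of-the-envelope calculation from the rounded amounts above, not a number TSA reports.

```python
# Figures as reported above (rounded: "about $92 million" and "almost $8 million").
staff_savings = 92_000_000       # from a reduction of 1,441 full-time-equivalent staff
indirect_savings = 8_000_000     # training, IT support, recruitment, and similar costs
full_time_equivalents = 1_441

total_savings = staff_savings + indirect_savings
implied_cost_per_fte = staff_savings / full_time_equivalents  # derived here, not reported by TSA

print(f"total estimated savings: ${total_savings / 1e6:.0f} million")        # $100 million
print(f"implied average cost per FTE: about ${implied_cost_per_fte:,.0f}")   # about $63,845
```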
Identifying potential cost savings and including these savings amounts in proposed budgets is consistent with Office of Management and Budget federal budgeting guidance. Specifically, for the fiscal year 2014 budget cycle, the Office of Management and Budget instructed agencies to identify programs where legislative, budget, or administrative changes could improve program effectiveness and efficiency and result in cost savings so that the deficit reductions included in the Budget Control Act of 2011 could be realized. Further, the Office of Management and Budget’s fiscal year 2015 budget guidance instructed agencies to continue to identify areas where cost savings might be realized, consistent with the fiscal year 2014 budget guidance. In addition, identifying potential savings from program efficiencies in spending estimates is consistent with Office of Management and Budget guidance for developing budget submissions and included in the instructions provided to federal agencies for preparing annual budgets. We compared TSA’s savings estimates with the Office of Management and Budget guidance for preparing agency budgets and found that TSA’s estimates were consistent with this guidance. Specifically, the guidance states that agencies’ budgets must reflect all requirements anticipated by the agency at the time of the budget submission, including budget decreases for activities proposed for reduction. TSA’s new methods to assess passenger risk, such as TSA PreTM Risk Assessments and Managed Inclusion, have significantly increased the use of expedited screening. As a result, it will be important for TSA to evaluate the security effectiveness of the Managed Inclusion process as a whole, to ensure that it is functioning as intended and that passengers are being screened at a level commensurate with their risk. According to TSA, this testing is scheduled to begin in October 2014 and could take 12 to 18 months to complete. Ensuring that its planned effectiveness testing of the Managed Inclusion process adheres to established evaluation design practices will help TSA provide reasonable assurance that the effectiveness testing yields reliable results. Regarding overall program performance, although TSA collects information to assess the performance of its expedited screening programs, its assessment could be improved because the performance goals and the measures used to evaluate those goals are currently not aligned. Aligning TSA’s measures and goals could help ensure that TSA, as well as lawmakers, has a clear and accurate picture of the program’s performance and can make improvements as needed. GAO is recommending that TSA take the following two actions. To ensure that TSA’s planned testing yields reliable results, we recommend that the TSA Administrator take steps to ensure that TSA’s planned effectiveness testing of the Managed Inclusion process adheres to established evaluation design practices. To ensure that TSA has accurate information by which to measure the performance of its expedited screening programs, we recommend that the TSA Administrator ensure that the expedited screening performance goals and measures align. We provided DHS with a copy of this report for review and comment. On November 19, 2014, DHS provided written comments, which are summarized below and reproduced in full in appendix II. DHS concurred with our two recommendations and described actions under way or planned to address them. 
In addition, DHS provided written technical comments, which we incorporated into the report as appropriate. DHS concurred with our first recommendation that TSA take steps to ensure that its planned effectiveness testing of the Managed Inclusion process adheres to established evaluation practices. DHS stated that TSA plans to use a test and evaluation process—which calls for the preparation of test and evaluation framework documents, including plans, analyses, and a final report describing the test results—for its planned effectiveness testing of Managed Inclusion. In addition, TSA has begun collaborating with National Institute of Standards and Technology statistical engineering staff, who are to share their knowledge and experience with TSA as well as assist with the design, analysis, and reporting of the test and evaluation process. These actions, if implemented effectively, should address the intent of our recommendation. DHS concurred with our second recommendation that TSA ensure that the expedited screening performance goals and measure align. According to DHS, all measures are approved by DHS and the Office of Management and Budget, and changes must be approved by both agencies. In June 2014, TSA began working with both agencies to modify the measure to ensure that it aligned with the expedited screening goals. This action, once completed, should address the intent of our recommendation. We are sending copies of this report to the appropriate congressional committees, the Secretary of the Department of Homeland Security, and other interested parties. In addition, this report will be made publicly available at no charge on the GAO website at http://www.gao.gov. Should you or your staff have any questions about this report, please contact Jennifer A. Grover at 202-512-7141 or GroverJ@gao.gov. Key contributors to this report are acknowledged in appendix III. In addition to the contact named above, Glenn Davis (Assistant Director), Ellen Wolfe (Analyst-in-Charge), Chuck Bausell, Eric Hauswirth, Suling Homsy, Susan Hsu, Brendan Kretzschmar, Stanley Kostyla, Thomas Lombardi, Linda Miller, and Jean Orland made key contributions to this report.

TSA screens or oversees the screening of more than 650 million passengers annually at more than 450 U.S. airports. In 2011, TSA began providing expedited screening to selected passengers as part of its overall emphasis on risk-based security. Specifically, by determining passenger risk prior to travel, TSA intended to focus screening resources on higher-risk passengers while expediting screening for lower-risk passengers. GAO was asked to determine how TSA implemented and expanded expedited screening via TSA Pre✓™. This report examines, among other things, (1) how TSA has developed, implemented, and used expedited screening, (2) how TSA assesses passenger risk, and (3) the extent to which TSA has determined the Managed Inclusion system’s effectiveness. GAO analyzed TSA procedures and data from October 2011 through January 2014 on expedited screening and interviewed officials at TSA, airport authorities, air carriers, and industry associations about expedited screening.
Since the Transportation Security Administration (TSA) implemented its expedited screening program—known as TSA Pre✓™—in 2011, the number of passengers receiving expedited screening grew slowly and then increased about 300 percent in October 2013, when TSA expanded its use of methods to increase passenger participation, such as conducting automated risk assessments of all passengers. In conducting these assessments, TSA assigns passenger scores based upon information available to TSA to identify low-risk passengers eligible for expedited screening for a specific flight prior to the passengers’ arrival at the airport. To assess whether a passenger is eligible for expedited screening, TSA considers (1) inclusion on an approved TSA Pre✓™ list of known travelers; (2) results from the automated risk assessments of all passengers; and (3) threat assessments of passengers conducted at airport checkpoints, known as Managed Inclusion. Managed Inclusion uses several layers of security, including procedures that randomly select passengers for expedited screening, behavior detection officers who observe passengers to identify high-risk behaviors, and either passenger screening canine teams or explosives trace detection devices to help ensure that passengers selected for expedited screening have not handled explosive material. Prior to Managed Inclusion’s implementation, TSA relied primarily on approved lists of known travelers to determine passenger eligibility for expedited screening. TSA has tested the effectiveness of individual Managed Inclusion security layers and determined that each layer provides effective security. GAO has previously conducted work on several of the layers used in the Managed Inclusion process, raising concerns regarding their effectiveness and recommending actions to TSA to strengthen them. For example, in January 2013, GAO recommended that TSA take actions to comprehensively assess the effectiveness of canine teams. TSA subsequently addressed this recommendation by conducting the assessment. In October 2014, TSA planned to begin testing Managed Inclusion as an overall system, but could not provide specifics or a plan or documentation showing how the testing is to be conducted, the locations where it is to occur, how these locations are to be selected, or the timeframes for conducting testing at each location. Moreover, GAO has previously reported on challenges TSA has faced in designing studies to test the security effectiveness of its other programs in accordance with established methodological practices, such as ensuring an adequate sample size or randomly selecting items in a study to ensure the results can be generalizable—key features of established evaluation design practices. Ensuring its planned testing of the Managed Inclusion process adheres to established evaluation design practices will help TSA provide reasonable assurance that the testing will yield reliable results. This is a public version of a sensitive report that GAO issued in September 2014. Information that the Department of Homeland Security deemed sensitive has been removed. GAO recommends that TSA take steps to ensure and document that its planned testing of the Managed Inclusion system adheres to established evaluation design practices, among other things. DHS concurred with GAO's recommendations.
In 1991, HRSA announced that it would fund 10 Healthy Start sites and issued guidance on how communities could obtain a grant. By July 1991, HRSA had received 40 applications, and in September of that year, it began funding 15 communities for a 5-year demonstration project. In 1996, funding for these communities was extended for a sixth year. In 1994, HRSA began funding seven new communities—called special projects—and funding for these was also extended in 1996 for an additional year. Forty-one additional communities have been awarded grants since 1997, and these now share funding with the 15 original sites and 5 of the special projects judged by HRSA to have been successful. To be eligible for the original grants, a community had to have an average annual infant mortality rate of at least 1.5 times the national average between 1984 and 1988—that is, 15.7 deaths per 1,000 live births—and at least 50 but no more than 200 infant deaths per year. Applicants had to be local or state health departments, other publicly supported provider organizations, tribal organizations, private nonprofit organizations, or consortia of these organizations. HRSA required only a few specific activities of all sites to provide grantees flexibility to make their projects relevant to local circumstances. Healthy Start’s principal goal to reduce infant mortality has usually been stated as a 50-percent reduction in infant mortality, attributable to the program, over 5 years. Healthy Start also aims to achieve improvements in other outcomes—such as reductions in low birthweight, improved maternal health, and increased community awareness of threats to infant health—that are expected to help reduce infant mortality. In addition, Healthy Start was designed to demonstrate how a program based on innovation, community commitment and involvement, increased access to care, service integration, and personal responsibility could work in a variety of locations with high infant mortality. From fiscal year 1991 (a planning year that preceded the 5-year demonstration), through fiscal year 1998, program funding for Healthy Start has totaled more than $600 million. Healthy Start’s fiscal year 1992 funding was less than half of what was initially proposed, and the number of grantees was greater. Instead of $171 million being spread over 10 sites, funding for the first year of the demonstration was $64 million spread over 15 sites. In 1997, HRSA concluded the demonstration phase of Healthy Start and began the “replication phase” in 40 (now 41) new sites. In addition to providing Healthy Start services in their own communities, the established Healthy Start communities—the original 15 sites and 5 of the special projects—are mentoring several of the new sites. While the new sites receive, on average, somewhat less funding than the established sites, funding is shared among all sites. In September 1993, HRSA contracted with MPR to conduct the national evaluation of the Healthy Start program. This is currently funded with about $4.8 million, paid from the 1-percent set-aside for evaluation of health programs. The original contract called for MPR to evaluate the first 4 years of the 5-year demonstration program and contained an option for HRSA to request evaluation of the fifth year. In 1995, HRSA exercised that option, and the contract now requires that the evaluation cover all 5 years of the originally planned demonstration. 
Although the demonstration was extended for a year, HRSA currently has no plans to request that MPR evaluate the sixth and final year of the demonstration phase. The national evaluation, focused only on the original 15 sites, is designed to determine whether Healthy Start changed the rate of infant mortality and related outcomes, what factors contributed to any effects the program may have had, and how successful approaches to lessening infant mortality can be replicated in other communities. Although each Healthy Start community is unique and the details of the delivery of any one service may differ across communities, many of the services are common to all sites: outreach and case management; support services, such as transportation and nutrition education; enhancements to clinical services; and public information campaigns. The national evaluation has two major components: an impact evaluation and a process evaluation. The impact evaluation is used to determine whether the infant mortality rates in Healthy Start communities have declined and whether related outcomes have improved. The process evaluation describes how the program actually operates. In its final evaluation report, MPR intends to synthesize these two components, linking outcomes with processes to determine why Healthy Start has or has not succeeded in communities and which strategies are likely to be successful elsewhere. In the fall of 1997, MPR reported some preliminary evaluation results, including a draft interim report on its impact evaluation, which led to press accounts suggesting a variety of interpretations about the success of the Healthy Start program. We believe these preliminary evaluation results were not conclusive. Although the impact evaluation suggested that Healthy Start has not reduced infant mortality, such conclusions about the program would be premature because the impact evaluation does not include data from all the program sites or data from all the years of the program. Moreover, the process evaluation indicates that program implementation in many communities was slow and, therefore, that the impact data analysis may not be representative of a mature Healthy Start program. The national evaluation’s analysis of Healthy Start’s effect on infant mortality and related outcomes is preliminary—MPR characterized its October 1997 report as a draft. Because of problems obtaining data from some of the states’ departments of health, only 9 of the 15 program sites to be evaluated were represented in the analysis. In addition, the analysis is related to only the first 3 of the 6 years of program operation. Moreover, for illustrative purposes, MPR has limited its principal impact analysis to data from only the last of those 3 years, 1994. However, if, as HRSA believes, fiscal year 1995 was the first fully operational year, even 1994 data may not reflect the communities’ mature programs. To determine program impact, MPR is conducting two types of analysis: availability and participation. The availability analysis compares a Healthy Start community and two similar communities without Healthy Start to determine if the presence of the program in a community has an effect on infant mortality and related outcomes. The participation analysis compares, within a Healthy Start community, mothers who were clients of the program and mothers who were not. 
Both analyses can be used to study infant mortality; however, the availability analysis directly addresses the issue of reducing infant mortality in entire communities, while the participation analysis is restricted to outcomes for program participants. The national evaluation’s availability analysis found that for 1994, the overall infant mortality rate in Healthy Start communities was about the same as that in comparison communities. Applied to the individual sites, the analysis found that of the nine Healthy Start communities analyzed, only one experienced a significant reduction in infant mortality relative to its comparison sites. MPR similarly found that the neonatal and postneonatal mortality rates—two components of infant mortality—were not significantly reduced in the Healthy Start communities relative to the comparison sites. In its analysis of birth outcomes considered to be risk factors for infant mortality at eight of the Healthy Start communities, MPR found that in 1994, the low birthweight rate was reduced in only one community, the preterm birth rate was reduced in two other communities, and the rate at which women received adequate or better prenatal care was improved in five communities. None of the analyses of data pooled from all sites yielded significant differences between sites with and without Healthy Start. The national evaluation’s participation analysis of Healthy Start’s effect on infant mortality has not been completed because of problems with data availability. The participation analysis of related outcomes, like the availability analysis, yielded little evidence of program effect in the eight communities analyzed. Participation in Healthy Start was not associated with reductions in low or very low birthweight rates or preterm birth rates. In postpartum interviews with participants and nonparticipants, conducted in 1996, after all sites became fully operational, MPR found that participants were more likely to rate their prenatal care experience highly and to be using birth control. However, no significant differences between participants and nonparticipants were reported for the receipt of services or health behaviors during pregnancy. The national evaluation’s process evaluation is intended to provide a detailed picture of what happened over time when Healthy Start was implemented at the various sites and assess its success in meeting its process objectives, such as hiring and retaining staff and putting the planned program in place. This evaluation, which, according to HRSA’s project officer, is to result in a series of reports, indicates thus far that the Healthy Start program was implemented largely as originally envisioned but more gradually than expected. MPR’s implementation report, a major portion of the process evaluation, provides an overview of program implementation in 14 of the 15 original sites and draws conclusions about these projects. The report includes detailed information on the development of the projects, the barriers to successful implementation, and the gaps between what was planned and what resulted. In addition, the report presents perceptions of variations across projects with respect to a variety of criteria, such as staff stability and consumer participation in the process. It also contains timelines indicating for each site when specific program components became operative. These timelines demonstrate, for example, that only 4 of the 14 sites had all their planned services operational by October 1994. 
The implementation report concludes with lessons learned, which are organized into four categories: community context, organization and administration, community involvement, and service delivery. MPR has also completed a report on the infant mortality review process at the various Healthy Start sites. It indicates that, in general, the review programs are operational but with varying degrees of success in identifying the factors leading to infant mortality in their communities. Two detailed reports on specific interventions are available only in draft form. One describes, in greater detail than the implementation report, program participants, including comparisons of participants and nonparticipants with respect to the use of health and social services, satisfaction with services, and health-related behaviors, such as birth control and breast feeding. The other describes how outreach and case management were delivered, with consideration given to both similarities and differences across sites. Because the impact and process evaluations are not finished, their synthesis has not yet begun. MPR expects to be able to draw conclusions about the program characteristics that are most effective in improving maternal and child health outcomes and the circumstances under which they are most likely to succeed when it integrates the impact and process evaluations. The synthesis of the impact and process components of the evaluation to be presented in the final report will be based on impact data from years one through four of the demonstration. This synthesis may have to be revised when more impact data are available. The final report as currently planned will not be the final evaluation of Healthy Start. The final report will contain an analysis of outcomes through 1995 and a synthesis of this with the findings of the process evaluation reports. Thus, it will assess the program’s impact on infant mortality through the fourth year of the demonstration, not the fifth year as planned. Further, these data will reflect the impact of only 1 or 2 years during which the program was fully operational. An addendum to the report, planned to follow a year later, will contain an updated analysis of outcomes through 1996. The addendum will assess impact on infant mortality through the fifth year of the demonstration and thus will reflect the impact of only 2 or 3 years during which the program was fully operational. However, by evaluating the sixth year of the demonstration, it would be possible to obtain an analysis of 3 or 4 years of impact data from the mature program. Evaluating the sixth year of the demonstration would likely enhance the value of the investment in MPR’s evaluation for several reasons. First, including data from the sixth year of the demonstration would allow evaluation of the years in which all 15 sites have been fully operational. Second, additional data would represent the effects of a more mature and potentially more effective program, which would likely provide more definitive answers about Healthy Start’s success. Third, having more years of data would increase the likelihood of detecting small but real effects of the program. Further, it is possible that data from the more mature years of the program will reflect program impact on the wider Healthy Start community, not just direct participants in program services and activities. 
In addition to hoped-for effects on the pregnant clients of Healthy Start, there may be effects of program services and education on those same women at other times, such as before or early in their next pregnancy; indirect effects on their social network, such as their male partners, friends, and sisters; and indirect effects on the community in general. HRSA’s project officer for the national evaluation notes that the cost associated with analyzing results for an additional year would be about $100,000; this would be inexpensive relative to the total national evaluation cost of about $5 million. Since states collect vital records on births and deaths routinely, funds would be needed only to obtain, analyze, and report on the data and to revise the synthesis of these data with the process evaluation. Since the national evaluation of the Healthy Start program has yet to be completed, preliminary results should not be used to conclude that the program has or has not achieved its goals. HRSA and MPR plan for the “final” report of the national evaluation to include an extensive description of the program, indicate whether it has reduced infant mortality rates at Healthy Start sites, and provide an analysis of how program characteristics have influenced outcomes. However, the final report will analyze infant mortality data from only 4 years of the demonstration. Primarily because implementation of the projects was slower than anticipated, data from the first 4 years of the demonstration may be insufficient for judging the success of Healthy Start in lowering infant mortality. Thus, even the final report will be inconclusive. Analysis of the fifth year of the demonstration, as planned, will help strengthen the evaluation, but this analysis will not reflect as many years of mature program operation as possible. Thus, at a relatively modest cost, MPR’s evaluation would be further strengthened by including data from the sixth and final year of the demonstration. To increase the value of the investment in the national evaluation of Healthy Start, we recommend that HRSA contract with MPR to expand the evaluation to include impact data from the sixth year. In commenting on a draft of this report, HRSA agreed with our findings and indicated that it intends to add funds to the MPR contract to include impact data from the sixth year of the demonstration. HRSA and MPR provided a number of technical comments that we incorporated as appropriate. As arranged with your office, unless you announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. We will then send copies to the Secretary of Health and Human Services, to the Administrator of HRSA, and to others who are interested. We will also make copies available to others on request. Please contact me at (202) 512-7119 if you or your staff have any questions. You may also contact Michele Orza, Assistant Director, at 512-9228, or Donald Keller, Senior Evaluator, at 512-2932. The availability analysis of infant mortality and related birth outcomes is part of Mathematica Policy Research’s (MPR) attempt to determine if Healthy Start has, as intended, reduced infant mortality at program sites, looking at the vital statistics for entire program and comparison areas where, respectively, the program is or is not available. It does this without concern about the participation in the program of specific persons. 
It attempts to separate any change that may occur in outcomes at program sites that is attributable to Healthy Start from change in outcomes at those sites that would have occurred without the program—for example, changes stemming from national trends not related to health and social interventions, such as the persisting decline in infant mortality experienced almost everywhere in the United States. It does this for each outcome of interest, obtained from the state health department’s vital records of linked births and deaths for the sites of interest, by (1) comparing each program site with two comparison sites without Healthy Start, selected (matched) for similarity to the program site with respect to race and ethnicity, infant mortality rate, and trend in infant mortality over the pre-Healthy Start period and (2) statistically adjusting the data for differences between program and comparison site mothers on variables, also obtained from vital records, believed to affect the outcome. To the extent that, as a result of site selection and statistical adjustment, the program and comparison sites do not differ in expected infant mortality rate, the comparison of the program and comparison site adjusted outcomes should be a valid indication of the effectiveness of the program. MPR’s approach involves accepted statistical methods with known limitations. One limitation stems from the possibility that program and comparison site mothers will systematically differ in ways, such as poverty level, that affect outcomes but are not taken into account in the selection of comparison sites and are not available for use in statistically adjusting the data. Such a difference could bias the estimation of the difference between program and comparison sites in outcomes. Nevertheless, MPR appears to have taken reasonable precautions to minimize the likelihood of bias. MPR did this, for example, by using two comparison sites, not just one, for each program site and by avoiding the selection of comparison areas known to have interventions similar to Healthy Start. Further, MPR shared information on its site selections with each of the 15 sites and sought their comments and agreement on the choices. A second potential limitation concerns statistical power: the ability of the analysis to detect real reductions when such differences, in fact, exist. Roughly speaking, power depends on the number of observations—in this case, live births—in the analysis, and it is therefore often a problem when that number is not controlled by the design of the study. In the case of Healthy Start, this implies that the ability to detect a real difference between program and comparison communities depends upon whether the comparison involves, for example, communities relatively small in population, large communities, or all communities pooled. With respect to infant mortality, MPR reports that, using all data from 1984 to 1994, the minimal detectable difference in infant mortality is computed to be 31 percent, 7 percent, and 6 percent for small, large, and all communities, respectively. This means that if there are real differences between Healthy Start and comparison communities, but these differences are smaller than we have the power to detect (that is, smaller than the percentages listed above), then we will mistakenly conclude that the program has no effect on infant mortality. Since power depends on the number of observations, increasing the number of years of data included in the analysis will increase the ability to detect any difference that may exist. 
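To put the power limitation in concrete terms, the sketch below computes an approximate minimal detectable difference for a comparison of infant mortality rates between a program area and an equally sized comparison area. The baseline rate, the birth counts, and the simple normal-approximation formula are illustrative assumptions, not MPR's actual data or method; the point is only that the smallest difference an analysis can reliably detect shrinks as the number of live births grows.

```python
# Illustrative sketch only: approximate minimal detectable difference (MDD)
# for comparing infant mortality rates (treated as proportions) between a
# program area and an equally sized comparison area, using a standard
# normal-approximation formula with roughly 80 percent power and a
# two-sided 5 percent significance test. All inputs below are hypothetical.
import math

def minimal_detectable_difference(rate, n_program, n_comparison,
                                  z_alpha=1.96, z_power=0.84):
    """Smallest absolute difference in rates detectable with ~80% power
    at a two-sided 5% significance level (normal approximation)."""
    variance = rate * (1 - rate) * (1 / n_program + 1 / n_comparison)
    return (z_alpha + z_power) * math.sqrt(variance)

baseline = 0.016  # hypothetical baseline: 16 infant deaths per 1,000 live births
for label, births in [("small community", 10_000),
                      ("large community", 150_000),
                      ("all communities pooled", 300_000)]:
    mdd = minimal_detectable_difference(baseline, births, births)
    print(f"{label}: detectable difference is about {mdd * 1000:.1f} "
          f"per 1,000 births ({mdd / baseline:.0%} of the baseline rate)")
```

With these made-up inputs, the detectable difference falls from roughly a third of the baseline rate for a single small community to well under 10 percent when many communities or years are pooled, which parallels the pattern in the percentages MPR reports.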
A third potential limitation concerns the number of statistical tests performed in the complete impact analysis. If 15 sites and seven different birth-related outcome variables are considered, then at least 105 statistical tests will be done. With as large a number of tests as this being done, it is likely that a portion of them will yield statistically significant results by chance alone. This means that even if Healthy Start has no effect on infant mortality, we will mistakenly conclude that it does have one in a certain percentage of the statistical tests conducted. There are statistical methods of dealing with this problem. If they are not employed in the final analysis, then differences that are statistically significant by chance alone will occur more often than is considered acceptable by statistical convention. (A simple numerical illustration of this risk appears at the end of this discussion.) MPR’s participation analysis of birth outcomes is part of its attempt to determine the effect of participation in Healthy Start within program sites. It compares 1995 birth outcomes between participants and nonparticipants in each project area. Participants in the program’s prenatal activities are identified from program files (the Minimum Data Set required of all Healthy Start sites), and their birth certificates are flagged. Their birth outcomes are then compared with those of nonparticipants or participants with limited prenatal program involvement. This kind of analysis is limited by possible preexisting differences between participants and nonparticipants and by the very definition of participant. Since participation in Healthy Start is voluntary, it is possible that participants systematically differ from nonparticipants. Program providers may, for example, tend to attract persons who are especially knowledgeable about services or already well connected to the health care system. Under these circumstances, program participants would be expected to have better outcomes even without Healthy Start. Alternatively, participants might be especially needy and at high risk for poor outcomes, in which case they would be expected to have relatively poor outcomes. MPR deals with this by statistically adjusting the data on the basis of information that may reflect these preexisting differences, drawn from birth certificates and any other available sources. Although it is difficult to be certain that outcomes have been adjusted for all possible systematic differences between the groups being compared, MPR has stated that participants tend to be at high risk for poor birth outcomes, thereby making any potential finding of better outcomes for them than for nonparticipants more convincing of the program’s value. The question of who is a participant must be answered in order to conduct the participation analysis. It turns out not to be easily answered because (1) the Minimum Data Sets of many sites have been slow in developing into accurate record systems, (2) it is not always clear whether a participant’s involvement has been intense enough to classify that person as a participant, and (3) when supplementary information has been sought from new mothers about their involvement in the program, it is not always clear what criteria they use for judging whether or not to claim to be participants. Moreover, these problems vary somewhat from site to site, making it difficult to be sure that all participation analyses are comparable. In sum, the results of the participation analysis will be meaningful to the extent that the preexisting differences between participants and nonparticipants can be taken into account. 
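Returning to the multiple-comparisons limitation noted above, the short calculation below shows why roughly 105 tests run at the conventional 5-percent significance level can be expected to produce several spurious "significant" findings, and how a Bonferroni adjustment, one standard correction offered here only as an example and not necessarily the method MPR would choose, tightens the per-test threshold.

```python
# Illustrative calculation: the risk of false positives when many statistical
# tests are run at the conventional 5% significance level, and a Bonferroni
# adjustment as one standard way to compensate. Not a description of MPR's
# actual analysis.
alpha = 0.05
n_tests = 15 * 7  # 15 sites x 7 birth-related outcomes = 105 tests

# Chance of at least one spurious "significant" result if every null
# hypothesis is true and the tests are independent.
p_any_false_positive = 1 - (1 - alpha) ** n_tests

# Expected number of spurious "significant" results among the 105 tests.
expected_false_positives = alpha * n_tests

# Bonferroni-adjusted per-test threshold that holds the overall (family-wise)
# error rate near 5%.
bonferroni_threshold = alpha / n_tests

print(f"Tests performed: {n_tests}")
print(f"Chance of at least one spurious result: {p_any_false_positive:.1%}")
print(f"Expected number of spurious results: {expected_false_positives:.1f}")
print(f"Bonferroni per-test threshold: {bonferroni_threshold:.5f}")
```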
MPR’s process evaluation is an effort to use both qualitative and quantitative information to assess the degree to which Healthy Start has implemented its program as originally conceived, how it serves its target population, and how these processes developed over time. This description of Healthy Start can be considered the documentation of the program’s second goal—to demonstrate what happens when this kind of effort is mounted. Its methods are varied: site visits to and telephone calls with project staff, examination of the client records of the Minimum Data Set, the postpartum survey of participants and nonparticipants, focus groups with service providers and with members of the communities, and review of documents and vital records. Many aspects of the process evaluation are not complete and will therefore not be described further, but one major document—the implementation report—is complete. The implementation report is based mainly on site visits, expenditure reports from each project, and the client records of the Minimum Data Set. Further, two independent teams of site visitors rated certain dimensions of administrative success using a modified Delphi consensus-reaching process. Although they may not be avoidable, the limitations of this report are those common to most process evaluations that are heavily qualitative. The methods employed provide a wealth of information suitable to inform those who would develop similar programs about what to expect if different options for organization, administration, and mode of service delivery are attempted. However, the essential subjectivity of interview methods makes it difficult to know how closely other evaluators would agree with the conclusions drawn. | Pursuant to a congressional request, GAO reviewed the preliminary results of a national evaluation of the Healthy Start program, focusing on: (1) the plan for national evaluation; (2) what Mathematica Policy Research, Inc.'s (MPR) preliminary evaluation results indicate; and (3) what is expected from the final evaluation. 
GAO noted that: (1) MPR's preliminary reports from the national evaluation of Healthy Start do not provide a complete assessment of the program and, therefore, should not be used to judge the program's success; (2) even the final report will not contain all the data expected to be analyzed for the national evaluation; (3) if the evaluation plan were expanded to include data from the sixth and final year of the demonstration, conclusions about whether the program has met its goals of reducing infant mortality could be strengthened; (4) the national evaluation of the Healthy Start program had two major components: (a) an impact evaluation, to determine whether the infant mortality rates in Healthy Start communities have declined; and (b) a process evaluation to describe how the program actually operates; (5) once these evaluations are completed, MPR plans to link outcomes with processes in its final report to determine why Healthy Start has or has not succeeded and what would be required for a similar intervention elsewhere; (6) while MPR's draft report on its impact evaluation suggests that Healthy Start has little effect in reducing infant mortality in targeted communities, drawing such a conclusion at this time would be premature for several reasons; (7) the process evaluation is also incomplete; (8) only some of the reports that it comprises are available; (9) eventually, MPR plans to cover program implementation at all sites, the characteristics of program participants, and details about some of the most important strategies used by the program; (10) with these two major components of the evaluation in preliminary stages or incomplete, MPR cannot yet relate process to impact; (11) the final evaluation is expected to include an analysis of infant mortality data from the original 5 years of the demonstration for all 15 sites; (12) however, the final report on the evaluation, now planned for early 1999, will include data from only the first 4 years; and (13) further, because implementation of the program was slower than anticipated, and the program was mature for fewer years of the original demonstration period than planned for in the evaluation, even results from the final report are likely to be inconclusive and should be considered preliminary. |
Over the last decade, assisted living has emerged as an increasingly popular long-term care option. Within the continuum of long-term care, assisted living facilities typically provide a level of care between independent living and nursing homes for persons who need assistance with one or more ADLs, such as bathing or dressing. However, states vary in the term they use for assisted living—it appears in the licensing regulations of most states but some refer instead to personal care homes, boarding homes, residential care facilities, adult homes, and homes for the aged—and in the characteristics of the facilities encompassed by the term used. A 2002 study of assisted living policies in each of the 50 states and the District of Columbia showed that states differ in the facilities included under their assisted living regulations based on facility size, services provided, and whether or not the facilities offer specified types of accommodations such as private apartments. In addition, the study found that many states incorporate a distinctive philosophy of care in their regulation of assisted living facilities to emphasize residents’ choice, independence, dignity, and privacy. Specifically, 28 states have included an assisted living philosophy statement in their regulations, but specifics of the statements vary. Unlike nursing homes, which are subject to extensive federal regulations, assisted living facilities generally have considerable flexibility to determine the resident populations that they serve and the services they provide. As a result, assisted living facilities vary widely on both of these dimensions. Nevertheless, most facilities provide housing, meals, housekeeping, laundry, supervision, and assistance with some ADLs and other needs, such as medication administration. The majority of assisted living residents are between the ages of 75 and 85 and more than two thirds are females. About a quarter of assisted living residents need help with three or more ADLs. Eighty-six percent of residents require or accept help with medication. Facilities differ in the extent to which they admit residents with certain needs (including residents who meet the criteria for admission to nursing homes) and whether they retain residents as their needs change. For example, a 2000 study found that less than half of the assisted living facilities are willing to admit or retain persons who require assistance to transfer from bed to chair or wheelchair. This study also found that less than half of the facilities would admit or retain residents with moderate to severe cognitive problems. The type, size, and cost of assisted living facilities also vary widely. Some facilities are freestanding while others are located on a campus that contains multiple units offering different levels of care (such as nursing homes and independent living residences). Those built in the 1980s generally provide semiprivate accommodation while the newer facilities typically offer private apartments. Facilities range in size from a few beds to over a thousand. The average facility in a nationwide study had 53 beds. Many facilities are independently owned while others belong to regional or national chain corporations. Assisted living fees vary widely across and within states depending on the facility’s size, service, and location. For example, the average monthly base rate ranged from $1,020 in Mississippi to $4,429 in Washington, D.C., according to a recent industry survey. 
Residents often pay additional fees for special care units and other services, such as medication administration and transportation. Two thirds of assisted living residents pay out-of-pocket, but many states use Medicaid and other federal and state funds to help finance such care. As of October 2002, 41 states used Medicaid reimbursement to cover assisted living or related services for more than 102,000 people. The federal government exercises minimal oversight over assisted living, leaving to the states primary responsibility for ensuring that assisted living residents have adequate protections. Some states fulfill this responsibility by establishing licensing standards, inspection procedures, and enforcement measures. Nevertheless, the regulatory approaches to assisted living adopted by states vary widely in scope and structure. For example, some states delineate the services that assisted living facilities may or may not provide—sometimes with multiple tiers of licenses for more specialized care—while others grant broad flexibility to providers to meet the individual needs of residents and their families. All states have long-term care ombudsmen with potential jurisdiction over assisted living facilities. Among other things, ombudsmen may provide services to protect assisted living residents and resolve complaints that they file. Ombudsmen may monitor quality of care, educate residents about their rights, and mediate disputes between residents and providers. Prior GAO reports have addressed a number of consumer protection and quality of care issues that remain at the forefront of public concerns about assisted living. These reports raised questions about the adequacy of information available to prospective consumers to help them choose a facility that meets their needs. The 1999 report also discussed states’ varying approaches to oversight and the type and frequency of consumer protection and quality of care problems that state agencies identified. Given the wide diversity among assisted living facilities in the services they offer and the populations they are prepared to serve, prospective assisted living residents can have difficulty finding an appropriate—let alone the most appropriate—facility to meet their individual needs. Initiatives such as the Florida “Find-a-Facility” Web site and the Texas standardized disclosure statement help consumers make better choices by providing them the information they need in an easier-to-absorb format. Available studies and interviews with our experts indicate that consumers choosing among their assisted living options often lack the information they need to make a fully informed selection. The limitations in the information currently provided to consumers relate to both its substantive content and mode of presentation. To make appropriate choices among the wide range of facility options available in the market, consumers need to learn about facility services, costs, and policies that impact residents. Moreover, they need this information to be not only complete and accurate, but also presented in a timely way and in a form that they can understand. When consumers do not receive adequate information before selecting an assisted living facility, they are less likely to find a facility that can satisfactorily address their personal care needs. In making selection decisions, consumers rely on facility information that they receive in various ways, including marketing brochures, facility tours, and interviews with providers. 
Consumers also rely on the advice of family, friends, or health care professionals. Our 1999 report stated that marketing materials, contracts, and other written materials that facilities give consumers were often vague, incomplete, or misleading. Specifically, the report found that facilities’ written materials often did not contain key information, such as a description of services not covered or available at the facility, the staff’s qualifications and training, circumstances under which costs might change, assistance residents would receive with medication administration, facility practices in assessing needs, or criteria for discharging residents if their health changes. Subsequent studies, including the 2003 Workgroup report, as well as experts that we interviewed, indicate that consumers continue to have difficulty obtaining full disclosure of the information they need. In response to this deficiency, 18 states have instituted information disclosure policies, such as requirements on the use of uniform disclosure statements or the contents of written materials provided to prospective residents. Our expert interviews and the studies we reviewed identified information about staffing levels and qualifications, costs and potential cost increases, and facility policies regarding discharge criteria as critical to informed decision making. Consumers need to know, for example, whether a facility has staff to provide full 24-hour service to address recurring care needs, such as assistance administering medications, as distinct from a facility whose overnight staff is only available to deal with emergency situations. While some facilities reportedly disclose only aggregate staffing data, the most important information for consumers concerns the number of staff directly involved in providing care to residents. Expert interviews and reviewed studies also indicated that consumers do not always receive information clearly explaining the circumstances under which resident costs can increase. Similarly, according to a consumer advocate organization, providers do not always inform consumers about the circumstances under which they could be involuntarily discharged from their facility, even when state regulations dictate that residents must leave if their needs reach a certain level. The experts we interviewed underscored the importance of conveying critical information about assisted living choices in a way that consumers can readily absorb. The experts explained that prospective residents and family members often have difficulty grasping the information presented to them, especially when they have to make decisions quickly to address a crisis situation. Under these circumstances, consumers often do not know what questions to ask or how to assess and compare the responses that they receive in order to identify the facility that can best meet their individual needs. When consumers do not get complete and accurate information on the assisted living alternatives available to them, in a form that they can understand, they run the risk of choosing a facility that cannot adequately meet their personal care requirements. A likely consequence is that they will have to move again within a short time. Both consumers and providers benefit if they can minimize this risk by ensuring that the consumer has, and can use, the critical information relevant to making an informed choice among different facilities. 
In the summer of 2003, Florida’s Department of Elder Affairs (DOEA) launched its Affordable Assisted Living Web site to enhance public access to information on assisted living. One of its features is called “Find-a-Facility,” a search tool that allows anyone with internet access to identify those Florida assisted living facilities that match the preferences set by the user. The available options include geographic location, price range, housing configurations (such as private apartments), whether the facility accepts residents with government subsidies or certain disabilities, and clinical and social services offered. (For examples of the Web site pages, see app. II.) Once the user selects his or her preferences among the available options, the site generates a list of licensed facilities, with those most closely matching the chosen preferences ranked highest. (A simplified illustration of this kind of preference matching appears at the end of this discussion.) For each of these facilities, the user can print out a one-page description that includes the facility’s contact information, number of beds, specific government subsidy programs it participates in, any specialized care licenses, and all of its entries on the list of selection options. Development of the Web site occurred through a collaboration of public and private entities. It began under Florida’s Coming Home Program, sponsored by the Robert Wood Johnson Foundation. DOEA established a committee comprised of representatives of providers, consumers, and regulators. They found a need for a comprehensive information clearinghouse to inform both providers and consumers about assisted living options and the multiple long-term care and housing assistance programs designed to make these options more widely available. The “Find-a-Facility” feature developed from discussions with social workers and case managers who had helped elderly clients find appropriate assisted living residences. They underlined the need to identify the facilities that met their clients’ needs and preferences and that the clients could afford, often with the assistance of government subsidies. Many had been relying on placement agencies, which would only list facilities that had paid the agency a fee. Larger, more expensive, private pay facilities were more likely to sign on with the placement agencies, meaning that prospective residents were less likely to find out about smaller, less expensive, or subsidized facilities in their area. Several state agencies then joined together in the technical development of the Web site. Specifically, DOEA, the Florida Agency for Health Care Administration, and the Florida Housing Financing Corporation contributed staff time and services, in addition to state funding of about $29,000. The state tested the prototype site for several months with different consumer groups, such as Alzheimer caregivers and visitors to neighborhood senior centers. Based on the feedback received, state officials made further refinements in the wording of entries, their organization, and the instructions provided to users. DOEA subsequently developed a Spanish-language version of the site, which came into operation in April 2004. To promote the Web site, the state informed providers and potential residents of assisted living facilities about the site and how to use it. DOEA took care to contact professionals who typically help place residents in assisted living, distributing brochures to social workers and hospital discharge planners as well as local area agencies on aging. 
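As noted above, the sketch below is a purely hypothetical illustration of the kind of preference matching the “Find-a-Facility” tool performs: facilities that satisfy more of a user’s selected criteria rank higher. The data model, field names, attribute labels, and scoring rule are assumptions made for illustration only; they are not DOEA’s actual design, data, or code.

```python
# Hypothetical sketch of preference-based facility matching. All names,
# attributes, rates, and the scoring rule are invented for illustration;
# this is not DOEA's implementation.
from dataclasses import dataclass, field

@dataclass
class Facility:
    name: str
    county: str
    max_monthly_rate: int            # highest advertised monthly rate, in dollars
    attributes: set = field(default_factory=set)  # e.g. {"medicaid_waiver"}

def rank_facilities(facilities, county, budget, wanted):
    """Return facilities in the requested county, ordered by how many of the
    user's preferences (staying within budget plus each desired attribute)
    the facility satisfies."""
    def score(facility):
        matched = sum(1 for want in wanted if want in facility.attributes)
        within_budget = 1 if facility.max_monthly_rate <= budget else 0
        return matched + within_budget
    eligible = [f for f in facilities if f.county == county]
    return sorted(eligible, key=score, reverse=True)

facilities = [
    Facility("Palm Gardens", "Leon", 2800, {"private_apartments", "dementia_care"}),
    Facility("Riverside Manor", "Leon", 1900, {"medicaid_waiver"}),
    Facility("Bayview Home", "Duval", 1700, {"medicaid_waiver"}),
]
for match in rank_facilities(facilities, county="Leon", budget=2000,
                             wanted={"medicaid_waiver", "private_apartments"}):
    print(match.name)  # prints Riverside Manor first, then Palm Gardens
```

In the actual Florida system, the basic facility list comes from state licensing files while providers voluntarily supply most of the descriptive attributes, so any ranking of this kind is only as current and accurate as the provider-entered data.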
Consumer advocacy groups such as AARP and the Alzheimer’s Association were also encouraged to help get the word out about the Web site. Usage rates have increased steadily, reaching about 250 visitors a day by February 2004. DOEA also provided training to assisted living providers, to help them enter much of the data presented on the Web site. All licensed facilities are included in the basic database, with information on facility location, number of beds, state licenses held, and contact information downloaded from Agency for Health Care Administration files. However, providers voluntarily enter virtually all of the descriptive information on price range, housing configurations, populations served, and services offered. A provider representative indicated that entering the Web site data initially takes 10 to 15 minutes. Providers can update their information at any time. By February 2004, approximately 40 percent of assisted living facilities had filled in their data fields. DOEA receives about two inquiries a week from providers asking for assistance, but in general the providers find this process relatively easy. Initial skepticism among some providers has diminished as they hear from providers already in the system and they recognize the inherent advantage of free advertising. This is especially beneficial for smaller, independent facilities that cannot match the commercial advertising of the national and regional chains. A state administrator noted that maintenance of the Web site requires some continuing effort. With substantial turnover among facility providers and professionals assisting prospective residents, outreach and training is an ongoing process. DOEA also tries to spot-check at least some key data elements entered into the system, even though the Web site itself prominently displays a disclaimer that provider-entered data have not been verified for accuracy. No formal evaluations of the Web site have yet been undertaken, but informal feedback has been uniformly positive according to both provider and consumer representatives, as well as the state official responsible for its operation. Consumers, and those acting on their behalf, are finding that the Web site has several distinct advantages over previously available information sources. Most importantly, it provides a way to efficiently narrow their search. They can quickly identify the universe of facilities within a given area and determine which offer the services they are looking for at a price they can afford. Current information about participation in government subsidy programs is especially valuable for many prospective residents of limited means. In addition, because “Find-a-Facility” is on the internet, out-of-state family members can actively participate in the process of locating an appropriate facility. Similarly, the Web site makes it much easier for professionals assisting elderly clients, such as social workers and hospital discharge planners, to determine the full list of available placement options. In 1999, Texas enacted a law requiring assisted living facilities to provide each prospective resident a consumer disclosure statement that follows a standard format approved by the Department of Human Services. Its purpose is to enable consumers to better compare facilities by describing their policies and services in terms of uniform categories. However, its effectiveness depends not only on its content but also on how and when facilities distribute it to consumers. 
This five-page checklist form addresses many of the topics identified in our expert interviews as critical for consumers choosing among alternative assisted living facilities. It describes the services and amenities provided to all residents, as well as those offered at additional cost. (See app. III.) The form also lists circumstances that could lead a resident to be discharged from the facility and the training received by staff. It includes a chart showing the number and type of staff on duty for each daily shift, which is also posted in public view at the facility. While a number of other states have developed similar forms—particularly for specialized dementia units—Texas is notable for having been among the first to develop a standardized disclosure statement for all assisted living facilities, and to include detailed information on staffing levels. The standardized response categories specified by the form make the furnished information consistent across facilities, allowing consumers to make comparisons more readily among them. The checklist format means that consumers see what services the facility does not provide as well as those it does. There is one version of the form for assisted living facilities in general and another, covering many of the same topics, adapted specifically for units specializing in dementia care. Neither form, though, has been translated into any languages other than English. State officials described the process of developing these forms as proactive on their part—rather than in response to external complaints— and relatively uncontroversial. The disclosure statement for specialized dementia units emerged from a state-organized advisory committee including provider and consumer advocates. That served as the model for the more generic assisted living form issued by the department shortly thereafter. Since then, according to both state officials and an official of a state provider association, providers have accepted both forms without complaint. State officials believe that this extensive involvement of providers, along with consumer representatives, in the development of the form, contributed greatly to its wide acceptance among providers as a whole. Providers vary considerably in the way they distribute the form. Some send it out to people making phone inquiries, some provide it when prospective residents or their family members visit the facility, and some wait to distribute it when the contract is signed. Although the form states that copies should be provided to anyone who requests information about the facility, providers are only held accountable for ensuring that those who ultimately become residents in their facility received the completed form by the time they were admitted. According to the consumer representative we interviewed, residents who obtain the disclosure statement during the admissions process often pay little attention to it given all the other papers they receive and sign at that time. Once instituted, the Texas disclosure form has imposed few burdens on either assisted living providers or state officials. According to the provider association official we interviewed, it takes no more than 20 to 30 minutes to complete. The biggest challenge is remembering to revise affected entries on the form when a facility changes its services or staffing patterns. Such revisions happen perhaps four or five times a year, on average. 
To meet regulatory requirements, providers need to document that residents have seen the form prior to their admission. As part of their annual inspection of licensed assisted living facilities, state inspectors can assess whether a facility has a form ready to distribute and that current residents received the disclosure form before signing their residence agreement. However, the inspection process does not include an explicit examination of the accuracy of the information provided on the form. Available evidence suggests that the assisted living disclosure statement provides useful information to prospective residents, though it does have certain limitations. None of the state, provider, or consumer representatives we spoke with knew of any formal studies conducted on the effectiveness of the form in enhancing consumer decision making on assisted living facilities. However, the anecdotal evidence they conveyed was largely positive. The consumer and provider representatives we spoke with generally thought that the form was clear and covered the major topics that consumers need to know about. Nonetheless, the consumer representative indicated that some residents and their families still encountered “surprises” after the resident was admitted. These typically involved the conditions under which residents could be discharged or aggregate charges assessed. According to this representative, such misunderstandings reflected, in part, the intrinsically subjective nature of certain decisions, such as whether a facility could continue to meet the needs of a resident whose level of disability may have increased over time. The provider official we interviewed suggested that the form itself could be revised to more clearly convey how increases in services used would affect the resident’s total charges. The Texas disclosure form addresses several challenges that consumers of assisted living can face. The categories of information provided on the form help to describe for consumers, who often know little about the industry and may need to make a decision quickly, what facilities can and cannot do for their residents. They also highlight important issues, such as the facility’s discharge criteria, that prospective residents and their families should pay attention to in making their selection. In addition, having comparable information in a concise format for multiple facilities should make it easier to identify key differences among the facilities under consideration. However, these benefits depend on when the residents or their representatives receive the form. If facilities do not distribute the form to consumers until they sign a contract, it cannot help them in deciding among available facilities. Assisted living providers may fall short of meeting state licensing standards in part because they lack a full understanding of what the standards require and how to meet them. The experience of Washington State, which for 2 ½ years employed a staff of consultants to advise and train assisted living providers, shows the potential benefits of licensing assistance programs in improved provider compliance and resident outcomes, as well as the challenge of sustaining them over time. Regulations that address consumer protection and quality of care generally cover such areas as admission and discharge criteria, services and level of care provided, staffing levels and staff training, safety and health standards, and resident rights. 
To examine regulatory compliance, states periodically conduct inspections of assisted living facilities. To ensure that facilities correct their deficiencies, states may require the facility to prepare a written plan of correction. In addition, states may conduct reinspections and impose financial penalties, license revocations, and criminal sanctions. Generally, when deficiencies are found, the facility has an opportunity to correct them. However, regulatory agencies expect providers to determine how to accomplish this, drawing on outside technical advice, if needed, to resolve the issue. According to experts we interviewed, state agencies face the challenge of inspecting a rapidly increasing number of assisted living facilities with limited resources. While national data are not available, a number of inspection reports and media articles indicate that typical problems relate to inadequate care, inappropriate discharges, insufficient staffing and training deficiencies, improper drug storage or errors dispensing medications, and other safety issues. One way to facilitate compliance with licensing regulations is to help providers achieve a better understanding of what the regulations actually require. The experts that we interviewed stated that providers often express confusion about actions they need to take to meet state policy or regulatory requirements. They noted that providers perceive ambiguities in regulations that can lead to inconsistent interpretations among different facility managers as well as individual state inspectors. Moreover, the rapid industry expansion has brought many new providers into the assisted living industry whose administrators may not fully understand what they need to do to meet regulatory requirements. Experts also said that uncertainties about state requirements could have negative effects on consumers. For example, confusion about state rules could induce some providers to drop out of the market, which might lead to access problems in some areas, particularly in rural communities that tend to have fewer assisted living providers to begin with. According to experts we interviewed, state licensing agencies or other entities can help providers understand regulations by providing guidance and training. Licensing assistance can take various forms, including informal phone conversations, on-site consultation and technical advice, or training courses. Such assistance may be especially critical for administrators who are new or relatively inexperienced in the assisted living industry. Even for established managers, helping them to keep their facilities in compliance with regulatory requirements benefits consumers by preventing potentially serious health and safety problems. While many experts we interviewed noted the value of combining such assistance with traditional regulatory enforcement measures, not all agreed that state agencies should provide it. Several noted that industry associations could also furnish this kind of support for their members. Moreover, representatives from one advocacy organization argued that efforts by licensing agencies to provide technical assistance to providers could draw scarce resources away from their primary responsibility of enforcing state licensing standards. Washington enacted a law in 1997 to establish a consultative approach to help assisted living providers meet state licensing requirements. 
In 2000, the state put this approach into operation with the Quality Improvement Consultation (QIC) program, which created a staff of consultants within the state’s Department of Social and Health Services (DSHS) to provide training and advice to individual providers. The staff of nine regionally based consultants conducted site visits, led training sessions, and responded to telephone inquiries from assisted living providers throughout the state. These activities continued for 2 ½ years until, in the midst of a state budget crisis, the state stopped funding the program. The QIC program came about in response to provider concerns about a major structural reorganization in the state’s regulation of assisted living. In 1995, the state moved licensing and oversight responsibility for assisted living from the Department of Health to DSHS. Because DSHS also had enforcement authority over nursing homes, providers anticipated that the state would approach assisted living regulation as it had nursing home oversight and lobbied for a more consultative approach. The state legislature responded by requiring DSHS, within available funding, to develop the QIC program. DSHS expected the program to enhance provider and resident satisfaction, improve resident safety and quality of care, and prevent compliance problems. A quality improvement advisory group consisting of representatives of providers, consumers, and the state came together to develop the QIC program. Most of the group’s discussion revolved around the meaning of “consultation.” Provider and consumer representatives differed on whether providers could be required to participate in the program. Providers insisted that the program be entirely voluntary, while some ombudsmen believed that the providers most in need of help might be least likely to ask for it. Provider representatives also expressed concern about the relationship of the consultants with the DSHS inspectors who enforced the state’s licensing regulations. In particular, they worried that inspectors could have access to private information that providers had shared with a consultant, leading to enforcement actions rather than assistance. In addition, they wanted to prevent such information from appearing in public records. After much discussion, the group reached consensus to make the QIC program voluntary and to define the consultants as adjuncts to, but separate from, the licensing enforcement process. The consultants would not forward information to inspectors unless they identified a situation involving immediate harm to residents. In addition, information obtained from providers would not be released publicly except in aggregated form. The state hired nine quality improvement consultants who had extensive education and experience in quality improvement, training, and consultation in the assisted living industry. The consultants conducted onsite facility visits initiated by providers in order to help them develop and implement quality improvement plans that addressed identified needs. They also led regional provider training and were available by telephone to respond to provider inquiries. Two evaluations of the QIC program indicated overall positive results in meeting its goals. The first evaluation took place 6 months into the program. It measured effectiveness through analysis of resident outcomes and responses to satisfaction questionnaires completed by residents, ombudsmen, providers, facility staff, and consultants. The second evaluation occurred 2 years later. 
It assessed provider compliance with licensing regulations and satisfaction levels among providers and ombudsmen who participated in the onsite portion of the program. After 6 months of operation, about 82 percent of providers voluntarily participated in the QIC program in some way. Moreover, in both evaluations, a large majority of participating providers expressed satisfaction with the QIC program. Over 90 percent of those providers indicated in the first evaluation that the program had effectively assisted them with compliance. Although this level of satisfaction declined slightly to about 79 percent 2 years later, providers indicated in the second evaluation that consultation in a voluntary, mutually respectful, and collegial manner was the program’s most beneficial component. Assisted living residents also reported positive outcomes from the program. In the first evaluation, 90 percent of residents expressed satisfaction with the results of the program’s on-site visits. Among those residents assessed by consultants on more than one visit, 86 percent showed improvement in identified areas of concern. These areas involved a variety of quality of care issues, including administration of medications and ADL assistance. Similarly, with respect to safety issues, 65 percent of the residents seen on more than one visit demonstrated improvement in areas such as prevention of falls. Finally, both providers and the state attributed improvements in regulatory compliance partly to the work of the QIC program. The second evaluation included an analysis of statewide provider compliance prior to (1998 to 2000) and after implementation (2001 to 2002) of the QIC program. Although there was a slight increase in the number of state inspections conducted, the number and percentage of facilities that had penalties imposed fell substantially. The state imposed fewer civil fines, conditions on licenses, license revocations, and summary suspensions. Finding fewer problems during inspections also meant that each inspection required less time to complete and document, thereby allowing more efficient use of inspection resources. Despite its broad support and favorable outcomes, the QIC program ended in July 2002. After 2 ½ years of operation, it lost its state funding and has since remained an unfunded program. According to state officials and consumer representatives, the program’s end was primarily due to funding constraints. A severe state budget crisis in 2002 put significant pressure on DSHS to cut costs while maintaining its core functions of conducting inspection and complaint investigations. The department decided that it needed more inspectors for this work, and that licensing assistance through the QIC program had lower priority. However, the provider representative emphasized that insufficient trust between providers and the state also contributed to the program’s end. While the evaluation results pointed to substantial success overall in building functioning relationships, the provider representative described several incidents of broken confidentiality between providers and consultants that tended to undermine the providers’ willingness to participate in the program. A state official as well as consumer and provider representatives noted that the QIC program required collaboration and the sharing of sensitive information. 
Such collaboration depended on providers and consultants developing and sustaining trust among themselves, as well as between consultants and other state officials, such as inspectors and ombudsmen. Washington's QIC program illustrates both the challenges and potential benefits of state efforts to provide licensing assistance to assisted living providers. A large number of providers chose to take advantage of the consultative services and training offered by the program. Moreover, the documented improvements in resident outcomes and in provider compliance with regulations demonstrate the impact that programs of this sort can have. However, the staff resources needed to provide this level of assistance make these programs highly vulnerable in times of budgetary constraint. Some assisted living residents have difficulty pursuing complaints with their providers, particularly in cases involving an involuntary discharge. Georgia has established a spectrum of procedural remedies specifically for assisted living residents that appear to strengthen their bargaining position vis-a-vis providers. Massachusetts created a separate ombudsman staff dedicated to assisted living residents. As a result, these staff members have become expert in dealing with the particular problems of assisted living residents. Concerns about problems in assisted living facilities reinforce the need to ensure that consumers have adequate mechanisms to raise complaints about the care they receive in these facilities. For the most part, these mechanisms fall into two broad categories: internal procedures, which specify how residents may lodge complaints with the facility's management and how management may respond, and external procedures, which designate an entity outside of the facility to hear resident complaints and decide on an appropriate resolution. The outside entity may be a state agency or an independent third party. Such procedures are most commonly applied to major disputes, such as involuntary discharges. A national study found that some states require assisted living facilities to establish internal complaint procedures, some offer residents a venue for external appeals, and some offer both. In addition, it noted that some states take measures to ensure that assisted living residents are aware of these rights, for example by requiring that facilities prominently post appropriate telephone numbers and the list of resident rights in that state. However, the national study also found that in 2000 over half of the states had no requirements that assisted living facilities establish procedures for residents to voice complaints or appeal provider decisions that adversely affect them. Regardless of their rights to file complaints either internally or externally, many residents may hesitate to do so for fear of retribution. According to the experts we interviewed and studies of ombudsman programs, many assisted living residents do not want to risk alienating their providers. Even when state agencies permit the residents to file complaints anonymously, they may find it difficult to maintain their anonymity, especially in smaller facilities. One avenue for residents to seek redress of their complaints is the long-term care ombudsman program in each state. The Older Americans Act directs ombudsmen to represent the interests of residents of long-term care facilities, including nursing homes and assisted living facilities.
The act authorizes the ombudsmen to serve as advocates to protect the health, safety, welfare, and rights of residents of long-term care facilities. One of the main responsibilities of ombudsmen is to investigate and resolve complaints. Ombudsmen involvement in assisted living varies considerably depending on state policies and the resources available to address the myriad complaints that they receive from all types of long-term care facilities. However, experts we interviewed noted that most ombudsmen focus the bulk of their limited resources on nursing homes. In fiscal year 2002, ombudsmen received four times as many complaints against nursing homes as assisted living facilities. Ombudsmen can help overcome the factors that may inhibit assisted living residents from filing complaints. During scheduled visits to assisted living facilities, ombudsmen have the opportunity to educate residents on their right to file complaints and encourage them to do so. In addition, while the ombudsmen are on-site, they can receive such complaints discreetly. However, financial constraints may limit the frequency with which ombudsmen meet with assisted living residents. In 1994, Georgia strengthened procedural remedies available to residents in assisted living facilities by enacting the Remedies for Residents of Personal Care Homes Act. These remedies provide additional consumer protections beyond the investigation of complaints by its licensing agency, the Office of Regulatory Services (ORS) within the Department of Human Resources. The state gave assisted living residents specific procedural rights to have their complaints heard and redressed. The remedies include the right to an internal complaint procedure, an administrative hearing, and specified actions in court. According to consumer advocates, the 1994 law has enhanced the ability of assisted living residents to resolve disputes informally with assisted living providers. At the time Georgia passed this legislation, assisted living facilities had recently come under heightened public scrutiny. Consumer advocates and the media had raised concerns about the lack of adequate oversight, as evidenced by facilities that maintained extremely poor sanitary conditions or that admitted residents who required far greater care than the facility could provide. In response, the state legislature sought to provide assisted living residents with additional consumer protections by creating procedural remedies specifically for them. In its legislative findings, the state legislature recognized that residents often lacked the ability to assert their rights and stated that full consumer protection required that residents have a means of recourse when their rights were denied. According to the state official, the legislature modeled the act's procedural remedies after remedy options given to nursing home residents through both state and federal law. The remedies provided in the 1994 legislation include an internal complaint procedure and an administrative hearing. Residents may submit an oral or written complaint to a facility administrator, who must either resolve the complaint or respond in writing within 5 business days. If residents do not find the response satisfactory, they may submit an oral or written complaint to the state long-term care ombudsman. Residents also have the right to request an administrative hearing under the Georgia Administrative Procedure Act. They are not required to use any other legal remedies before requesting such a hearing.
The Office of State Administrative Hearings (OSAH) must conduct the hearing within 45 days of receiving the request, although state officials may refer the request to an ombudsman for informal resolution pending the hearing. If the resident alleges that the provider acted in retaliation for the resident exercising his or her rights, OSAH must conduct the hearing within 15 days of receiving the request. The facility cannot transfer a resident before he has exhausted all appeal rights unless he develops a serious medical condition or his behavior or condition threatens other residents. The act also gives residents access to different types of court proceedings. A resident may file a lawsuit seeking compensation from an assisted living facility. The resident need not exhaust any of the other legal remedies before bringing such a suit. This remedy includes a provision designed to protect residents from retaliation by a provider. If the provider attempts to remove the resident involuntarily from the facility within 6 months after the resident exercises one of the available remedies, the court presumes retaliation in an action by the resident making that claim unless the provider presents “clear and convincing evidence” to the contrary. Residents may also file a lawsuit requesting that the court order a facility to refrain from violating the rights of a resident. Finally, residents may file a lawsuit for ‘mandamus’—a court order to ORS to comply with laws relating to an assisted living facility or its residents. These procedural remedies appear to have their greatest effect in strengthening the position of residents during informal resolution of disputes. The legal aid representatives we interviewed noted that they resolve most issues between assisted living residents and providers informally. Advocates for residents said that these procedural remedies give the advocates added leverage as they negotiate with providers. However, advocates also stated that they rarely take the next step of actually filing for administrative hearings or court proceedings, in part because legal aid cases generally do not reach that step and also because they believe that the substantive rights of assisted living residents in Georgia are not strong. For example, a resident objecting to an involuntary discharge is unlikely to prevail in an administrative hearing because providers exercise broad discretion in deciding when they can no longer properly care for a resident. However, by requesting a hearing, residents can postpone the date by which they must move out, thereby gaining more time in which to find a suitable place to relocate. Moreover, according to one legal aid attorney, providers often prefer to resolve a dispute informally rather than take their chances with an administrative hearing, because providers typically have little experience with hearings and prefer to limit their costs for legal representation. Strengthening Georgia’s procedural remedies for assisted living residents required action by the state legislature, but once approved, the procedures have imposed minimal costs to the state. An agency to deal with a wide range of state administrative issues already existed, and with few hearings involving assisted living residents actually conducted, these cases represent a small portion of OSAH’s operating expenses. 
Similarly, the state’s long-standing advocates for assisted living residents—long-term care ombudsmen and legal aid lawyers—have served to inform both providers and residents about these legal remedies while carrying out their normal functions. In fact, providers and residents may remain unaware of their existence, until the advocates have reason to bring these remedies to their attention in the course of resolving disputes. In 1994, Massachusetts passed an assisted living statute that established a statewide assisted living ombudsman program. The program is a key element of the statute, which created a certification system for assisted living separate from the state’s nursing home regulatory and licensure system. According to the state official we interviewed, the primary purpose of this ombudsman program is to maintain the quality of life, health, safety, welfare, and rights of assisted living residents by designating ombudsman staff specifically for assisted living. It provides a means for assisted living residents and family members to file and resolve complaints relating to the quality of services and to residents’ quality of life. However, the program’s exclusive reliance on state funding, under circumstances of state budgetary constraint, has resulted in limited staff resources available to perform these tasks. Assisted living ombudsmen serve primarily as mediators and advocates. As mediators, they receive, investigate, and attempt to resolve problems or conflicts that occur between a provider and residents. They act as advocates for residents by referring their cases to the assisted living certification office or elder protective services, when warranted. In addition, the ombudsmen respond to inquiries by consumers considering assisted living as a long-term care option. They also respond to providers requesting advice. To accomplish these tasks, the ombudsmen make site visits to assisted living facilities, typically in the context of a serious complaint allegation and sometimes together with certification staff. The organizational placement of the ombudsman program within the state’s Executive Office of Elder Affairs (EOEA) is designed to balance program autonomy and coordination with related programs. EOEA oversees both the assisted living ombudsman and certification programs. According to the state official, staff members from both programs coordinate activities, communicate often, and refer cases to each other. This working relationship has helped give the ombudsman more leverage when dealing with providers. However, representatives for both the state and assisted living providers agree that the ombudsman program should remain separate organizationally from the certification program because they perform different functions. Previously, when the staff of the two programs had reported to the same individual in EOEA, providers became confused about the programs’ respective roles during a visit. A subsequent restructuring of EOEA placed the certification and ombudsmen in separate divisions. Shared EOEA administration also links the assisted living ombudsmen to other programs serving elderly clients, such as elderly protective services and the long-term care ombudsmen program. The state has emphasized coordination with elderly protective services to ensure that assisted living residents found in abusive situations quickly receive the help they need. 
In addition, by placing assisted living ombudsmen in the same office of EOEA as long-term care ombudsmen, Massachusetts has attempted to maintain a degree of communication and coordination across the different long-term care settings. As described by the provider representative we interviewed, this arrangement allows for "cross-fertilization" between the different programs. Although the programs differ substantially in their approach to ensuring quality care, assisted living ombudsmen can nevertheless draw upon the decades-long experience residing in the long-term care program. Massachusetts' assisted living ombudsman program regulations called for a structure similar to that of the existing long-term care ombudsman program. According to the state official, the long-term care ombudsman program has a full-time training position and several regional coordinators responsible for recruiting, training, and overseeing volunteers who make site visits to nursing homes on a regular basis throughout the state. However, according to the state official, the assisted living ombudsman program never received sufficient funding to develop this type of structure. Although the regulations authorized a similar network of volunteers, the program staff has consisted of no more than three professionals, later reduced to two, who handle complaints and inquiries for 172 assisted living facilities. That left no one available to recruit, train, and supervise volunteers, and consequently, visits to facilities only occurred in response to complaints and not on a routine basis. The Massachusetts legislature funded the assisted living ombudsman program by creating an assisted living administrative fund, which received the fees paid biennially by facilities as part of the certification process. The ombudsman shared these funds with the assisted living certification staff. However, in response to statewide budgetary pressures, the legislature eliminated this fund in fiscal year 2003 and redirected the certification fees to the state's general revenues. Meanwhile, the long-term care ombudsman program continued to operate largely with federal funds, authorized under the Older Americans Act. The state and provider representatives we spoke with agreed that having a separate assisted living ombudsman program led its staff to become increasingly knowledgeable about assisted living and the particular problems that arise within it. Both providers and residents benefit from the fact that assisted living ombudsmen do not have to balance the needs of residents from different types of long-term care facilities. However, the decision to fund the program solely through the state made it especially vulnerable to budgetary cutbacks when Massachusetts faced constrained fiscal circumstances. Although the federally supported state long-term care ombudsman programs also contend with scarce resources nationwide, the Massachusetts assisted living ombudsman program highlights the difficulty of sustaining this type of program with state funds alone. Florida, Texas, Washington, Georgia, and Massachusetts have each found ways to enhance the experience of assisted living residents in their states. They have done so by developing information resources, expanding complaint mechanisms, or allocating state resources to assisted living programs. However, those initiatives that required increases in state staff or funds fared less well during periods of fiscal constraint.
The demise of the Washington QIC program, despite its well-documented favorable outcomes, and cutbacks in the popular Massachusetts assisted living ombudsman program, reflect the vulnerability of any discretionary state program to budget reductions. Florida's Web site, Texas' disclosure form, and Georgia's procedural remedies, by contrast, have benefited from the important advantage that none of these programs required substantial resources to initiate and maintain. These examples from five states can perhaps aid other states in developing their own approaches to helping senior citizens take full advantage of assisted living alternatives to nursing home care.

We sent sections from an earlier draft of this report to state officials in Florida, Texas, Washington, Georgia, and Massachusetts and asked them to check that the section accurately described the development and implementation of their state's program. Officials from all five states responded and provided technical comments that we incorporated where appropriate.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from its date. At that time, we will send copies of this report to interested parties. In addition, this report will be available at no charge on GAO's Web site at http://www.gao.gov. We will also make copies available to others upon request. If you or your staff have any questions about this report, please call me at (312) 220-7600. An additional contact and other staff members who prepared this report are listed in appendix IV.

National Organizations and Academic Experts

Alzheimer's Association
American Association of Homes and Services for the Aging
American Bar Association Commission on Law and Aging
American Seniors Housing Association
Assisted Living Federation of America
Association of Health Facility Survey Agencies
Consumer Consortium on Assisted Living
National Association for Regulatory Administration
National Association of State Long-Term Care Ombudsman Programs
National Association of State Units on Aging
National Center for Assisted Living
National Citizens' Coalition for Nursing Home Reform
NCB Development Corporation, The Coming Home Program
Catherine Hawes, Texas A&M University
Robert Mollica, National Academy for State Health Policy
Janet O'Keeffe, Research Triangle Institute

Major Studies on Assisted Living

Catherine Hawes, et al., A National Study of Assisted Living for the Frail Elderly: Results of a National Survey of Facilities (Beachwood, Ohio: December 1999).
Maureen Mickus, "Complexities and Challenges in the Long Term Care Policy Frontier: Michigan's Assisted Living Facilities" (Michigan State University Applied Public Policy Research Program: September 2002).
Robert Mollica and Robert Jenkens, State Assisted Living Practices and Options: A Guide for State Policy Makers (National Academy for State Health Policy and NCB Development Corporation: September 2001).
Janet O'Keeffe, et al., Using Medicaid to Cover Services for Elderly Persons in Residential Care Settings: State Policy Maker and Stakeholder Views in Six States, Research Triangle Institute, prepared at the request of the U.S. Department of Health and Human Services (December 2003).
Charles D. Phillips, et al., Residents Leaving Assisted Living: Descriptive and Analytic Results from a National Survey, prepared at the request of the U.S. Department of Health and Human Services, Office of the Assistant Secretary for Planning and Evaluation, June 2000.
Brenda Spillman et al., Trends in Residential Long-Term Care: Use of Nursing Homes and Assisted Living and Characteristics of Facilities and Residents, Washington, D.C.: Urban Institute, prepared at the request of the U.S. Department of Health and Human Services, Office of the Assistant Secretary for Planning and Evaluation, November 2002.
U.S. General Accounting Office, Assisted Living: Quality-of-Care and Consumer Protection Issues in Four States, GAO/HEHS-99-27 (Washington, D.C.: Apr. 26, 1999).
U.S. General Accounting Office, Long-Term Care: Consumer Protection and Quality-of-Care Issues in Assisted Living, GAO/HEHS-97-93 (Washington, D.C.: May 15, 1997).

Guides on State Assisted Living Regulations

American Seniors Housing Association, Seniors Housing: State Regulatory Handbook, March 2003.
Lyn Bentley, Assisted Living State Regulatory Review 2004, National Center for Assisted Living (March 2004).
Stephanie Edelstein, et al., Assisted Living: Summary of State Statutes (in 3 volumes), AARP, 2000.
Robert Mollica, State Assisted Living Policy: 2002 (Portland, Maine: National Academy for State Health Policy, November 2002).

We interviewed officials or individuals associated with the following entities:

Florida Department of Elder Affairs
Florida Assisted Living Affiliation
Senior Resource Alliance (Florida)
Texas Department of Human Services
Texas Assisted Living Association
Texas Assisted Living Advisory Committee
Washington Department of Social and Health Services
Washington Health Care Association
Washington Long-term Care Ombudsman Program
Georgia Long-Term Care Ombudsman Program
Georgia Legal Aid Program
Senior Citizens Law Project (Georgia)
Assisted Living Association of Georgia
Massachusetts Executive Office of Elder Affairs
Massachusetts Assisted Living Facilities Association

Alice Mahar Dupler, Neva L. Crogan, and Robert Short, "Pathways to quality improvement for boarding homes: A Washington state model," Journal of Nursing Care Quality; Jul 2001; 15(4), 1-7.
Alice Mahar Dupler, "Quality Improvement Consultation Program in Assisted Living Facilities, A Washington State Pilot Program: Phase II," unpublished, no date.

Eric Peterson, Carmen Rivera-Lowitt, and Janet Rosenblad made major contributions to this report.

Assisted living facilities provide help with activities of daily living in a residential setting for individuals who cannot live independently but do not require 24-hour skilled nursing care. In 2002, over 36,000 assisted living facilities served approximately 900,000 residents. The states establish and enforce licensing standards for these institutions. Because states have taken widely differing approaches to regulating and supporting assisted living, they can potentially learn from each other's experiences as they consider changes to their own policies. GAO was asked to review challenges faced by consumers and providers of assisted living and seek out notable state initiatives addressing those challenges in three selected areas: (1) disclosure of full and accurate information to consumers, (2) state assistance to providers to meet licensing requirements, and (3) procedures for addressing residents' complaints. We identified specific examples of individual programs in Florida, Texas, Washington, Georgia, and Massachusetts that highlighted different approaches in these three areas, which other states might wish to consider emulating.
Consumers faced with choosing an assisted living facility often do not have key information they need in order to identify the one most likely to meet their individual needs. Such information includes staffing levels and qualifications, costs and potential cost increases, and the circumstances that could lead to involuntary discharge from the facility. Initiatives in Florida and Texas have made critical data for consumer selection among facilities more readily available. Florida has created a Web site that enables consumers to learn about all of the facilities in their vicinity and identifies those providing the services the consumers are seeking at a specified price range. Texas has mandated a standardized disclosure statement for assisted living facilities, giving consumers concise and consistent data that facilitates comparisons across providers regarding services, charges, and policies. Assisted living facilities are more likely to meet and maintain licensing standards if they can obtain help in interpreting those standards and in determining what concrete changes they need to make to satisfy them. Washington State established a staff of quality consultants to provide such training and advice to assisted living providers on a voluntary basis. Evaluations of the program 6 months after its start and 2 years later documented improvements in provider compliance as well as resident health and safety. However, a statewide budget crisis led to a decision to stop funding the program, in order to maintain traditional licensing enforcement functions. Assisted living residents sometimes need help to pursue any complaints that they may have with their providers, especially when faced with an involuntary discharge. Long-term care ombudsmen are available in all states, but nursing home residents claim most of their attention. Georgia has legislated an extensive array of procedural remedies specifically for assisted living residents that provide them multiple means for seeking redress of their complaints. The existence of these remedies also strengthens the position of residents in the informal negotiations through which most such disputes are resolved in practice. Massachusetts has created a small staff of ombudsmen dedicated exclusively to serving assisted living residents. This allows them to specialize in addressing the particular problems that arise in assisted living facilities. |
During the past decade, the overarching goal of the U.S. National Drug Control Strategy has been to reduce illegal drug use in the United States. A main priority of the strategy has been to disrupt illegal drug trade and production abroad in the transit zone and production countries by attacking the power structures and finances of international criminal organizations and aiding countries with eradication and interdiction efforts. This involves seizing large quantities of narcotics from transporters, disrupting major drug trafficking organizations, arresting their leaders, and seizing their assets. The strategy also called for the United States to support democratic institutions and the rule of law in allied nations both in the transit zone and in drug-producing countries, to strengthen these nations' prosecutorial efforts, and to support the prosecution of foreign traffickers and producers. According to State's International Narcotics Control Strategy Report, the goal of U.S. counternarcotics assistance to other countries is to help their governments become full and self-sustaining partners in the fight against drugs. The updated U.S. National Drug Control Strategy, released in May 2010, endorses a balance of drug abuse prevention, drug treatment, and law enforcement. International efforts in the strategy include collaborating with international partners to disrupt the drug trade, supporting the drug control efforts of major drug source and transit countries, and attacking key vulnerabilities of drug-trafficking organizations. Our work in Afghanistan, Colombia, and the transit zone has shown that the United States and its partner nations have partially met established targets for reducing the supply of illicit drugs. Most programs designed to reduce cultivation, production, and trafficking of drugs have missed their performance targets. In Afghanistan, one of the original indicators of success of the U.S.-funded counternarcotics effort was the reduction of opium poppy cultivation in the country, and for each year from 2005 to 2008, State established a new cultivation reduction target. According to State, the targets were met for some but not all of these years. We recently reported that cultivation data show increases from 2005 to 2007 and decreases from 2007 to 2009 and that 20 of the 34 Afghan provinces are now poppy-free. However, the U.S. and Afghan opium poppy eradication strategy did not achieve its stated objectives, as the amounts of poppy eradicated consistently fell short of the annual targeted amounts. For example, based on the most recent data we analyzed, for 2008-2009, slightly more than one-quarter of the total eradication goal for that year was achieved: of the 20,000 hectares targeted, only 5,350 hectares were successfully eradicated. These eradication and cultivation goals were not met due to a number of factors, including lack of political will on the part of Afghan central and provincial governments. In 2009, the United States revamped its counternarcotics strategy in Afghanistan to deemphasize eradication efforts and shift to interdiction and increased agricultural assistance. In 2008, we reported that Plan Colombia's goal of reducing the cultivation and production of illegal drugs by 50 percent in 6 years was partially achieved. From 2001 to 2006, Colombian opium poppy cultivation and heroin production decreased by about 50 percent to meet established goals.
However, estimated coca cultivation rose by 15 percent, with an estimated 157,000 hectares cultivated in 2006 compared to 136,200 hectares in 2000. State officials noted that extensive aerial and manual eradication efforts during this period were not sufficient to overcome countermeasures taken by coca farmers. U.S. officials also noted the increase in estimated coca cultivation levels from 2005 through 2007 may have been due, at least in part, to the U.S. government's decision to increase the size of the coca cultivation survey areas in Colombia beginning in 2004. Furthermore, in 2008 we reported that estimated cocaine production was about 4 percent greater in 2006 than in 2000, with 550 metric tons produced in 2006 compared to 530 metric tons in 2000. Since our 2008 report, ONDCP has provided additional data that suggest significant reductions in potential cocaine production in Colombia despite the rising cultivation and estimated production numbers we had cited. ONDCP officials have noted that U.S.-supported eradication efforts had degraded coca fields, so that less cocaine was being produced per hectare of cultivated coca. According to ONDCP data, potential cocaine production overall has dropped from 700 metric tons in 2001 to 295 metric tons in 2008, a decrease of about 58 percent. According to ONDCP officials, decreases in cocaine purity and in the amount of cocaine seized at the Southwest Border since 2006 tend to corroborate the lower potential cocaine production figures. In interpreting this additional ONDCP data, a number of facts and mitigating circumstances should be considered. First, increasing effectiveness of coca eradication efforts may not be the only explanation for the data that ONDCP provided. Other factors, such as dry weather conditions, may be contributing to these decreases in potential cocaine production. Also, other factors, such as increases in cocaine flow to West Africa and Europe, could be contributing to decreased availability and purity of cocaine in U.S. markets. Additionally, ONDCP officials cautioned about the longer-term prospects for these apparent eradication achievements, because weakened economic conditions in both the U.S. and Colombia could hamper the Colombian government's sustainment of eradication programs and curtail the gains made. Moreover, as we noted in 2008, reductions in Colombia's estimated cocaine production have been partially offset by increases in cocaine production in Peru and, to a lesser extent, Bolivia. It remains to be seen whether cocaine production in Peru and Bolivia will continue to increase and whether Peru will return to being the primary coca-producing country that it was through the 1980s and into the 1990s. According to ONDCP data, the United States has fallen slightly short of its cocaine interdiction targets each year since the targets were established in 2007. The national interdiction goal calls for the removal of 40 percent of the cocaine moving through the transit zone annually by 2015. The goal included interim annual targets of 25 percent in 2008 and 27 percent in 2009. However, since 2006, cocaine removal rates have declined and have not reached any of the annual targets to date. The removal rate dropped to 23 percent in 2007 and 20 percent in 2008 (5 percentage points short of the target for that year), then rose to about 25 percent in 2009, still short of the 27 percent target for that year. ONDCP has cited aging interdiction assets, such as U.S.
Coast Guard vessels, the redirection of interdiction capacity to wars overseas, and budget constraints, as contributing factors to these lower-than-desired success rates. Moreover, the increasing flow of illicit narcotics through Venezuela and the continuing flow through Mexico pose significant challenges to U.S. counternarcotics interdiction efforts. A number of factors have limited the effectiveness of U.S. counternarcotics efforts. These factors include a lack of planning by U.S. agencies to sustain some U.S.-funded programs over the longer term, limited cooperation from partner nations, and the adaptability of drug producers and traffickers. U.S. agencies had not developed plans for how to sustain some programs, particularly those programs providing assets, such as boats, to partner nations to conduct interdiction efforts. Some counternarcotics initiatives we reviewed were hampered by a shortage of resources made available by partner nations to sustain these programs. We found that many partner nations in the transit zone had limited resources to devote to counternarcotics, and many initiatives depended on U.S. support. Programs aimed at building maritime interdiction capacity were particularly affected, as partner nations, including Haiti, Guatemala, Jamaica, Panama, and the Dominican Republic, were unable to use U.S.-provided boats for patrol or interdiction operations due to a lack of funding for fuel and maintenance. Despite continued efforts by DOD and State to provide these countries with boats, these agencies had not developed plans to address the long-term sustainability of these assets over their expected operating life. Also, we found in 2006 that the availability of some key U.S. assets for interdiction operations, such as maritime patrol aircraft, was declining, and the United States had not planned for how to replace them. According to the Joint Interagency Task Force-South (JIATF-South) and other cognizant officials, the declining availability of P-3 maritime patrol aircraft was the most critical challenge to the success of future interdiction operations. Since then, DOD has taken steps to address the issue of declining availability of ships and aircraft for transit zone interdiction operations by using other forms of aerial surveillance and extending the useful life of P-3 aircraft. Recently, DOD's Southern Command officials told us that they plan to rely increasingly upon U.S.-supported partner nations for detection and monitoring efforts as DOD capabilities in this area diminish. However, given the concerns we have reported about the ability of some partner nations to sustain counternarcotics-related assets, it remains to be seen whether this contingency is viable. Our work in Colombia, Mexico, and drug transit countries has shown that cooperative working relationships between U.S. officials and their foreign counterparts are essential to implementing effective counternarcotics programs. The United States has agreed-upon strategies with both Colombia and Mexico to achieve counternarcotics-related objectives and has worked extensively to strengthen those countries' capacity to combat illicit drug production and trafficking. For example, to detect and intercept illegal air traffic in Colombian airspace, the United States and Colombia collaborated to operate the Air Bridge Denial Program, which the Colombian Air Force now operates independently.
Also, in Mexico, increased cooperation with the United States led to a rise in extraditions of high-level cartel members, demonstrating a stronger commitment by the Mexican government to work closely with U.S. agencies to combat drug trafficking problems. Similarly, in 2008, we reported that in most major drug transit countries, close and improving cooperation has yielded a variety of benefits for the counternarcotics effort. In particular, partner nations have shared information and intelligence leading to arrests and drug seizures, participated in counternarcotics operations both at sea and on land, and cooperated in the prosecution of drug traffickers. However, corruption within the governments of partner nations can seriously limit cooperation. For example, in 2002, the U.S. government suspended major joint operations in Guatemala when the antinarcotics police unit in that country was disbanded in response to reports of widespread corruption within the agency and its general lack of effectiveness in combating the country's drug problem. In the Bahamas, State reported in 2003 that it was reluctant to include Bahamian defense personnel in drug interdiction operations and to share sensitive law enforcement information with them due to corruption concerns. Corruption has also hampered Dominican Republic-based money-laundering investigations, according to DEA. Afghan officials objected to aerial eradication efforts and the use of chemicals in Afghanistan, forcing eradication to be done with tractors, all-terrain vehicles, and sticks, making the effort less efficient. Furthermore, Afghan governors had been slow to grant permission to eradicate poppy fields until the concept for the central government's eradication force was changed in 2008 so that this force could operate without governor permission in areas where governors either would not or could not launch eradication efforts themselves. Deteriorating relations with Venezuela have stalled the progress of several cooperative counternarcotics initiatives intended to slow drug trafficking through that country. In 2007, Venezuela began denying visas for U.S. officials to serve in Venezuela, which complicated efforts to cooperate. Additionally, the overall number of counternarcotics projects supported by the United States in Venezuela has fallen since 2005. For example, the Government of Venezuela withdrew support from the Prosecutor's Drug Task Force in 2005 and a port security program in 2006. Drug trafficking organizations and associated criminal networks have been extremely adaptive and resourceful, shifting routes and operating methods quickly in response to pressure from law enforcement organizations or rival traffickers. In 2008, we reported that drug traffickers typically used go-fast boats and fishing vessels to smuggle cocaine from Colombia to Central America and Mexico en route to the United States. These boats, capable of traveling at speeds over 40 knots, were difficult to detect in open water and were often used at night or painted blue and used during the day, becoming virtually impossible to see. Traffickers have also used "mother ships" in concert with fishing vessels to transport illicit drugs into open waters and then distribute the load among smaller boats at sea. In addition, traffickers have used evasive maritime routes and changed them frequently.
Some boats have traveled as far southwest as the Galapagos Islands in the Pacific Ocean before heading north toward Mexico, while others have traveled through Central America's littoral waters, close to shore, where they could hide among legitimate maritime traffic. Furthermore, JIATF-South, which operates under DOD's U.S. Southern Command, reported an increase in suspicious flights, particularly those departing from Venezuela. Traffickers have flown loads of cocaine to remote, ungoverned spaces and abandoned the planes after landing. Traffickers have also used increasingly sophisticated concealment methods. For example, they have built fiberglass semisubmersible craft that could avoid both visual and sonar detection, hidden cocaine within the hulls of boats, and transported liquefied cocaine in fuel tanks. According to DOD officials, these shifts in drug trafficking patterns and methods have likely taken place largely in response to U.S. and international counternarcotics efforts in the Pacific Ocean and Caribbean, although measuring causes and effects is imprecise. In addition, according to DOD, drug trafficking organizations and associated criminal networks commonly enjoy greater financial and material resources (including weapons as well as communication, navigation, and other technologies) than do governments in the transit zone. In addition to maritime operations, drug trafficking organizations have adopted increasingly sophisticated smuggling techniques on the ground. For example, from 2000 to 2006, U.S. border officials found 45 tunnels, several built primarily for narcotics smuggling. According to DEA and Defense Intelligence Agency officials, the tunnels found were longer, deeper, and more discreet than in prior years. One such tunnel found in 2006 was a half-mile long. It was the longest cross-border tunnel discovered, reaching a depth of more than nine stories below ground and featuring ventilation and groundwater drainage systems, cement flooring, lighting, and a pulley system. In production countries, such as Colombia, drug producers also proved to be highly adaptive. In 2009, we reported that coca farmers adopted a number of effective countermeasures to U.S.-supported eradication and aerial spray efforts. These measures included pruning coca plants after spraying; replanting with younger coca plants or plant grafts; decreasing the size of coca plots; interspersing coca with legitimate crops to avoid detection; moving coca cultivation to areas of the country off-limits to spray aircraft, such as the national parks and a 10-kilometer area along Colombia's border with Ecuador; and moving coca crops to more remote parts of the country—a development that created a "dispersal effect." While these measures allowed coca farmers to continue cultivation, they also increased coca farmers' and traffickers' cost of doing business. U.S. counternarcotics programs have been closely aligned with the achievement of other U.S. foreign policy goals. U.S. assistance under Plan Colombia is a key example where counternarcotics goals and foreign policy objectives intersect. While, as of 2007, Plan Colombia had not clearly attained its cocaine supply reduction goals, the country did improve its security climate through systematic military and police engagements with illegal armed groups and the degradation of these groups' finances. Colombia saw a significant drop in homicides and kidnappings and increased use of Colombian public roads during Plan Colombia's six years.
In addition, insurgency groups such as the Revolutionary Armed Forces of Colombia (FARC) saw a decline in capabilities and finances. While these accomplishments have not necessarily led to a decrease in drug production and trafficking, they signaled an improved security climate, which is one of the pillars of Plan Colombia. In Afghanistan, we recently reported that the U.S. counternarcotics strategy has become more integrated with the broad counterinsurgency effort over time. Prior to 2008, counterinsurgency and counternarcotics policies were largely separated, and officials noted that this division ignored a nexus between the narcotics trade and the insurgency. For example, DEA drug raids yielded weapons caches and explosives used by insurgents, as well as suspects listed on Defense military target lists, and military raids on insurgent compounds also yielded illicit narcotics and narcotics processing equipment. DOD changed its rules of engagement in November 2008 to permit the targeting of persons by the military (including drug traffickers) who provide material support to insurgent or terrorist groups. Additionally, in December 2008, DOD clarified its policy to allow the military to accompany and provide force protection to U.S. and host nation law enforcement personnel on counternarcotics field operations. DEA and DOD officials stated that these changes enabled higher levels of interdiction operations in areas previously inaccessible due to security problems. DEA conducted 82 interdiction operations in Afghanistan during fiscal year 2009 (compared with 42 in fiscal year 2008), often with support from U.S. military and other coalition forces. These operations included, among other things, raiding drug laboratories; destroying storage sites; arresting drug traffickers; conducting roadblock operations; and seizing chemicals and drugs. The U.S. military and the International Security Assistance Force are also targeting narcotics trafficking and processing as part of regular counterinsurgency operations. In addition, DEA efforts to build the Counternarcotics Police of Afghanistan (CNPA) have contributed to the goal of heightening security in Afghanistan. The DEA has worked with specialized units of the CNPA to conduct investigations, build cases, arrest drug traffickers, and conduct undercover drug purchases, while also working to build Afghan law enforcement capacity by mentoring CNPA specialized units. By putting pressure on drug traffickers, counternarcotics efforts can bring stabilization to areas subject to heavy drug activity. Many counternarcotics-related programs involve supporting democracy and the rule of law in partner nations, which is itself a U.S. foreign policy objective worldwide. In Colombia, assistance for rule of law and judicial reform has expanded access to the democratic process for Colombian citizens, including the consolidation of state authority and the establishment of government institutions and public services in many areas reclaimed from illegal armed groups. Support for legal institutions, such as courts, attorneys general, and law enforcement organizations, in drug source and transit countries is not only an important part of the U.S. counternarcotics strategy but also advances State's strategic objectives relating to democracy and governance. In many of our reviews of international counternarcotics-related programs, we found that determining program effectiveness has been challenging.
Performance measures and other information about program results were often not useful or comprehensive enough to assess progress in achieving program goals. The Government Performance and Results Act of 1993 requires federal agencies to develop performance measures to assess progress in achieving their goals and to communicate their results to the Congress. The act requires agencies to set multiyear strategic goals in their strategic plans and corresponding annual goals in their performance plans, measure performance toward the achievement of those goals, and report on their progress in their annual performance reports. These reports are intended to provide important information to agency managers, policymakers, and the public on what each agency accomplished with the resources it was given. Moreover, the act calls for agencies to develop performance goals that are objective, quantifiable, and measurable, and to establish performance measures that adequately indicate progress toward achieving those goals. Our previous work has noted that the lack of clear, measurable goals makes it difficult for program managers and staff to link their day-to-day efforts to achieving the agency's intended mission. In Afghanistan, we have reported that the use of poppy cultivation and eradication statistics as the principal measures of effectiveness does not capture all aspects of the counternarcotics effort in the country. For example, these measures overlook potential gains in security from the removal of drug operations from an area and do not take into account potential rises in other drug-related activity, such as trafficking and processing of opium. Some provinces that are now poppy-free may still contain high levels of drug trafficking or processing. Additionally, according to the Special Representative for Afghanistan and Pakistan, the use of opium poppy cultivation as a measure of overall success led to an over-emphasis on eradication activities, which, due to their focus on farmers, could undermine the larger counterinsurgency campaign. ONDCP officials also criticized using total opium poppy cultivation as the sole measure of success, stating that measures of success should relate to security, such as public safety and terrorist attacks. For Plan Colombia, several programs we reviewed were focused on root causes of the drug problem, and their impact on drug activity was difficult to assess. In 2008, we reported that the United States provided nearly $1.3 billion for nonmilitary assistance in Colombia, focusing on economic and social progress and the rule of law, including judicial reform. The largest share of U.S. nonmilitary assistance went toward alternative development, which has been a key element of U.S. counternarcotics assistance and has reportedly improved the lives of hundreds of thousands of Colombians. Other social programs have assisted thousands of internally displaced persons and more than 30,000 former combatants. We reported that progress tracking of alternative development programs, in particular, needed improvement. USAID collected data on 15 indicators that measured progress on alternative development; however, none of these indicators measured progress toward USAID's goal of reducing illicit narcotics production through the creation of sustainable economic projects. Rather, USAID collected data on program indicators such as the number of families benefited and hectares of legal crops planted.
While this information helps USAID track the progress of projects, it does not help with assessing USAID’s progress in reducing illicit crop production or its ability to create sustainable projects. In 2008 we reported that U.S.-funded transit zone counternarcotics assistance encompasses a wide variety of initiatives across many countries, but State and other agencies have collected limited information on results. Records we obtained from State and DEA, including State’s annual International Narcotics Control Strategy Reports and End Use Monitoring Reports, provide information on outcomes of some of these initiatives but do not do so comprehensively. For example, in our review of State’s International Narcotics Control Strategy Reports for 2003 to 2007, we identified over 120 counternarcotics initiatives in the countries we reviewed, but for over half of these initiatives, the outcomes were unclear or not addressed at all in the reports. State has attempted to measure the outcomes of counternarcotics programs in its annual mission performance reports, which report on a set of performance indicators for each country. However, these indicators have not been consistent over time or among countries. In our review of mission performance reports for four major drug transit countries covering fiscal years 2002 through 2006, we identified 86 performance indicators directly and indirectly related to counternarcotics efforts; however, over 60 percent of these indicators were used in only one or two annual reporting cycles, making it difficult to discern performance trends over time. Moreover, nearly 80 percent of these performance indicators were used for only one country, making it difficult to compare program results among countries. Based on our report on DOD performance measures released today, we found that DOD has developed performance measures for its counternarcotics activities as well as a database to collect performance information, including measures, targets, and results. However, we have found that these performance measures lacked a number of the attributes that we consider key to being successful, such as being clearly stated and having measurable targets. It is also unclear to what extent DOD uses the performance information it collects through its database to manage its counternarcotics activities. In 2008, we reported that DEA’s strategic planning and performance measurement framework, while improved over previous efforts, had not been updated and did not reflect some key new and ongoing efforts. While DEA had assisted in counterterrorism efforts through information collection and referrals to intelligence community partners, DEA’s strategic plan had not been updated since 2003 to reflect these efforts. As such, the strategic plan did not fully reflect the intended purpose of providing a template for ensuring measurable results and operational accountability. The performance measures that were to be included in DEA’s 2009 annual performance report did not provide a basis for assessing the results of DEA’s counterterrorism efforts—efforts that include giving top priority to counternarcotics cases with links to terrorism and pursuing narcoterrorists. We have made many recommendations in past reports regarding counternarcotics programs. 
Several of our more recent recommendations were aimed at improving two key management challenges that I have discussed in my testimony today—planning for the sustainment of counternarcotics assets and assessing the effectiveness of counternarcotics-related programs. Improved planning for sustainment of counternarcotics assets. In our 2008 report on U.S. assistance to transit zone countries, we recommended that the Secretary of State, in consultation with the Secretary of Defense, (1) develop a plan to ensure that partner nations in the transit zone could effectively operate and maintain all counternarcotics assets that the United States had provided, including boats and other vehicles and equipment, for their remaining useful life and (2) ensure that, before providing a counternarcotics asset to a partner nation, agencies determine the total operations and maintenance cost over its useful life and, with the recipient nation, develop a plan for funding this cost. More consistent results reporting. In our report on U.S. assistance to transit zone countries, we recommended that the Secretary of State, in consultation with the Director of ONDCP, the Secretaries of Defense and Homeland Security, the Attorney General, and the Administrator of USAID, report the results of U.S.-funded counternarcotics initiatives more comprehensively and consistently for each country in the annual International Narcotics Control Strategy Report. Improved performance measures. Several agencies we reviewed did not have sufficient performance measures in place to accurately assess the effectiveness of counternarcotics programs. In our DOD report released today, we recommend that the Secretary of Defense take steps to improve DOD’s counternarcotics performance measurement system by (1) revising its performance measures, and (2) applying practices to better facilitate the use of performance data to manage its counternarcotics activities. For Colombia, we recommended that the Director of Foreign Assistance and the Administrator of USAID develop performance measures that will help USAID (1) assess whether alternative development assistance is reducing the production of illicit narcotics, and (2) determine to what extent the agency’s alternative development projects are self-sustaining. Such measures would allow for a clearer understanding of program effectiveness. For Afghanistan, we recommended that the Secretary of Defense develop performance targets to measure interim results of efforts to train the CNPA. We also recommended to the Secretary of State that measures and interim targets be adopted to assess Afghan capacity to independently conduct public information activities. Lastly, we recommended that the Secretary of State, in consultation with the Administrator of DEA and the Attorney General, establish clear definitions for low-, mid-, and high-level traffickers that would improve the ability of the U.S. and Afghan governments to track the level of drug traffickers arrested and convicted. In most cases, the agencies involved have generally agreed with our recommendations and have either implemented them or have efforts under way to address them. Mr. Chairman and Members of the committee, this concludes my prepared statement. I will be happy to answer any questions you may have.
Drug Control: Long-Standing Problems Hinder U.S. International Efforts. GAO/NSIAD-97-75. Washington, D.C.: February 27, 1997.
Drug Control: U.S.-Mexican Counternarcotics Efforts Face Difficult Challenges. GAO/NSIAD-98-154. Washington, D.C.: June 30, 1998.
Drug Control: Narcotics Threat From Colombia Continues to Grow. GAO/NSIAD-99-136. Washington, D.C.: June 22, 1999.
Drug Control: Assets DOD Contributes to Reducing the Illegal Drug Supply Have Declined. GAO/NSIAD-00-9. Washington, D.C.: December 21, 1999.
Drug Control: U.S. Efforts in Latin America and the Caribbean. GAO/NSIAD-00-90R. Washington, D.C.: February 18, 2000.
Drug Control: U.S. Assistance to Colombia Will Take Years to Produce Results. GAO-01-126. Washington, D.C.: October 17, 2000.
International Counterdrug Sites Being Developed. GAO-01-63BR. Washington, D.C.: December 28, 2000.
Drug Control: State Department Provides Required Aviation Program Oversight, but Safety and Security Should be Enhanced. GAO-01-1021. Washington, D.C.: September 14, 2001.
Drug Control: Difficulties in Measuring Costs and Results of Transit Zone Interdiction Efforts. GAO-02-13. Washington, D.C.: January 25, 2002.
Drug Control: Efforts to Develop Alternatives to Cultivating Illicit Crops in Colombia Have Made Little Progress and Face Serious Obstacles. GAO-02-291. Washington, D.C.: February 8, 2002.
Drug Control: Coca Cultivation and Eradication Estimates in Colombia. GAO-03-319R. Washington, D.C.: January 8, 2003.
Drug Control: Specific Performance Measures and Long-Term Costs for U.S. Programs in Colombia Have Not Been Developed. GAO-03-783. Washington, D.C.: June 16, 2003.
Drug Control: Aviation Program Safety Concerns in Colombia Are Being Addressed, but State’s Planning and Budgeting Process Can Be Improved. GAO-04-918. Washington, D.C.: July 29, 2004.
Drug Control: Air Bridge Denial Program in Colombia Has Implemented New Safeguards, but Its Effect on Drug Trafficking Is Not Clear. GAO-05-970. Washington, D.C.: September 6, 2005.
Drug Control: Agencies Need to Plan for Likely Declines in Drug Interdiction Assets, and Develop Better Performance Measures for Transit Zone Operations. GAO-06-200. Washington, D.C.: November 15, 2005.
Afghanistan Drug Control: Despite Improved Efforts, Deteriorating Security Threatens Success of U.S. Goals. GAO-07-78. Washington, D.C.: November 15, 2006.
State Department: State Has Initiated a More Systematic Approach for Managing Its Aviation Fleet. GAO-07-264. Washington, D.C.: February 2, 2007.
Drug Control: U.S. Assistance Has Helped Mexican Counternarcotic Efforts, But Tons of Illicit Drugs Continue to Flow into the United States. GAO-07-1018. Washington, D.C.: August 17, 2007.
Drug Control: U.S. Assistance Has Helped Mexican Counternarcotics Efforts, But the Flow of Illicit Drugs Into the United States Remains High. GAO-08-215T. Washington, D.C.: October 25, 2007.
Drug Control: Cooperation with Many Major Drug Transit Countries Has Improved, but Better Performance Reporting and Sustainability Plans Are Needed. GAO-08-784. Washington, D.C.: July 15, 2008.
Plan Colombia: Drug Reduction Goals Were Not Fully Met, but Security Has Improved; U.S. Agencies Need More Detailed Plans for Reducing Assistance. GAO-09-71. Washington, D.C.: October 6, 2008.
Drug Control: Better Coordination with the Department of Homeland Security and An Updated Accountability Framework Can Further Enhance DEA’s Efforts to Meet Post-9/11 Responsibilities. GAO-09-63. Washington, D.C.: March 20, 2009.
Iraq and Afghanistan: Security, Economic, and Governance Challenges to Rebuilding Efforts Should be Addressed in U.S. Strategies. GAO-09-476T. Washington, D.C.: March 25, 2009.
Drug Control: U.S. Counternarcotics Cooperation with Venezuela Has Declined. GAO-09-806. Washington, D.C.: July 20, 2009.
Status of Funds for the Mérida Initiative. GAO-10-235R. Washington, D.C.: December 3, 2009.
Afghanistan Drug Control: Strategy Evolving and Progress Reported, but Interim Performance Targets and Evaluation of Justice Reform Efforts Needed. GAO-10-291. Washington, D.C.: March 9, 2010.
Preliminary Observations on the Department of Defense’s Counternarcotics Performance Measurement System. GAO-10-594R. Washington, D.C.: April 30, 2010.
Drug Control: DOD Needs to Improve Its Performance Measurement System to Better Manage and Oversee Its Counternarcotic Activities. GAO-10-835. Washington, D.C.: July 21, 2010.
This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The overall goal of the U.S. National Drug Control Strategy, prepared by the White House Office of National Drug Control Policy (ONDCP), is to reduce illicit drug use in the United States. GAO has issued more than 20 products since 2000 examining U.S.-funded international programs aimed at reducing the supply of drugs. These programs have been implemented primarily in drug source countries, such as Colombia and Afghanistan, as well as drug transit countries, such as Mexico, Guatemala, and Venezuela. They have included interdiction of maritime drug shipments on the high seas; support for foreign military and civilian institutions engaged in drug eradication, detection, and interdiction; and rule of law assistance aimed at helping foreign legal institutions investigate and prosecute drug trafficking, money laundering, and other drug-related crimes. GAO’s work on U.S.-funded international counternarcotics-related programs has centered on four major topics: (1) Counternarcotics-related programs have had mixed results. In Afghanistan, Colombia, and drug transit countries, the United States and partner nations have only partially met established targets for reducing the drug supply. In Afghanistan, opium poppy eradication efforts have consistently fallen short of targets. Plan Colombia has met its goals for reducing opium and heroin but not coca crops, although recent data suggest that U.S.-supported crop eradication efforts over time may have caused a significant decline in potential cocaine production. Data also indicate an increase in coca cultivation in Peru and Bolivia that may eventually offset these declines. Interdiction programs have missed their performance targets each year since goals were established in 2007. (2) Several factors have limited program effectiveness. Various factors have hindered these programs' ability to reduce the supply of illegal drugs. In some cases, we found that U.S. agencies had not planned for the sustainment of programs, particularly those providing interdiction boats in transit countries. External factors include limited cooperation from partner nations due to corruption or lack of political support, and the highly adaptive nature of drug producers and traffickers. (3) Counternarcotics-related programs often advance broader foreign policy objectives. The value of these programs cannot be assessed based only on their impact on the drug supply. Many have supported other U.S.
foreign policy objectives relating to security and stabilization, counterinsurgency, and strengthening democracy and governance. For example, in Afghanistan, the United States has combined counternarcotics efforts with military operations to combat insurgents as well as drug traffickers. U.S. support for Plan Colombia has significantly strengthened Colombia's security environment, which may eventually make counterdrug programs, such as alternative agricultural development, more effective. In several cases, U.S. rule of law assistance, such as supporting courts, prosecutors, and law enforcement organizations, has furthered both democracy-building and counterdrug objectives. (4) Judging the effectiveness of some programs is difficult. U.S. agencies often lack reliable performance measurement and results reporting needed to assess all the impacts of counterdrug programs. In Afghanistan, opium eradication measures alone were insufficient for a comprehensive assessment of U.S. efforts. Also, the State Department has not regularly reported outcome-related information for over half of its programs in major drug transit countries. Furthermore, DOD's counternarcotics-related measures were generally not useful for assessing program effectiveness or making management decisions. GAO has made recommendations to the Departments of Defense (DOD) and State and other agencies to improve the effectiveness and efficiency of U.S. counternarcotics-related programs. In particular, GAO has recommended that these agencies develop plans to sustain these programs. GAO has also recommended that they improve performance measurement and results reporting to assess program impacts and to aid in decision making. Agencies have efforts under way to implement some of these recommendations.
OJJDP, one of the components of the U.S. Department of Justice, Office of Justice Programs (OJP), was established by the Juvenile Justice and Delinquency Prevention Act of 1974 (Juvenile Justice Act). Its mission is to provide national leadership, coordination, and resources to prevent and respond appropriately to juvenile delinquency and juvenile victimization. OJJDP accomplishes its mission through developing and implementing prevention programs and a juvenile justice system that protects the public safety, holds juvenile offenders accountable, and provides treatment and rehabilitative services based on the needs of juveniles and their families. OJJDP funds research and evaluation efforts, statistical studies, and demonstration programs; provides technical assistance and training; produces and distributes publications and other products containing information about juvenile justice topics; oversees activities dealing with missing and exploited children; and administers a wide variety of grant programs. OJJDP funds programs that serve juveniles directly as well as those that benefit juveniles more indirectly by focusing on system-wide changes or by increasing the capacity of governmental units or organizations. OJJDP awards grants to states, territories, localities, and private organizations through five formula and block grant (formula/block grant) programs and numerous discretionary grant programs. OJJDP administers four formula grant programs that provide funds directly to states and territories on the basis of states’ juvenile populations, and one block grant program that awards a fixed level of funds to all states and territories. Under these formula/block grant programs, states may, in turn, make subawards to other organizations such as units of local government. OJJDP awards discretionary grants through a competitive process to state governments, local governments, or individual agencies and organizations. OJP is responsible for the financial monitoring of grantees (i.e., it provides policy guidance, control, and support services in the financial management of grants), whereas OJJDP is responsible for program monitoring. For program monitoring purposes, OJJDP assigns each of its grantees a program manager who is responsible for ensuring administrative and programmatic compliance with relevant statutes, regulations, policies and guidelines of awarded grants. The program manager is also responsible for monitoring grantees’ performance and progress as related to grantees’ stated goals and objectives. OJJDP’s budget has increased significantly over the last 5 years—from about $188 million in fiscal year 1997 to about $596 million in fiscal year 2001. During this time, the Congress has created new programs and increased appropriations for some existing ones. The Congress has also provided direction each year regarding certain program areas OJJDP should fund. Overall, in fiscal year 2001, 31 percent of OJJDP’s available funds were congressionally earmarked—that is, set aside for an identifiable grantee, specified amount, and/or specific authorized purpose. For fiscal year 2001, $180 million of OJJDP’s funds were available for discretionary grant awards, of which 77 percent was earmarked. OJJDP awards the majority of its funds to grantees in its five formula/block grant programs. 
In fiscal year 2000, the latest year for which awards data were available, OJJDP awarded (1) about $354 million, or 64 percent of the total funds awarded, to these formula/block grant programs and (2) just over $200 million, or 36 percent of the total funds awarded, to a wide range of discretionary grant programs. The programs awarded the most funds in fiscal year 2000 were the Juvenile Accountability Incentive Block Grants Program ($221 million), the Formula Grants Program ($70 million), the Community Prevention Grants Program ($36 million), the Child Abuse and Neglect Program ($32 million), the Missing and Exploited Children Program ($32 million), and the Drug-Free Communities Support Program ($30 million). OJJDP awards funds to a wide range of recipients, with the majority of awarded funds going to state governments. As shown in figure 1, 67 percent of the funds OJJDP awarded in fiscal year 2000 went to states, 20 percent to nonprofit organizations, 6 percent to school districts or educational institutions, and 5 percent to local governments. However, because many grantees make subawards to other entities, the awards to these grantees do not reflect the ultimate recipients of the funds OJJDP awards. For example, under the Formula Grants Program, states pass through a minimum of two-thirds of their awarded funds to public and private nonprofit organizations. (See app. I for data from fiscal years 1996 through 2000 on (1) OJJDP funds awarded to formula/block grant versus discretionary grant programs, (2) OJJDP awards by program area, (3) OJJDP award recipients, and (4) OJJDP formula/block grant awards by state.) To identify programmatic reporting requirements for OJJDP grantees, the reasons for these requirements, and examples of information grantees have reported, we reviewed all 5 of OJJDP’s formula/block grant programs and we selected 11 of its major discretionary grant programs to review based on OJJDP officials’ input regarding which programs were “major” (e.g., number of grantees, program funding, and/or importance of the program). To identify grantee reporting requirements, and the purpose of these requirements, we met with the OJJDP program managers who monitor each of these 16 programs or with other key officials. For those programs in which OJJDP funded an outside evaluation, we also met with program managers who oversee the evaluations. We reviewed OJP’s Grant Management Policies and Procedures Manual (January 19, 2001) and Categorical Assistance Progress Report (progress report) form along with the instructions for completing the form. (See app. III for a copy of the progress report form.) We also reviewed OJJDP program documents for each of the 16 programs, including any special reporting conditions. To supplement program documents, we reviewed relevant documents from outside evaluators. We did not assess the adequacy of reporting requirements established by OJJDP or the outside evaluators. To identify specific examples of performance data that grantees reported, we asked OJJDP officials to provide progress reports for each program demonstrating a range of detail and, in some cases, we asked for reports from specific grantees. For each program, we then reviewed 3 to 15 progress reports (or individual performance reports) submitted between 1998 and 2001. We did not review progress reports from all grantees in every program, nor did we review grantees’ compliance with reporting requirements. 
For programs being evaluated by an outside evaluator, we reviewed performance data that program grantees reported to those evaluators, when available. To determine whether OJJDP requires grantees to report the number of juveniles they serve directly and to identify the number of juveniles OJJDP grantees reported serving in fiscal year 2000, we interviewed OJJDP program managers for each of the 16 programs we reviewed and examined relevant program documents, including selected semiannual progress reports. To determine whether other programs directly serve juveniles, obtain available data on the numbers of juveniles served, and learn why OJJDP does or does not require grantees to report such data, we reviewed OJJDP program literature and met with OJJDP division directors. Nevertheless, the list of OJJDP programs we identified as directly serving juveniles may not be comprehensive. We focused our data collection effort on only those programs we identified in which all grantees reported juveniles-served data. To assess the methodological rigor of the impact evaluations OJJDP has funded since 1995 of its own programs, and to provide information on the other types of evaluations OJJDP has funded, we asked OJJDP to identify all program evaluations it had funded since 1995. For each evaluation, we asked OJJDP to indicate whether it was an impact evaluation, whether the program being evaluated was funded by OJJDP, and whether the evaluation had been completed or was ongoing. Overall, OJJDP identified 35 evaluations funded since 1995. Eleven of the 35 evaluations were impact evaluations of OJJDP programs, and all were ongoing. For each of the 10 impact evaluations we assessed, we asked OJJDP to provide any documentation relevant to the design and implementation of the evaluation methodologies, such as the initial and supplemental proposals, peer review documents, progress reports, reports of interim results, and correspondence between OJJDP and the evaluators. In addition, we contacted OJJDP officials to resolve any questions that we had regarding the documentation and to request any missing documents. We did not contact the program manager responsible for each evaluation. To assess the methodological rigor of the 10 impact evaluations, we used a data collection instrument to collect information systematically on each program being evaluated and the features of the evaluation methodology. We based our data collection and assessments on generally accepted social science standards. We examined such factors as whether evaluation data were collected before and after program implementation, how program effects were isolated (i.e., the use of nonprogram participant comparison groups or statistical controls), and the appropriateness of sampling, outcome measures, statistical analyses, and any reported results. A social scientist with training and experience in evaluation research and methodology read and coded the documentation for each evaluation. A second social scientist reviewed each completed data collection instrument and the relevant documentation for the impact evaluation to verify the accuracy of every coded item. We relied on documents OJJDP provided to us in April 2001 in assessing the evaluation methodologies and reporting on each evaluation’s status. 
For each of the remaining 24 evaluations, which included nonimpact evaluations of OJJDP-funded programs, as well as evaluations of juvenile justice programs that OJJDP did not fund, we asked OJJDP to provide general descriptive information, such as the type and purpose of the evaluation, the number of sites involved, and whether the evaluation included data on all participants. We did not assess the methodological rigor of these evaluations. We conducted our work at OJJDP Headquarters in Washington, D.C., from September 2000 to August 2001 in accordance with generally accepted government auditing standards. OJP requires virtually all OJJDP grantees to submit semiannual progress reports, which OJJDP uses to help monitor grantees’ project implementation and achievement of the goals they identified in their grant applications. To this end, OJP provides grantees standard, general guidance on the types of program information they are to report, such as narrative information on the status of each of their project goals and the quantitative results of their projects. In addition to this standard requirement, grantees for some of OJJDP’s programs are subject to additional reporting requirements that apply only to their respective programs. Our review of 16 major programs showed that grantees in 8 of the programs were required to comply only with the standard requirement for information, and grantees in the other 8 programs were required to report additional specified data. The specific reporting requirements were established primarily to help evaluate the results of these programs. Table 1 identifies the 16 programs we reviewed and the reason for the standard or specific reporting requirements for each program. Our review of selected progress reports from the 16 programs showed that, in all but the Formula Grants Program, grantees reported information on the status of their activities and accomplishments in response to the standard requirements, although the details they reported varied as did the projects themselves. Grantees in the eight programs with specific reporting requirements reported a variety of descriptive information and performance data to OJJDP and/or outside evaluators. (See app. IV for a program description, summary of reporting requirements, and examples of what grantees reported in each of the 16 programs we reviewed.) All OJJDP grantees are required to report on their project activities and accomplishments to OJP twice a year using the Office of Justice Programs’ Categorical Assistance Progress Report (progress report) form. The form is unstructured and is to be completed in narrative and/or chart form. The standard instructions to grantees for completing the form state that grantees should report information on the status of each of their projects’ goals scheduled to be achieved during the reporting period and set forth in their grant application, including quantitative project results based on performance measures. Grantees are also instructed to report on actions planned to resolve any implementation problems and request any technical assistance they might need. OJJDP program managers are to use reported information to help monitor grantees’ project implementation. OJJDP officials explained that because the progress report is intended as just one of their monitoring tools, this standard, general guidance meets their basic oversight needs. 
They further explained that guidance needs to be somewhat general given the variation that can occur among projects as grantees tailor them to meet local needs and circumstances. OJJDP encourages grantees to design projects that meet the unique needs of their own communities, and therefore grantees do not always report on the same performance measures. Although OJJDP program managers have additional ways of keeping abreast of grantees’ projects, such as phone calls and on-site visits, OJJDP officials indicated they would prefer to require and obtain more specific, and even more frequent, information through the progress reports or other reporting mechanisms. However, according to these officials, they are reluctant to impose additional reporting requirements on grantees because of the Paperwork Reduction Act of 1995, which seeks to ensure that federal agencies balance their need to collect information with the reporting and paperwork burden they impose. Under the Act, federal agencies have an obligation to keep the paperwork burden they impose as low as possible, and agencies must receive prior approval from the Office of Management and Budget for information collection requests. We reviewed selected progress reports that grantees from each of the 16 programs submitted to OJJDP and found that, in all but the Formula Grants Program, grantees reported on the status of their projects. Grantees reported input, output, or outcome data related to the process, implementation, and/or accomplishments of their projects. They included information such as subgrant awards, specific meetings held, staff hired, implementation difficulties, number of project participants, and behavioral change in youths. However, the particular information grantees reported varied, as did their projects. This variation, coupled with the unstructured format of the progress report, makes it difficult to aggregate reported data. Fourteen of the programs we reviewed had multiple grantees and the information these grantees reported in response to the standard guidance varied, even within each program. For example, we found the following: Under the Tribal Youth Program—a program that recognizes differences among tribes and encourages diversity in their projects—grantees must implement projects in keeping with at least one of four broad purpose areas. One tribe reported that it had completed the renovation of a youth center; another reported that it had collected examples of other tribes’ juvenile law enforcement codes and started drafting model codes adapted for each of its villages. A third tribe reported that the resignation of its community truancy officer had impacted its ability to reduce instances of misbehavior in school. Under another program—the Juvenile Accountability Incentive Block Grants Program—grantees (states) and their subgrantees (communities) have undertaken a variety of projects and, thus, report different information. In this program, states and their communities can choose from 12 different purpose areas under the broad objective of promoting greater accountability in the juvenile justice system. Thus, one state reported that one community hired a juvenile court intake officer and included that officer’s caseload; the same state reported that another community was unable to start a project because a local agency declined to participate in its project. Another state reported on the number of youths enrolled in one community’s drug testing project and reported the number of drug screening tests performed. 
In the Drug-Free Communities Support Program, grantees design projects to meet the needs of their local communities; thus, the projects and the information grantees reported varied. For example, one grantee reported that it helped local students produce a 30-second anti-smoking commercial in collaboration with the local health department and further reported that only 9 of 50 invited members attended a strategic planning meeting it had held. Another grantee reported making presentations on drug abuse to 146 young men at the local juvenile detention facility, and that its pre- and post-assessments continually showed that the young men gained knowledge of the harmful effects of alcohol and drugs. In the Formula Grants Program, not all grantees reported on the objectives and accomplishments of their subgrantees’ projects, as required. OJJDP requires grantees in this program to complete an Individual Project Report (IPR) for each subgrantee. The instructions for completing IPRs are similar to the instructions grantees in other programs receive for completing semiannual progress reports. Our review of IPRs from selected states showed that, for one state, none of the IPRs contained any information on subgrantees’ accomplishments, and some did not include information on subgrantees’ program objectives. For another state, neither OJJDP nor the state was able to provide us with copies of completed IPRs because OJJDP’s automated reporting system for states was inoperable. Two of the 16 programs we reviewed had only one grantee each. Although both received only standard reporting guidance, they reported more detailed, quantitative output and outcome data than grantees in the other programs that received only standard guidance. In the first program—the National Clearinghouse and Resource Center for Missing and Exploited Children—the grantee has voluntarily reported detailed information in a structured format. In the second program—the Model Courts Program—OJJDP has emphasized that the grantee should include quantitative performance data in its progress reports, but did not prescribe the specific indicators on which the grantee must report. OJJDP has designated the National Center for Missing and Exploited Children (NCMEC) as the grantee for the National Clearinghouse and Resource Center for Missing and Exploited Children, and NCMEC has developed its own standardized reporting format that covers 10 categories. This format collects numbers and other information on each category, such as missing children cases, exploited children cases, public affairs, and hotline calls. NCMEC reports to OJJDP quarterly, rather than semiannually, because this timeframe matches the reporting structure of its data management system. For the first quarter of 2001, NCMEC reported various output and outcome data that included receiving 24,983 calls through its hotline; assisting in the recovery of 1,610 missing children; receiving 5,291 tips on its online child pornography tip line; and displaying pictures of 1,399 missing children, which resulted in locating 257 children. The sole grantee of the Model Courts Program—the National Council of Juvenile and Family Court Judges—also reports quantitative information in its semiannual progress reports. Although OJJDP has not specified the performance indicators on which the Council must report, it has emphasized to the Council the need for quantitative performance data in the semiannual progress reports.
As a result, the Council includes specific quantitative output data in its progress reports. For example, it reported that during the last half of 2000 it distributed 17,818 technical assistance bulletins, conducted 96 training presentations, and made 31 site visits to model courts. In addition, the Council voluntarily publishes an annual report that provides more detailed information on the accomplishments of the individual model courts, such as a reduction in the number of children in court custody. OJJDP officials told us that they do not require the Council to provide this report, but they have instructed it to report detailed performance data on the activities of the model courts, when such data exist. They further explained that if the Council were to stop publishing an annual report, OJJDP would require it to include model court performance data in its progress reports. In eight of the major programs we reviewed, grantees were given additional, more explicit reporting instructions requiring them to report on the same specific performance measures as other grantees in the same program. In these programs, additional requirements were established to meet the evaluative needs of OJJDP or an outside evaluator. In one of the eight programs, requirements were also established to ensure grantee compliance with certain requirements of the Juvenile Justice Act as well as for program assessment. The specific requirements of each of these eight programs varied, as they were tailored to each program. However, grantees in all these programs were still required to routinely report narrative information on the status of their activities and accomplishments through semi-annual progress reports. In five of these programs, OJJDP and/or an evaluator have established specific reporting requirements primarily to support an outside program evaluation. For example, as a condition of receiving a Juvenile Mentoring Program grant, OJJDP requires all grantees to participate fully in the evaluation by providing data to the evaluator. This evaluator requires grantees to report their data through quarterly progress reports that are similar to semiannual progress reports. The required data include information on youths participating in each project, participating mentors, and youth-mentor matches. For instance, grantees are required to report family structure information for participating youths. The evaluator aggregates such data from all grantees and has reported, for example, that 56 percent of participating youths lived with their mother only, 20 percent lived with both parents, 4 percent lived with their father only, and 21 percent were in other living arrangements. In two of the eight programs—the Internet Crimes Against Children Task Force Program and Children’s Advocacy Centers—specific reporting requirements were established so that OJJDP could assess program accomplishments. The governing board of the Internet Crimes Task Force, in agreement with OJJDP, identified monthly performance measures on which grantees must report, such as the number of arrests made, search warrants issued, subpoenas served, and cases opened by the task forces. Under the Children’s Advocacy Centers program, OJJDP prescribed specific performance measures on which grantees must report, such as the number of practitioners trained, training conferences held, and publications distributed. 
In this program, specific reporting requirements were established not only for OJJDP to assess the program’s overall accomplishments, but also to help grantees assess their own projects. The eighth program—the Formula Grants Program—has requirements that are statutorily based and further spelled out by OJJDP in program regulations. Program reporting requirements were established to ensure grantees comply with the four core requirements of the Juvenile Justice Act and as a basis for assessing the effects of the program. These core requirements are (1) deinstitutionalization of status offenders, (2) separation of juveniles from adult offenders, (3) removal of juveniles from adult jails and lockups, and (4) addressing efforts to reduce disproportionate minority confinement. OJJDP regulations list in detail the information on which states must report. For instance, regarding the deinstitutionalization of status offenders, states must report the total number of accused and adjudicated status offenders and nonoffenders placed in facilities that are, for example, not near their home community. (See app. VI for a summary of states’ compliance with the core requirements of the Juvenile Justice Act.) According to the compliance monitoring coordinator for the Formula Grants Program, grantees’ reports on compliance with the core requirements also provide the basis for OJJDP to assess the effects of the program. We identified eight programs that serve juveniles directly and whose grantees reported such data for fiscal year 2000. However, OJJDP often does not require grantees to provide this information, in large part because not all of its programs are intended to provide direct services to juveniles. We identified eight programs in which grantees directly serve juveniles and in which all grantees report the number of juveniles served to either OJJDP or an outside evaluator. About 400 grantees in these eight programs directly served close to 142,000 juveniles in one year. For example, in fiscal year 2000, the Juvenile Mentoring Program reported serving about 8,500 juveniles, and in calendar year 2000, the Court Appointed Special Advocate Program reported serving 70,348 youths. Table 2 shows the programs we identified as directly serving juveniles and reporting such data for fiscal year 2000. We also identified a program in which all subgrantees directly serve juveniles, but not all subgrantees report such data. The national grantee for the Children’s Advocacy Centers program reported that its subgrantees served over 100,000 juveniles in calendar year 2000, but this number represents only those juveniles served by subgrantees accredited through a national membership council. For several reasons, OJJDP does not typically require grantees to report the number of juveniles their projects directly serve. First and foremost, many of OJJDP’s programs are not intended to serve juveniles directly. The Juvenile Justice Act established OJJDP for a variety of purposes, many of which involve indirect benefits to juveniles, rather than direct services. Statutorily established purposes for OJJDP include the following: To provide technical assistance to and training programs for professionals who work with delinquents. To provide for the evaluation of federally assisted juvenile justice and delinquency prevention programs. To establish a centralized research effort on problems of delinquency. To assist state and local governments in improving the administration of justice and services for juveniles who enter the system.
Some of OJJDP’s programs, in their entirety, provide indirect benefits, rather than direct services, to juveniles. OJJDP’s Model Courts Program, for example, benefits juveniles indirectly by providing training and technical assistance to court personnel to improve their handling of child abuse and neglect cases. The Internet Crimes Against Children Task Force Program also benefits juveniles indirectly by helping to identify and arrest pedophiles and child pornographers who use the Internet to prey on children. Furthermore, in commenting on a draft of this report, the Assistant Attorney General pointed out that although OJJDP’s research projects do not typically provide services directly, their results can potentially help thousands of juveniles. OJJDP officials provided the following additional reasons for not requiring all grantees to report the number of juveniles their projects directly serve: A common interpretation of “juveniles served” does not exist across, or even within, programs. For example, grantees in one program might consider the number of juveniles served as those assessed for services but referred elsewhere, while grantees in a different program might consider only juveniles who received at least 10 sessions of therapy. Even within the same program grantees may not share a common definition of “juveniles served.” One program grantee might report on the number of juveniles who received intensive one-on-one drug prevention services over an extended period of time, while another grantee in that same program might report on the number who attended a one-time presentation on drug prevention. Without a common interpretation of “juveniles served,” the data grantees report would be inconsistent and would have little value. For some programs, directly serving juveniles may be only one of a number of intended program purposes and thus, OJJDP does not typically require all grantees within these programs to report such data. For example, in the Formula Grants Program, states and their subgrantees can choose from among 14 different program areas related to preventing and controlling delinquency and improving juvenile justice systems. Under the program area of “planning and administration,” for instance, states can fund planning projects that benefit juveniles indirectly, such as developing a comprehensive state plan to identify juvenile service needs and programs that address those needs over the long term. However, under the area of “illegal drugs and alcohol,” a local subgrantee can serve juveniles directly by establishing a drug and alcohol abuse prevention project. Juveniles-served data could be used inappropriately to measure the effectiveness of a program whose primary purpose may not be to provide direct services to juveniles. For example, the primary purpose of State Challenge Activities is to stimulate system-wide change, although many of its 10 activity areas also promote projects intended to directly serve juveniles. However, grantees in this program are expected to implement direct service projects within the broader context of promoting system- wide change. For instance, one State Challenge Activities grantee used funds it received under the “deinstitutionalization of status offenders” activity area to establish two community projects that provide housing for runaway juveniles, many of whom are girls. 
The grantee intends to use its experiences with these two new projects to initiate system-wide change by developing a comprehensive model program expressly geared to serving runaway girls. By focusing on the number of girls served by this program, one might fail to see that its primary purpose was to develop a comprehensive model program for serving runaway girls. OJJDP has funded 35 evaluations since 1995, including 11 evaluations intended to measure the impact of OJJDP-funded programs. We reviewed the methodological rigor of 10 of the 11 evaluations. Half of these 10 evaluations are in formative stages, while the other five are well into implementation. None had been completed at the time of our review. Our in-depth review of these 10 evaluations shows that although several are well-designed and use, or plan to use, sophisticated data analysis methods, others raise concerns as to whether the evaluations will produce definitive results. We recognize that impact evaluations, such as the types that OJJDP is funding, can encounter difficult design and implementation issues. For some of the evaluations we reviewed, program flexibility has added to the complexity of designing evaluations. A lack of comparison groups to aid in isolating the impacts of some programs, as well as data collection problems, could compromise some evaluation results. According to OJJDP officials, OJJDP weighs a number of factors when deciding which programs to evaluate and what kind of evaluations to fund. Given its budget, it considers how much of its discretionary funds to spend in support of evaluation activities. In deciding which of its programs to evaluate, OJJDP gives priority consideration to programs that have been mandated by the Congress. Other criteria OJJDP uses to determine whether a program should be evaluated include the program’s level of funding and its uniqueness, as well as the feasibility and cost of an evaluation and its potential benefits to the field. Similar criteria, along with congressional interest and other federal agencies’ willingness to co-fund an evaluation, are also involved in decisions to evaluate programs that OJJDP does not fund. The 10 impact evaluations of OJJDP-funded programs that we assessed vary in size and scope. The cost to conduct these evaluations ranges from $300,000 to well over $5 million; however, some of these grants involve both impact and process evaluations and the cost of the impact portion alone cannot be separated from the total. All 10 evaluations are multi-year, multi-site projects. The number of evaluation sites ranges from 2 in the Rural Gang Initiative to 175 in the evaluation of the Juvenile Mentoring Program. As of April 2001, three evaluations had produced interim findings of some program impacts. (See app. II for information on OJJDP’s process for disseminating products with interim findings as well as other products.) Program evaluation is an inherently difficult task because the objective is to isolate the impact of a particular program from all other factors that could have caused a change consistent with the intent of the program, or worked against that change. Given that programs, such as those funded by OJJDP, operate in an ever-changing environment and involve juveniles and adults who themselves constantly change, producing definitive evaluation results can be arduous.
For example, the impact of a hypothetical program intended to improve students’ grades could be confounded by the effects of an outside-of-school mentoring program, the transfer of high-performing students to a magnet program, changes in school faculty, a new scholarship program, a severe flu season that results in widespread student absences from school, and a myriad of other factors. Our in-depth review of the 10 impact evaluations of OJJDP programs showed that a number of these evaluations are particularly complex because local grantees design their own projects to fit their communities’ needs. (See app. VII for descriptions of the impact evaluations of its own programs that OJJDP has funded since 1995.) Although this customization may make sense from a program perspective, it makes it more difficult to evaluate the program. Instead of assessing a single, homogeneous program with multiple grantees, the evaluation must assess the effects of multiple configurations of a program. Although all of the grantees’ projects under each program being evaluated are intended to achieve the same or similar goals, an aggregate analysis could mask differences in individual projects’ effectiveness and, thus, not result in information about which configurations of projects work and which do not. OJJDP’s evaluation of the Enforcing the Underage Drinking Laws Program (discretionary grant component) exemplifies this situation. In implementing their projects, states and local communities have substantial latitude to employ media campaigns, merchant education, compliance checks, youth leadership training, or a variety of other activities to deter underage drinking. Similarly, under the Positive Action Through Holistic Education program, local educators develop their own ways to prevent student violence and behavior problems based on their assessments of the causes of these problems in their schools. Because of the limited number of sites (two school districts) being evaluated and the likely differences in how each school has developed its own project, the resulting evaluation may not provide information that could be generalized to a broader implementation of the program. A standard way for evaluators to isolate the impacts of a program from other potential factors that could have influenced change is to use a comparison group as a benchmark. In the hypothetical example cited above concerning a program to improve students’ grades, a second set of students who are not in the program but are matched in academic performance and exposed to all of the same factors (except the program) could provide a baseline from which to assess the impact of the program. The grades of students in the two groups before and after the program would provide the data from which to measure program impacts. Without the benefit of the comparison group as a baseline, it is difficult or impossible to isolate changes resulting from the program from changes due to other factors. The designs of two of the five evaluations that are well into implementation lack an appropriate comparison group. The evaluation of the Juvenile Mentoring Program—a one-on-one mentoring program for youths—compares youths entering the program to those completing it. However, a variety of other factors, including the fact that youths in the program are likely to mature and, thus, improve somewhat spontaneously, cannot be ruled out as a rival cause of change from the beginning to the end of the program.
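To make the comparison-group logic described above concrete, the brief sketch below works through the arithmetic with entirely hypothetical numbers; it is an illustration only and is not drawn from any OJJDP-funded evaluation or from data in this report. The average grades and variable names are assumptions introduced solely for the example.

```python
# Minimal illustrative sketch only: all numbers are hypothetical and are not drawn
# from any OJJDP-funded evaluation or from data in this report. The sketch shows how
# pre- and post-program measures for a program group and a matched comparison group
# can be combined into a single impact estimate, so that change common to both groups
# (such as normal maturation) is not credited to the program.

program_group = {"pre": 72.0, "post": 80.0}      # hypothetical average grades, participants
comparison_group = {"pre": 71.5, "post": 76.0}   # hypothetical matched nonparticipants

change_program = program_group["post"] - program_group["pre"]           # 8.0 points
change_comparison = comparison_group["post"] - comparison_group["pre"]  # 4.5 points

# Without a comparison group, the full 8.0-point gain would be attributed to the
# program; the comparison group suggests about 4.5 points would have occurred anyway.
estimated_program_impact = change_program - change_comparison           # 3.5 points

print(f"Change for program participants: {change_program:.1f} points")
print(f"Change for comparison group:     {change_comparison:.1f} points")
print(f"Estimated impact of the program: {estimated_program_impact:.1f} points")
```

Real evaluations would, of course, also need adequate sample sizes, comparable groups, and complete data collection; the sketch is intended only to show why the comparison group's change serves as the baseline against which program-driven change is judged.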
Although the evaluators are employing multiple and innovative strategies to determine the effectiveness of the program in achieving its objective, the lack of a comparison group of nonparticipant youths is an obstacle to identifying definitive outcomes. An evaluation of the Partnerships to Reduce Juvenile Gun Violence Program includes a comparison of before and after crime statistics in project communities with crime statistics for the same time frames for the cities in which the projects operate. However, citywide crime statistics would no doubt include data from communities that are similar to the project community as well as from those that are not. Thus, the differences between citywide and project community baselines make it difficult to attribute potential findings to the program. Of the five programs for which evaluations are still being developed, two (the Safe Start Initiative and the Rural Gang Initiative) did not seem to have plans for comparison groups at the time of our review. Another (Parents Anonymous) anticipates using a comparison group, but as yet had not developed specific plans for one. Regardless of the quality of a program evaluation design, data collection problems can compromise the validity of findings. Data collection problems may affect the validity of the findings for three of the five evaluations that are currently completing or have completed data collection. The Juvenile Mentoring Program evaluation has experienced problems obtaining behavioral measures and school performance data with which to gauge program-driven change. The Comprehensive Gang Initiative evaluation has also experienced data collection problems such as the lack of fully adequate comparison youth data at all or most sites, missing police histories, and missing self-reported data. The Intensive Aftercare evaluation has experienced survey response rate shortfalls, in some cases obtaining response rates of less than 30 percent, which may affect the validity of the findings. In commenting on a draft of this report, the Assistant Attorney General said that the poor response rates for some elements at different sites were particularly disappointing because this evaluation had a strong random assignment design; however, the strategies for obtaining adequate data turned out to be insufficient. She added that the program staff who were required to collect data did not give data collection adequate priority in comparison to their other duties. This was particularly true of data regarding the comparison groups. In addition to funding impact evaluations of OJJDP programs, OJJDP has funded 24 other evaluations since 1995—11 nonimpact evaluations of OJJDP programs, 9 impact evaluations of programs that were not supported by OJJDP funds, and 4 nonimpact evaluations of programs that were not funded by OJJDP. The nonimpact evaluations are not intended to determine the outcomes of the various programs, but rather how well the programs have been implemented. For example, OJJDP has funded a process evaluation of its SafeFutures program to learn more about the process of community mobilization and collaboration in building a comprehensive program of prevention and intervention for at-risk youths and juvenile offenders. OJJDP has also funded evaluations of programs that are funded by entities other than OJJDP. For example, although OJJDP does not fund the Act Now Truancy Program, it has funded a nonimpact evaluation of this program. 
The Act Now Truancy Program grew out of a unique Arizona law that allowed prosecutors to issue citations to parents whose children were chronically truant. Because there was a great deal of interest in this approach and OJJDP believed it provided a unique opportunity to learn about the impact of an unusual approach, it funded an evaluation of the program. Appendix VIII contains brief descriptions of these 24 other evaluations. Although there is great interest in assessing results of programs, it is extremely difficult to design and execute evaluations that will provide definitive information. Our in-depth review of 10 OJJDP-funded evaluations of OJJDP's own programs undertaken since 1995 has shown that, in some cases, the flexibility that can be beneficial to grantees in tailoring programs to meet their communities' needs has added to the complexities of designing impact evaluations that will result in valid findings. Furthermore, the lack of an appropriate comparison group or sites and/or problems in data collection may compromise the reliability and validity of some of these evaluations. Because half of these 10 evaluations are in relatively early stages, any potential problems with comparison group issues or data collection shortfalls could still be resolved over the course of the evaluation. We recognize that not all evaluation issues that can compromise results are resolvable, including many involving comparison groups and data collection. However, to the extent that appropriate comparison groups can be established and tracked and data collection issues can be overcome, the validity of the evaluation results can be enhanced. Our review of the recent OJJDP program evaluations has shown that, of the five that are in or near their final stages, some problems with valid comparison groups and/or data collection could compromise the usefulness of some of their results. Five other program evaluations are in a formative stage where comparison group issues and data collection strategies are not yet finalized. Accordingly, we recommend that the Attorney General direct the Administrator of OJJDP to assess the five impact evaluations in the formative stages to address potential comparison group and data collection problems and, on the basis of that assessment, initiate any needed interventions to help ensure that the evaluations produce definitive results. We provided a copy of this report to the Attorney General for review and comment. In an October 15, 2001 letter, the Assistant Attorney General commented on a draft of this report. Her comments are summarized below and are presented in their entirety in appendix IX. Her detailed comments have been addressed in the report as appropriate. The Assistant Attorney General said that the draft report provides useful information that highlights areas warranting attention. She added that the draft report would be an important tool that OJP will use to improve the quality of its evaluations and to design programs that will achieve greater impact. Furthermore, OJP will assess the five impact evaluations that are currently in their formative stages to address potential comparison group and data collection problems. On the basis of that assessment, OJJDP will initiate any needed interventions to help ensure that evaluations produce definitive results. 
The Assistant Attorney General said that OJP agrees that it should always strive for more rigorous and scientifically sound evaluation designs and that the inclusion of comparison groups would certainly strengthen the interpretation of evaluation results. However, she disagreed with our reliance on the use of comparison groups as the only valid evaluation design for two primary reasons. First, OJJDP seeks to conduct juvenile justice evaluations in a real-world setting, where laboratory-like comparison groups may not be possible. Second, sufficient funding is not available for including comparison groups in every evaluation. The Assistant Attorney General also said that given the choice between conducting far fewer evaluations, all with comparison groups, and conducting a greater number of evaluations under less-than-ideal conditions, OJJDP’s Research and Program Development Division works hard to tread a middle ground that satisfies needs for both quality and quantity. She further pointed out that a growing number of policy makers and evaluators firmly believe that community-based initiatives do not lend themselves to the kind of traditional evaluations that this draft report proposes. Accordingly, some researchers have strongly urged that new approaches to evaluation be developed. In addition, the Assistant Attorney General said that our report suggests that more evaluations using experimental or quasi-experimental evaluation designs should be funded. She added that many communities reject participation in programs that are evaluated in this way (i.e., with control or comparison groups) because they feel that it requires them to purposely exclude youths from receiving services. In her comments, the Assistant Attorney General seemed to be using the terms “comparison group” and “control group” interchangeably. However, control groups are commonly associated with experiments involving random assignment. We do not intend our statements regarding the need for comparison groups in impact evaluations to imply that random assignment is necessary for studies to be valid. Furthermore, we recognize that groups can be compared after controlling for differences by methods other than random assignment, including statistical methods and various methods of matching. For impact evaluations, comparisons should be made, and should involve individuals who were not subject to the program or treatment being evaluated. However, not all the evaluations we assessed made such comparisons. We also recognize that not all evaluation issues that can compromise results are resolvable, even with the use of comparison groups. We recognize as well that designing evaluations with comparison groups can be expensive and that funding limitations could preclude their use in all evaluations. In addition, obtaining participants can be troublesome, as the Assistant Attorney General pointed out. However, the validity of evaluation results can be enhanced through establishing and tracking comparison groups. If other ways exist to effectively isolate the impacts of a program, comparison groups may not be needed. However, we saw no evidence of other methods being used in the 10 impact evaluations we assessed. While studies that do not have appropriate comparison groups can provide useful information, they should not be considered impact evaluations.
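To make the distinction concrete, the following sketch is purely illustrative: it uses hypothetical youths, characteristics, and outcome values (none drawn from any OJJDP evaluation) to show one simple way a comparison group can be formed without random assignment, by matching nonparticipating youths to program participants on a few observed characteristics and then comparing average outcomes.

```python
# Illustrative sketch only: hypothetical youths, characteristics, and outcome values.
from math import sqrt
from statistics import mean

# Each record: a few observed characteristics plus a hypothetical follow-up outcome
# (e.g., number of rearrests in the year after the program period).
youths = [
    {"id": 1, "age": 15, "priors": 2, "attendance": 0.80, "outcome": 1, "participant": True},
    {"id": 2, "age": 16, "priors": 4, "attendance": 0.60, "outcome": 0, "participant": True},
    {"id": 3, "age": 14, "priors": 1, "attendance": 0.90, "outcome": 0, "participant": True},
    {"id": 4, "age": 15, "priors": 2, "attendance": 0.78, "outcome": 2, "participant": False},
    {"id": 5, "age": 16, "priors": 5, "attendance": 0.55, "outcome": 1, "participant": False},
    {"id": 6, "age": 14, "priors": 0, "attendance": 0.95, "outcome": 0, "participant": False},
    {"id": 7, "age": 17, "priors": 6, "attendance": 0.40, "outcome": 3, "participant": False},
]

def distance(a, b):
    # Crude similarity measure across the observed characteristics.
    return sqrt((a["age"] - b["age"]) ** 2
                + (a["priors"] - b["priors"]) ** 2
                + (10 * (a["attendance"] - b["attendance"])) ** 2)

participants = [y for y in youths if y["participant"]]
nonparticipants = [y for y in youths if not y["participant"]]

# Match each participant to the most similar nonparticipant (nearest neighbor,
# with replacement); the matched nonparticipants form the comparison group.
comparison = [min(nonparticipants, key=lambda c: distance(p, c)) for p in participants]

program_mean = mean(p["outcome"] for p in participants)
comparison_mean = mean(c["outcome"] for c in comparison)
print(f"Mean outcome, program group:            {program_mean:.2f}")
print(f"Mean outcome, matched comparison group: {comparison_mean:.2f}")
print(f"Estimated difference:                   {program_mean - comparison_mean:+.2f}")
```

An actual evaluation would, of course, match on many more characteristics, test whether the matched groups are in fact comparable, and account for sampling error; the sketch is intended only to show that comparison need not imply random assignment.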
Furthermore, we recognize that communities may not favor withholding treatments or programs from individuals in control or comparison groups; however, this problem is commonly handled by phasing in the treatment or program and offering it to comparison group members following the evaluation period. As we agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. At that time, we will send copies to the Senate Judiciary Committee, the Senate Subcommittee on Youth Violence, the House Committee on Education and the Workforce, the House Subcommittee on Early Childhood, Youth and Families, the Attorney General, and the Director of the Office of Management and Budget. If you or your staff have any questions about this report, please contact James M. Blume or me at (202) 512-8777. Key contributors to this report are acknowledged in appendix X. This appendix provides information on the awards the Office of Juvenile Justice and Delinquency Prevention (OJJDP) made each year from fiscal years 1996 through 2000. It contains data on OJJDP funds awarded to formula/block grant programs versus discretionary grant programs (see fig. 2), OJJDP funds awarded by more specific program areas (see table 3), types of OJJDP award recipients (see table 4), and OJJDP formula/block grant awards by state (see table 5). We relied on the Office of Justice Programs’ (OJP) awards database to analyze data on all OJJDP-administered awards made during this 5-year period. We analyzed awards by the year the award was made—not the year in which the funds were appropriated. We worked with OJJDP officials to identify awards by major program or program area, as the database did not provide sufficiently detailed information. OJP officials advised us that they perform daily quality control checks on all data entered into their database; however, we did not verify the accuracy of the database. The Office of Juvenile Justice and Delinquency Prevention (OJJDP) has a process for disseminating published interim results of impact evaluations as well as other publications produced by OJJDP and its grantees. OJJDP publications are available through the Juvenile Justice Clearinghouse. According to an OJJDP official, OJJDP develops a specific strategy for each publication that includes the number of copies to be printed, the methods for announcing availability, and the target audience that will automatically receive copies. OJJDP promotes products through the National Criminal Justice Reference Service (NCJRS) Catalog, OJJDP’s Juvenile Justice journal, the NCJRS and OJJDP Web sites, e-mail lists, the Office of Justice Programs press announcements, conference displays, criminal/juvenile justice newsletters and journals, and flier mailings. Almost all of OJJDP’s publications are made available to the public through OJJDP’s Web site, which is administered by the Juvenile Justice Clearinghouse. Many publications, depending on their length, are also available through the Clearinghouse’s fax-on-demand service. Individuals can also order copies of publications online or by calling the Clearinghouse’s toll-free number. In addition, the Clearinghouse automatically sends publications to targeted constituents (e.g., juvenile justice policymakers, practitioners, researchers, and community-based organizations) and to individuals who have registered to receive publications based on their specific areas of interest.
As of May 2001, OJJDP had used this dissemination process to share interim results from 5 of the 10 ongoing impact evaluations of OJJDP programs that we assessed. In total, OJJDP had distributed over 400,000 copies of 9 products that contained interim results from the 5 evaluations. Table 6 provides additional information on the distribution of these publications. Twice a year, Office of Juvenile Justice and Delinquency Prevention (OJJDP) grantees are required to complete a Categorical Assistance Progress Report—a narrative report that is to include a summary of the status of their particular projects’ goals, quantitative project results based on performance measures set forth in their grant applications, actions planned to resolve any implementation problems, and any technical assistance they might need. In 8 of the 16 major programs we reviewed, grantees received only this general guidance and were not subject to any additional reporting requirements. In the other eight programs we reviewed, grantees were required to follow this standard guidance and, in addition, report more specific information. Grantees in all 16 programs reported input, output, and/or outcome data related to the process, implementation, and/or accomplishments of their projects, such as acquisition of additional funding for a project evaluation, the number of project participants, or the number of missing children recovered. Table 7 provides summary information on the eight programs in which grantees are not subject to additional reporting requirements and examples from grantees’ progress reports. Table 8 provides similar information regarding the other eight programs in which grantees are additionally required to report specified data to OJJDP or outside evaluators, as well as the specific performance measures on which grantees are required to report. Unless otherwise noted, the examples of reported information represent individual grantee or subgrantee data for a 6-month period. The information provided regarding the specific data on which grantees are required to report does not necessarily include all performance data required. The Office of Juvenile Justice and Delinquency Prevention’s (OJJDP) training and technical assistance programs and research programs are unique in that they cut across many of OJJDP’s other programs. Also, grantees in each of these two areas typically report the same types of quantitative performance data as other grantees in their area, even though OJJDP does not usually prescribe the specific performance measures on which the grantees should report. Training and technical assistance grantees maintain the same types of data due to the common support services they provide, and research grantees do the same because they share a common goal of producing research products. OJJDP awards grants to training and technical assistance providers to support grantees in many of OJJDP’s grant programs. OJJDP administers the vast majority of its training and technical assistance grants through three of the Office’s divisions: (1) the Training and Technical Assistance Division (TTAD), (2) the State and Tribal Assistance Division (STAD), and (3) the Child Protection Division (CPD).
Many of the training and technical assistance providers are required to report information on their projects’ activities and accomplishments semiannually using OJP’s Categorical Assistance Progress Report form, as do all OJJDP grantees. OJP provides standard guidance on information to be reported, such as information on the status of each of the grantees’ project goals and quantitative results of their projects. STAD has not imposed additional or more specific reporting requirements on its training and technical assistance providers and, for the most part, neither have TTAD nor CPD. Officials explained that OJJDP does not require all grantees to routinely report prescribed data because it is reluctant to place additional reporting requirements on grantees due to the Paperwork Reduction Act of 1995, which set goals to reduce the federal government’s reporting and paperwork burden. Although most of these providers are not subject to additional reporting requirements for prescribed data, it is not unusual for them to report on the same or similar quantitative performance measures. Because of the nature of the services they provide, training and technical assistance providers tend to maintain like data that can readily be counted, such as numbers of training events held, practitioners who attended those events (“practitioners trained”), and technical assistance requests filled. Some of these providers also produce publications or materials, such as bulletins, surveys, curricula, brochures, and other support materials, and report such information to OJJDP. Table 9 summarizes performance data we obtained regarding training and technical assistance grants. OJJDP officials cautioned that not all providers share common definitions of “training” and “technical assistance.” For one thing, the difference between the two is not always clear and, therefore, it is sometimes difficult to definitively categorize a provided service as training versus technical assistance. Furthermore, not all training events are equal. For example, some providers might characterize both a 1-day training conference and a 10-day training workshop as a training event; others might differentiate between the two. Similarly, one provider might consider a telephone request from a grantee as merely a query, while another might consider it a request for technical assistance. OJJDP administers its research grants out of its Research and Program Development Division (RPDD). RPDD sponsors empirical studies on an array of topics related to juveniles and delinquency, from the causes of violence to the impact of victimization. The overall goal of these research grants is to generate credible and useful information to help prevent and reduce juvenile delinquency and victimization. Research grantees are expected not only to collect data but also to analyze and disseminate their analyses to the public. RPDD requires all research grantees to produce publishable products and, in some instances, RPDD specifies the type of products to be published depending on the results of the research. Thus, according to OJJDP officials, one measure of a research grantee’s performance is the number of products the grantee has published. Like most OJJDP grantees, research grantees must report information on their projects’ activities and accomplishments semiannually through progress reports.
RPDD does not impose additional, specific reporting requirements on grantees, but it does encourage them to report on products produced through private publishers (as opposed to those published through OJJDP). The division director told us that it is not necessary to impose specific requirements on grantees in addition to the semiannual progress report requirements because officials work closely with grantees throughout the life of the grants. OJJDP research grantees produce products based on their OJJDP-funded research. Some of these products are approved and published by OJJDP, which in turn disseminates the products through its own distribution process (see app. II for a description of OJJDP’s product dissemination process). Grantees also publish many products that are based on their OJJDP-funded research through private publishers. OJJDP officials told us they give their research grantees latitude to privately publish products because the majority of their research grantees are academics whose funding depends on the number of products they publish, and because grantees often have funding sources in addition to OJJDP. Tables 10 and 11 summarize the products that active OJJDP research grantees published through OJJDP and private publishers as a direct result of OJJDP-funded research. Table 10 describes the number of research products published by OJJDP from 1993 through September 2000, by topic. Table 11 shows the number of products, by topic, that grantees with active research grants privately published between 1986 and June 2001, or were in the process of publishing in June 2001. This appendix contains information on the 10 impact evaluations that the Office of Juvenile Justice and Delinquency Prevention (OJJDP) has funded of its own programs since 1995 and for which we have assessed the methodological rigor, as well as information on one impact evaluation—Teen Courts—that we did not assess. Five of the 10 evaluations are in their formative stages, and five are well into their implementation. For each of the 10, we have included a description of the program being assessed, the evaluating organization, a description of the evaluation and its findings, and our assessment of the evaluation. As discussed in the Scope and Methodology section of this report, we did not assess the methodological rigor of the Teen Courts evaluation. However, we have included a summary of this evaluation at the end of this appendix. Program Description: Parents Anonymous is a national child abuse prevention program that began in 1970. It consists of 32 state and local organizations and over 1,000 weekly mutual support groups. The principal participants are at-risk parents, though complementary projects exist for children. The cornerstones of the program are mutual support and shared leadership. Evaluator: National Council on Crime and Delinquency. Evaluation description: This evaluation, which is in the beginning stages, is based on a proposal to conduct a process evaluation in year 1, and an outcome evaluation in years 2 and 3. The researchers will determine how Parents Anonymous is staffed and operated in different settings, how it attempts to change the behavior and attitudes of parents, and what factors are related to its effectiveness. While the specifics of an outcome evaluation design are yet to be determined, the researchers indicate that they will most likely compare the Parents Anonymous participants with a control group and with Parents Anonymous dropouts.
The process evaluation received $300,000 for a 3-year period. Evaluation findings: It is too early in this evaluation to have reported results. GAO assessment: No assessment of the impact evaluation is possible because it has not yet been planned. More fully developed proposals will need to be made to OJJDP to obtain funding for the impact evaluation portion of the study. Program Description: This program replicates and evaluates Project PATHE, which was first implemented in the Charleston County School District in South Carolina between 1980 and 1983. Project PATHE is a comprehensive school-based program that combines services to students who are at elevated risk for developing problem behaviors with school- wide organizational changes intended to improve both school climate and students’ behavior. Local educators are encouraged to develop their own (1) explanations for the causes of their schools’ violence and behavior problems and (2) specific local objectives and ways to prevent these problems by empowering teachers’ decision-making and fostering collaborative and nonhierarchical efforts. For various reasons, the grantee has had difficulty in selecting a school district for the replication. Funding for this effort began in October 1999 with funds provided by the Centers for Disease Control and Prevention. In commenting on a draft of this report, the Assistant Attorney General pointed out that all funds for this effort come from the Centers for Disease Control and Prevention through an interagency agreement. OJJDP awarded this grant, which the two agencies jointly manage. OJJDP originally identified Project PATHE as an OJJDP-funded program. However, on the basis of the Assistant Attorney General’s comments, this program appears to be a non-OJJDP-funded impact evaluation. Since we assessed its methodological rigor, we have included it with the other OJJDP-funded evaluations. Evaluator: Institute of Behavioral Science, University of Colorado. Evaluation description: Although the impact evaluation was expected to be completed in July 2001, it has yet to begin. Original plans called for one school district to be selected for the replication and evaluation. In a school district, one high school and one middle school would be selected to receive the program, once school principals had been informed and staff surveys had been conducted to determine interest in participating. Comparison schools (one high school and one middle school) in the same district would be selected with similar demographic characteristics of students, levels of problem behaviors, and other unspecified organizational characteristics, as well as a low probability of mounting other school-wide efforts to reduce problem behavior during the study period. Plans are to collect data before and after project implementation. All students and staff in all schools would be surveyed in September and May for 3 consecutive school years. In addition, 200 students from each high school—100 seniors and 100 sophomores—and 200 students from each middle school—100 eighth graders and 100 sixth graders—would be sampled in the first year of the study for followup for 3 years. Each sample is to include 25 students identified as “high risk.” It is not clear, however, how these samples will be selected. Schools are to be visited three times yearly during the study, and school records and teacher ratings of student behavior in each school will be used in establishing differences between the program and comparison schools. 
Program outcomes are to be selected after determining goals and objectives collaboratively with local school officials. Multivariate statistical analyses, such as logistic regression, are planned. Because of the difficulty in selecting a school district, the latest progress report indicates some changes in this design. Agreements have been signed with two school districts (Charleston, SC, and Baltimore, MD), instead of one, and plans are to conduct the replication and evaluation in four middle schools (two program and two comparison schools) in each district—high schools have been excluded. The amount of the grant is $875,000. However, additional funds have been requested. Evaluation findings: It is too early in this evaluation to have reported results. GAO assessment: The evaluation, as designed, is basically sound. The variation in program structure and implementation between schools may limit generalizability. While the researchers do suggest awareness of potential problems due to students switching schools, they do not clearly indicate, at this early design stage, how possible contamination will be handled. Program Description: The Rural Gang Initiative is a comprehensive strategy to ameliorate gang problems in rural areas. The program was adapted from the comprehensive gang model, developed at the University of Chicago, and implementation began in two rural areas in fall 2000. The program consists of five elements: community mobilization, opportunities provision, social intervention, suppression, and organizational change and development. Evaluator: National Council on Crime and Delinquency. Evaluation description: The impact evaluation of this program began in January 2001 and is expected to be completed in December 2003. It focuses on two of the four sites, Mount Vernon (IL) and Glenn County (CA), that were part of a year-long feasibility study that began in April 1999. The other two sites are not part of the evaluation because they did not fully implement the model. Information will be collected from these two sites on gang-involved youths and youths at risk of gang involvement, all of whom have participated in the program. However, it is unclear how the youths will be sampled or whether all participants will be included. Data will be obtained from a variety of sources, including interviews, organizational surveys, and administrative data from schools and the justice system. No comparison groups are planned, though the researchers indicate that attempts will be made to collect data that will permit an assessment of alternative explanations for program effects. The researchers plan to collect data before and after program implementation; then they will measure any changes and follow up with program participants for at least 12 months after their participation in the program. Individual sites are to identify specific program outcomes. Although the researchers have described prospective outcome measures, such as the reduction of gang-related crime and the prevention or reduction of gang involvement, they (1) have not chosen the outcome measures that they will use and (2) have not provided information on the types of statistical analyses planned. However, the evaluation is still in the formative stages. At the time of our review, this evaluation had received $525,000 in funding. Evaluation findings: It is too early in this evaluation to have reported results. GAO assessment: It is too early to tell from this evaluation how effective it will be.
However, the absence of plans for comparison groups and the lack of specificity regarding any other control mechanisms make it unclear how program effects will be distinguished from alternative explanations at this stage of the evaluation. Program Description: The Safe Schools/Healthy Students program has funded 77 school districts nationwide, with grants ranging up to $3 million, to develop services and activities to promote healthy childhood development and prevent violence and drug abuse. The program also aims to develop greater collaboration and cooperation between communities and schools to enhance their effectiveness in responding to and reducing violence. Each project model is intended to evolve over time. Evaluator: Research Triangle Institute. Evaluation description: This impact evaluation began in October 1999, and data will be collected through the spring of 2005. School and community-based archival records, surveys of key coalition personnel, teachers, superintendents, principals and other school staff, and teacher behavioral checklists for students in selected grades will be gathered in all 77 school districts. The evaluation will compare data from participating sites with national norms and with similar information from matched nonparticipating (comparison) sites that the researchers surveyed in each of two large, nationally representative studies of school districts. The matching will be based on unspecified socio-demographic characteristics and responses to policy-related questions in the baseline survey. Archival data will be collected yearly over a 5-year period. Survey data will generally be collected at three points in time, about 2 years apart. The survey items will be drawn from established instruments and will provide, in conjunction with the archival or administrative data, information on behavioral outcomes, risk factors and inhibiting factors, and indicators of positive development and mental health. At the time of our review, this evaluation was funded at approximately $5.6 million. Evaluation findings: It is too early in this evaluation to have reported results. GAO assessment: This evaluation, as designed, is basically sound. Program Description: This demonstration program seeks to prevent and reduce the impact of family and community violence on young children, primarily aged 0 to 6, in up to 12 communities. The program plans to create a comprehensive service delivery system that integrates service providers (in the fields of early childhood education/development, health, mental health, and all manner of prevention, intervention, and treatment programs), law enforcement, legal services, and the courts. It also seeks to improve the access, delivery, and quality of services to children exposed to, and at high risk of, violence. Project sites are to be selected through a competitive grant process. Funding for the program began in October 1999. Evaluator: Caliber Associates. Evaluation description: This evaluation began in May 2000, and is expected to end in September of 2005. The effect of the Safe Start Initiative will be measured within and across all participating communities at both the community and individual levels. Multiple data collection methods, including focus groups, service agency usage logs and documents, and random-digit-dialing telephone surveys, will be used. Plans are to collect data before program implementation and for 4 years after the program begins in each site.
Although specific outcome measures have not yet been identified, they are expected to address such areas as agency referral levels and quality of service, increased interagency collaboration, knowledge and perceptions of police and child protective services, rates of child maltreatment, physical injuries, and mental health problems. No comparison communities are to be studied. Analyses are to include regression and time series models. At the time of our review, this initiative had received $1 million. Evaluation findings: It is too early in this evaluation to have reported results. GAO assessment: The absence of any appropriate comparison communities and the variability in program implementation and components across the 12 study sites will make it difficult to find compelling evidence of program effects. Program Description: This program aims, in the five sites in which it is being implemented and evaluated (Mesa and Tucson, AZ; Bloomington, IL; San Antonio, TX; and Riverside, CA), to reduce gang-related crime through five interrelated strategies: community mobilization; provision of social, educational, and economic opportunities; suppression of gang violence; social intervention; and organizational innovation. It involves the collaborative efforts of the police, probation officers, prosecutors, judges, schools, youth agencies, churches, housing authorities, and governmental agencies. It targets youths at strong risk of gang membership and crime, and youths already involved in serious gang crime. The evaluation began in 1995. Evaluator: University of Chicago. Evaluation description: The impact evaluation of the program began in May 1995, and is expected to be completed in April 2002. Each project site is to be matched with a comparison site. In four of the five sites, the program participants and comparison groups were selected from similar gang problem areas within the same city; in the fifth site, a separate comparison community was selected. Between 100 and 115 youths, ages 12 to 21, who were involved in gangs or at risk of involvement, were selected to participate in the program in each site, and between 77 and 134 similar youths in each site were selected for comparison purposes. Neither program nor comparison group youths were selected randomly. A large, complex, communitywide data collection effort is being employed in each site, through a variety of methods and sources, including organizational surveys, youth surveys, reports from service workers, police and school records, local newspaper reports, and census data. Data were to be collected at baseline and after the first and third years of the program. The principal outcomes to be measured are gang crime patterns at the individual, gang, and community levels. The evaluation will also consider changes in opportunities, as well as integration in and alienation from conventional individuals and institutions. A variety of analyses of the data are planned, including time-series analyses and hierarchical linear models. At the time of our review, the evaluation had received approximately $3 million. Evaluation findings: Preliminary results have been reported for program participants, but not comparison groups. Because of missing data and other problems, reporting deadlines may not be met, and two of the five project sites and associated comparison sites have been deferred from the current analyses. GAO assessment: The evaluation, as designed, is basically sound.
However, numerous difficulties in obtaining data threaten parts of the evaluation. Also, the way in which subjects were selected for the study may be problematic, and it is unclear how program and comparison youths were matched within sites. Program Description: Since 1998, 85 communities and 4 colleges in at least 10 states have been awarded subgrants under the discretionary grant component of this program to enforce underage drinking laws. In most states, a diverse group of stakeholders are involved in planning a variety of projects under this program that can include media campaigns, merchant education, compliance checks and other enforcement, youth leadership training, school-based education, and the development of local coalitions and interventions aimed at reducing underage drinking. States and communities are given substantial latitude in planning their projects; interventions are not standardized across communities. Evaluator: Wake Forest University School of Medicine. Evaluation description: The effort to evaluate the discretionary grant component of this program began October 1, 1998, and is expected to be completed December 31, 2001. Data are to be collected—from telephone surveys of police chiefs, sheriffs, and youths in participating communities and matched comparison communities in at least 9 states—before or early on in project implementation and at least 1 year after project initiation. Project sites to be evaluated were initially chosen from all states participating in the program. It is unclear whether these sites are representative of all participating project sites. In the first year, surveys were conducted in 52 participating communities and a similar number of comparison communities. In the second and third years, surveys will be conducted in those same communities and 34 others—17 in each group. The participant and comparison communities were matched on median income, liquor law violations, percentage attending college, and population size. The surveys of the top one or two law enforcement officials in each community will provide information on local law enforcement efforts, including the number of compliance checks conducted in each year. A small number of youths from each site are to be selected at random for the surveys each year. The youth surveys will obtain data on perceptions of alcohol availability, peer and personal alcohol use, and alcohol-related problem behaviors including binge drinking and drunk driving. At the time of our review, this evaluation had received approximately $945,000. Evaluation findings: While some demographic data have been reported from the baseline survey, no results have been reported involving program effects. GAO assessment: The researchers suggest aggregating all program communities together and all comparison communities together to diminish community sample size problems, which may mask program effects. In addition, the wide variation allowed in program implementation may compromise the interpretation and generalizability of any findings. Program Description: The Intensive Aftercare program provides intensive supervision and services to serious juvenile offenders for 6 months following their release from secure confinement. The goal is to facilitate reintegration and reduce recidivism. The program was implemented, beginning in June of 1993, by various youth service offices and departments of corrections in four states: Colorado, Nevada, New Jersey, and Virginia. 
New Jersey was eventually dropped because of implementation problems, so the evaluation of the program is being completed in the other three states. Evaluator: National Council on Crime and Delinquency. Evaluation description: Beginning in 1995, youths entering correctional facilities in the three states (four counties in Colorado including Metropolitan Denver; Clark County, NV; and Norfolk County, VA) were screened for eligibility and randomly assigned, within each site, to the treatment group (whose members participated in the Intensive Aftercare program upon release) or control group. Between 1995 and 1999, 82 youths were assigned to the program and 68 to the control group in Colorado, 120 youths were assigned to the program and 127 to the control group in Nevada, and 75 youths were assigned to the program and 45 to the control group in Virginia. Information was collected for study participants at baseline (that is, upon entry into the institution), before release from the institution (9 to 12 months after baseline, or entry), immediately after completing the program (6 months after release), and 6 months after completing the program (12 months after release). The data collected, using survey instruments, standardized tests, monthly case management forms, and administrative (police and court) databases, included social and criminal history and demographic data, information on the extent of supervision and services, and the extent of criminal activity following institutional release. Many of the measures being employed in the study, according to the researchers, are standard and have been validated. At the time of our review, the evaluation had received approximately $932,000 in funding. Evaluation findings: The preliminary findings offered from this evaluation suggest that the Intensive Aftercare participants did receive greater supervision and more services after release than the control group, which suggests some success in implementing the program. Outcome results related to reintegration and recidivism are not complete, and the interim results are mixed as to whether the program is associated with positive outcomes. GAO assessment: This is a well-designed study, though serious missing data problems, if not corrected, may make it difficult to determine the outcome of this program. Program Description: JUMP was established by Part G of the Juvenile Justice and Delinquency Prevention Act of 1974, as amended in 1992. Through that legislation, the Congress authorized OJJDP to award 3-year grants to community-based not-for-profit organizations or to local educational agencies. The grantees are to support one-on-one mentoring projects that match volunteer adult mentors with youths at risk of delinquency, gang involvement, educational failure, and dropping out of school. The legislation also provided funding for a national, cross-site evaluation of JUMP. OJJDP guidelines emphasize the need for projects to recruit, train, supervise, and do thorough background checks for all volunteer mentors; develop procedures for appropriately matching youths and mentors; define the population of at-risk youths to be served; develop guidelines for the type, frequency, and duration of youth and mentor project activities; and establish procedures for gathering and reporting data to support the evaluation process. As of November 2000, 175 JUMP projects had been funded, in amounts ranging from $180,000 to $210,000 over a 3-year period. 
Evaluators: Information Technology International and Pacific Institute for Research and Evaluation. Evaluation description: This evaluation began in May 1997 and is expected to conclude on September 30, 2002. Three approaches are being taken to determine how well JUMP is accomplishing its objectives. The first is a modified pre-post design that involves a within-subject comparison of the characteristics of youths at the time they enter and exit the program and between-subject comparisons of youths entering and exiting the program at the same time. The second approach is a best practices approach that will use structural equation models to estimate what program features or activities, including success in matching mentors and youths, are most likely to contribute to program success in reducing the risk of school and family problems, delinquency, and drug use among youths. The third approach relies on combined youth outcome data and community data to determine community cost offsets. The evaluation was funded at $3.3 million. Evaluation findings: In their November 2000 JUMP annual report, the evaluators provided considerable descriptive information about the various JUMP projects, the characteristics of the youths and mentors, and information on youth-mentor matching. The only “outcome” information thus far provided, however, is information on how satisfied youths and mentors were with the mentoring experience and how much benefit each perceived was derived from the experience. None of the three analytic approaches described above has been successfully applied to study outcomes because of a variety of pitfalls experienced by the national evaluation team, most notably insufficient data on school performance and behavioral measures (e.g., delinquent behavior and arrests). GAO assessment: The researchers are employing multiple and innovative strategies to determine the effectiveness of JUMP in achieving its objectives. It is not clear, however, whether definitive evaluation results can be reached in the absence of outcome data on youths who, in the same project areas at the same points in time, do not receive the program. In addition, data limitations, if not corrected, may be serious enough to compromise findings. Program Description: The Partnerships to Reduce Juvenile Gun Violence Program is a multi-year demonstration program planned for four sites (Baton Rouge and Shreveport, LA; Syracuse, NY; and Oakland, CA). It began in 1997 and is expected to conclude in 2001. However, one site, Shreveport, was dropped from the program early. The program aims to reduce youth gun violence by enhancing, in specific target areas of these cities, prevention and intervention strategies and strengthening partnerships among community residents, law enforcement agencies, and the juvenile justice system. The program involves mobilizing the community, establishing agency linkages, and planning case management for juveniles with gun charges in year 1, linking at-risk youths to services in year 2, and expanding opportunities for youths in year 3. Evaluator: COSMOS Corporation. Evaluation description: The strategy for evaluating the impact of this program has evolved as the program has unfolded. 
An impact evaluation was planned for three sites and was to include (1) a comparison of changes in crime rates in target areas of these cities before and after the implementation of the program, (2) a comparison of responses from high-risk youths in targeted areas surveyed before and after the program was implemented and services were provided, and (3) information on changes in policies and caseloads revealed through focus group meetings and interviews with agency officials. Crime rate information has thus far been reported only for Oakland and Baton Rouge, and surveys have been conducted only in Baton Rouge. In Baton Rouge, surveys were given to 92 high-risk youths in the criminal justice system identified through a variety of processes. The sampling strategies for surveying these high-risk youths were unlikely to yield generalizable results. In addition, fifth-, seventh-, and ninth-grade students in six schools in the target area were surveyed in March of 1999. It is unclear why these students and schools were sampled and what the response rates were. In 2000, a small sample of 50 youths in Baton Rouge was identified as a possible matched comparison group for arrest rate comparisons. At the time of our review, this evaluation had received $1.2 million in funding, although a process evaluation is also being conducted with these funds. Evaluation findings: The researchers report decreases in gun-related homicides and arrests in Oakland that were larger in the target area than for the city as a whole. They also report decreases in gun-related homicides in Baton Rouge. No analyses of results from the survey data have been reported to date. GAO assessment: Comparisons between crime rates in the target community and the city as a whole may not be appropriate. Student and school selection criteria are unclear, making it difficult to assess their appropriateness for obtaining definitive results. In addition, if supporting survey and administrative data are only gathered in one site, it will be very difficult to generalize findings, whether they appear positive or not. The purpose of the Teen Courts evaluation is to measure the effect of handling young, relatively nonserious violators of the law in teen courts, rather than in traditional juvenile family courts. Although teen courts often include many of the same steps used by formal juvenile courts (for example, intake, preliminary review of charges, court hearing, and sentencing), they differ from formal courts in that young people are able to assist in the community decision-making process for dealing with juvenile offenders. Youths may act as prosecutors, defense counsel, jurors, court clerks, bailiffs, and judge (or as a panel of judges). To evaluate teen courts, both a process and impact evaluation are used, with case studies and comparison groups as part of the research design. In each of the four case study sites (Anchorage, AK; Independence, MO; Maricopa County, AZ; and Rockville, MD), data are collected on about 100 youths handled in teen courts (experimental group) and 100 youths handled in the traditional juvenile justice system (comparison group). Data are also collected on several dimensions of program outcomes, including post-program changes in teens’ perceptions of justice and their ability to make more mature judgments as a result of the program. A process evaluation of the projects—exploring legal, administrative, and case-processing factors that hinder the achievement of project goals—is also being conducted.
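The concern noted above about the Partnerships to Reduce Juvenile Gun Violence evaluation, that is, comparing crime trends in the target area with trends in the city as a whole, can be illustrated with a simple, entirely hypothetical calculation. The incident counts and area names below are invented for illustration and are not drawn from the evaluation; the point is only that the implied program effect shifts depending on the benchmark chosen.

```python
# Entirely hypothetical incident counts; the areas and numbers are invented for illustration.
def pct_change(before, after):
    return 100.0 * (after - before) / before

# Hypothetical gun-related incident counts, one year before and one year after program start.
areas = {
    "target area":             (120,  90),   # the program community
    "city as a whole":         (900, 800),   # blends similar and dissimilar neighborhoods
    "matched comparison area": (115, 105),   # an area chosen to resemble the target area
}

for name, (before, after) in areas.items():
    print(f"{name:24s} {before:4d} -> {after:4d} ({pct_change(before, after):+6.1f}%)")

target_change = pct_change(*areas["target area"])
for benchmark in ("city as a whole", "matched comparison area"):
    gap = target_change - pct_change(*areas[benchmark])
    print(f"Target-area change relative to the {benchmark}: {gap:+.1f} percentage points")
```

Because the citywide figure blends neighborhoods that do and do not resemble the target area, a benchmark drawn from a matched comparison area, where one can be established, generally supports a more credible attribution of any observed change to the program.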
This appendix contains summaries of the 24 evaluations the Office of Juvenile Justice and Delinquency Prevention (OJJDP) has funded since 1995 (excluding the 11 impact evaluations it has funded of its own programs discussed in app. VII). For 22 summaries, we used descriptions of evaluations that were provided to us by OJJDP; for 2 summaries, we wrote the descriptions based on OJJDP documents. OJJDP categorized the 24 evaluations into the following three groups: OJJDP-funded programs: nonimpact evaluations (11). Non-OJJDP-funded programs: impact evaluations (9). Non-OJJDP-funded programs: nonimpact evaluations (4). The purpose of this evaluation is to test the feasibility and effectiveness of the OJJDP community assessment center concept in different environments. Community assessment centers seek to facilitate earlier and more efficient delivery of prevention and intervention services at the front end of the juvenile justice system. The evaluation uses a two-phase process to (1) measure some outcomes at the two enhancement sites, with quasi-experimental design, and (2) achieve more and better outcome measures. But, according to OJJDP, implementation and data problems will limit the effectiveness of the quantitative methods employed. OJJDP also believes that the attempt to implement a random assignment study at one project site will probably need to be abandoned. The first phase covers the four project sites that comprised the Community Assessment Centers program—two of these sites funded enhancements to existing programs and the other two funded the planning and implementation of new programs. The second phase covers the two project sites—one enhancement and one planning—in which the program is being continued after the end of the first funding cycle. Many of the evaluation measures are at the project or community level rather than at the participant level. The purpose of this evaluation is to examine the viability and effectiveness of the community-based delinquency prevention model used by grantees in the Community Prevention Grants Program. The Community Prevention Grants Program encourages communities to develop comprehensive, collaborative plans to prevent delinquency. The evaluation focuses on two main questions: (1) What is the impact of the program on community planning, service delivery, risk factors, protective factors, and juvenile problem behaviors? (2) What factors and activities lead to the effective implementation of the Community Prevention Grants Program model and to positive program outcomes? This evaluation employs a case study approach supplemented by a basic profile of communities that are participating in the program. Case studies are to be implemented in 11 communities in 6 states. Evaluation measures are to be applied at the project, community, and program levels. The purpose of this process evaluation is to address the following questions about the Comprehensive Strategy program: (1) What are the factors associated with successful Comprehensive Strategy planning and implementation? (2) To what extent do project sites adhere to the prescribed Comprehensive Strategy framework? (3) What are the major implementation challenges program grantees face in implementing the Comprehensive Strategy? (4) To what extent does the training and technical assistance provided to project sites help them acquire the knowledge, skill, and tools necessary to develop the Comprehensive Strategy? (5) What role should OJJDP play in the future implementation of the Comprehensive Strategy? 
The Comprehensive Strategy is OJJDP’s approach for addressing juvenile violence and delinquency at the community, state, and national levels through a systematic plan. It advocates the use of local planning teams to assess the factors and influences that put youths at risk of delinquency, determine available resources, and establish prevention programs to either reduce risk factors or provide protective factors that buffer juveniles from the impact of risk factors. This evaluation uses a multilevel design to assess how project sites implement the Comprehensive Strategy. The evaluation began with telephone interviews with site coordinators from all 48 project sites; 25 of the 48 project sites were randomly selected for a stakeholder survey. One year later, 10 of the 25 project sites are being given a second stakeholder survey. Subsequently, five sites are to be selected for visits, and intensive case studies are being done in three cities. The purpose of this evaluation is to examine (1) community coalitions’ developmental processes from the early planning and adoption stages through implementation and later stages and (2) the impact of coalitions’ prevention efforts concerning risk and resiliency factors and, to the extent feasible, alcohol, tobacco, and other drug use. The Drug-Free Communities Support Program provides grants to community coalitions to strengthen their efforts to prevent and reduce young people’s illegal use of drugs, alcohol, and tobacco. The evaluation is studying two cohorts of program grantees—those that received grants in 1998 or 1999 (cohort 1) and those that received grants in 2000 (cohort 2). The national evaluation sample comprises a total of 213 grantees. The sample is divided by years of operation: 1-5 years, 6-9 years, and more than 9 years. Semiannually, cohort 1 grantees are required to submit progress reports to OJJDP and the evaluator that include a special section (Part II), which provides information about the composition of the coalitions and outcome data collection. Cohort 2 grantees do not have a Part II reporting requirement and submit progress reports semiannually only to OJJDP. In addition, 21 grantees (15 from cohort 1 and 6 from cohort 2) serve as intensive study sites, where interviews with staff and stakeholders provide greater detail about coalition development and local program evaluation. The purpose of this process evaluation is to provide feedback to OJJDP on the implementation of the Juvenile Accountability Incentive Block Grants Program. The program encourages states and local jurisdictions to implement accountability-based programs and services in 54 states and U.S. territories. The evaluator is surveying state and local practitioners, policy makers, and grant program administrators about their perceptions and attitudes about the program and its administration. Specifically, the evaluation focuses on (1) understanding how states and local units of government plan for and administer program funds and (2) examining the perceptions of states and local units of government about how well the program is achieving congressional intent. In addition, in-depth case studies are conducted at a limited number of sites.
The purpose of this evaluation is to (1) provide feedback to the Performance-based Standards Project team on improving design and implementation support to the sites, (2) assist the project team in refining the Performance-based Standards Project model and in maximizing responsiveness to the needs of the participants, that is, those who are implementing the project model, and (3) chronicle the development of the project and summarize lessons learned. OJJDP established the Performance-based Standards Project to improve the quality and conditions of juvenile corrections facilities. Specifically, the project develops and implements outcome standards and an assessment tool. Corrections facilities can use both to monitor progress toward meeting goals in areas of operations, such as health and safety. The evaluation uses a case study approach. This approach consists of the collection of both quantitative and qualitative data describing the processes used to implement the project model in 80 juvenile detention and correctional facilities across the country. Site visits are made and in-depth case studies are planned. An all-site survey is distributed to key participants to determine satisfaction with the supports provided to them in the implementation of the project model. In addition, the survey seeks the participants’ assessment of (1) the impact the project has made on conditions of confinement and management of the facilities and (2) the overall utility of the project model. The purpose of this evaluation is to (1) determine the extent to which replication project sites have been able to conform to the original program model, and (2) assess the “prosocial” (that is, positive, socially oriented behavior) outcomes for mothers and their babies. The Prenatal and Early Childhood Nurse Home Visitation Program consists of intensive and comprehensive home visitation by nurses during a woman’s pregnancy and the first 2 years following the birth of her first child. The evaluation involves six project sites and employs a quasi-experimental design with matched comparison groups. The purpose of this process evaluation is to document and understand the process of community mobilization and collaboration. SafeFutures is designed to build a comprehensive program of prevention and intervention strategies for at-risk youths and juvenile offenders. The program comprises six project sites that represent urban, rural, and Native American communities. The evaluation is examining all six project sites. Project sites collect and record performance data on program operations and client outcomes using the Client Indicator Data Base. The project sites are required to collect extensive information from selected SafeFutures program components on individual participants’ risk and protective factor profiles, youths’ service utilization, and agencies’ coordination of services for youth during the course of their involvement in the SafeFutures program. In addition, they collect information on outcome measures regarding youths’ educational commitment (that is, school attendance, achievement, and behavior), youths’ involvement in delinquency and crime, and any changes in youths’ risk profiles. Analysis of these data will provide a picture of program performance in three key areas: reaching the intended high-risk youth clientele, coordinating services for youths with multiple problems, and monitoring subsequent school performance problems and involvement in the juvenile justice system.
The purpose of this evaluation is to document the lessons learned and factors associated with the successful development and implementation of the Safe Kids/Safe Streets program. The Safe Kids/Safe Streets program is designed to (1) help communities break the cycle of early childhood victimization and later criminality and (2) reduce child abuse and neglect, as well as the child fatalities that often result. The evaluation is surveying five project sites through five data collection strategies: agency administrative data, case tracking, key informant interviews, surveys of agency professionals, and surveys of stakeholders. The purpose of this evaluation is to (1) support culturally appropriate process and outcome evaluations of activities funded under Tribal Youth Program grants and (2) build the capacity of tribes to better evaluate their own juvenile justice programs and activities. The Tribal Youth Program assists grantees in developing projects, within tribal communities, for the prevention and control of youth violence and substance abuse. The evaluation is participatory in nature; that is, project personnel and stakeholders will be involved in developing the evaluation designs, with the assistance and guidance of an evaluation facilitator. The five project sites are implementing different projects and have not yet completed their evaluation designs. According to OJJDP, it is too early in the evaluation to tell exactly what designs are to be used. OJJDP has required that all evaluations be designed to examine both program implementation and program outcomes. The purpose of this process evaluation is to (1) determine how community collaboration can affect truancy reduction and lead to systemic reform and (2) assist OJJDP in the development of a model for a truancy reduction program, including identifying the essential elements of that model. The Truancy Reduction Demonstration Program encourages communities to develop comprehensive approaches—involving schools, parents, the justice system, law enforcement, and social service agencies—in identifying and tracking truant youths. The evaluation is employing site visits, interviews with key personnel, and case studies of individual sites. Process data are gathered from all seven project sites participating in the evaluation and, from some sites, limited outcome data are collected. The purpose of this evaluation is to assess the efficacy of three Adolescent Female Offenders programs in Wayne County, Michigan. The three programs are (1) a program incorporating gender-specific programming, home-based intervention, and community involvement, including pregnant and parenting adolescents; (2) an intensive probation program with limited gender-specific programming; and (3) a traditional, female-only residential program that provides limited gender-specific training. The evaluation is using a quasi-experimental design. Using random assignment, the home-based intervention model is to be compared with the established intensive probation model; the outcomes of these models are then to be compared with outcomes of the traditional, female-only residential program. The comparison analysis involves at least 50 young women in each of the three programs. A wide range of outcomes—including recidivism, substance use, depression, community integration, academic performance and career aspirations, parenting readiness, and responsible sexual behavior—is to be examined.
The evaluator is also exploring the relationship of specific program components to these outcomes. The purpose of this impact evaluation is to evaluate the effectiveness of a cognitive-behavioral group intervention. The Coping With Life Course is aimed at enhancing prosocial coping and problem solving for adolescents incarcerated in youth correctional facilities. To evaluate the program, a minimum of 120 adolescents in one youth correctional facility are randomly assigned to either the Coping with Life Course intervention group or a standard-care control group. Six Coping with Life Course cohort groups of 10 each are followed. The evaluation is allowing for attrition (from the initial 60 participants down to 48) in each of the intervention and control groups. Participant functioning is assessed before and after intervention through a battery of questionnaires. Recidivism, return to close custody, and service utilization are tracked through databases and statewide records. The purpose of the evaluation is to (1) document the implementation of a new “family index” case management system (through a process evaluation) and (2) examine the impact of the family index on juvenile court case processing (through an impact evaluation). The family index system allows cross-referencing to identify all family members involved in family law; juvenile dependency; juvenile delinquency; and criminal, civil, and probate matters. For the process evaluation, a case study approach is used to describe the implementation of the family index at one project site, the Riverside, California, Court. For the impact evaluation, a pre-post design is used to examine how the family index has affected juvenile court matters (for example, court processing time, coordination between courts, and content of hearings). The purpose of this evaluation is to evaluate the effects of Flashpoint on the antisocial patterns of juvenile offenders’ thoughts and actions and high school students’ thoughts and actions. Specifically, changes are assessed for (1) media use and literacy, (2) violence-supporting beliefs and behavior, and (3) substance use and abuse. The Flashpoint program is designed to build critical thinking skills young people need to (1) see through false media portrayals that glorify violence and drug use and (2) apply decision-making in their own lives. The evaluation is using comparison groups, pre-post interventions, and case studies. Participants include 264 juveniles, ages 14 to 17. Treatment groups are compared with no-treatment control groups for baseline-to-posttest changes. Three groups and project sites are involved: (1) repeat and serious offenders in a correctional program, (2) first-time offenders in a diversion program, and (3) students in a public high school. The purpose of this evaluation is to determine if Free to Grow can reduce substance abuse (alcohol use, smoking, and illegal drug use). The Free to Grow program builds on existing Head Start programs, adding community- strengthening and family-intervention components to address the problem of substance abuse. The evaluation is attempting to determine the independent effects of these two components on substance abuse prevention. It involves 16 project sites and 16 comparison sites and employs a multistage experimental research design. The purpose of this evaluation is to provide a process and outcome evaluation of the Gaining Insight into Relationships for Lifelong Success Project. 
The project involves two primary levels of intervention: (1) a psycho-educational counseling group, dealing with relationships and involving girls in four relational domains (relation to self, family, peers, and teachers), and (2) a focus on individual consultations, educational workshops and the policies and procedures of the local juvenile justice system, and the involvement of court service workers from the system. Specifically, the evaluation will (1) investigate the applicability of a relational approach to the treatment of female juvenile offenders; (2) examine the components of the relational approach that deal with relationships to self, family, peers, and teachers; (3) evaluate the impact of increasing the knowledge base of professionals involved in the local juvenile justice system; and (4) provide an empirically based, alternative treatment model that can be replicated in other settings. The evaluation of the first level of intervention—the counseling group—focuses on each of the four relational domains through the use of multimethod data collection; this collection includes self-reports and other reports, school records, and recidivism data. The evaluation of the second level of intervention focuses on the court services that workers use, specifically gender-sensitive treatment recommendations and referrals; qualitative observational data, gathered from monthly meetings, will be used. Girls are randomly assigned to either the project intervention or the standard intervention currently used by the Clark County Court in Athens, Georgia. Approximately 180 girls—90 referred to the project intervention and 90 referred to the standard court intervention—are to be evaluated.

The purpose of this evaluation is to assess the implementation and the impact of Quantum Opportunities. The Quantum Opportunities program is designed to reduce the incidence of delinquency, criminal behavior, and subsequent involvement in the criminal and juvenile justice systems among educationally at-risk, inner-city youths. The evaluation is using an experimental design with random assignment. Ninth-grade students at six sites are randomly assigned to treatment and control groups, with the treatment group enrolled in the Quantum Opportunities program. The students are followed through their high school careers and 2 years beyond. Information is collected from academic achievement tests, administered each year, and from two questionnaires.

The purpose of this evaluation is to test the impact—on recidivism, program completion, and victim satisfaction—of the Restorative Justice Conferences for a population of youthful offenders (aged 14 and under) in an urban setting (Indianapolis, Indiana). Restorative Justice Conferences bring together the offender, victim, and supporters of each so as to provide an opportunity for fuller discussion of the offense; the effect of the offense on the victim, the offender's family, and the greater community; and steps the offender can take to make amends. The evaluation is using a single-site, experimental design. As part of the design, youths are randomly assigned to a treatment group (Restorative Justice Conferences) or a matched control group.

The purpose of this evaluation is to determine whether the program reduces the amount of delinquency in a city. The Risk-Focused Community Policing program increases protection by the community police, potentially reducing delinquency. The evaluation is using an experimental research design.
The project site (city) is divided into approximately 40 census blocks, with 20 blocks randomly selected as program blocks and the other 20 designated as control blocks. The purpose of this evaluation is to study the Act Now Truancy Program. The program is a prosecutor-led truancy reduction program. The evaluation is using a pre-post intervention design involving one project site. Information is collected and aggregated (for example, truancy rates rather than individual truancy behavior) for all participants. The purpose of this evaluation, conducted in two schools, is to assess the impact of the Childhood Violence Prevention Program. The program is designed to prevent the legitimization of aggression among pre-adolescent, elementary, and middle school children, with special focus on victims of child maltreatment. The evaluation is using a pre-post intervention design, with comparison groups. The study involves having elementary school students participate in a class activity using a workbook designed to encourage problem solving action rather than aggressive behavior in interactions with peers. This project is not an evaluation, per se, but rather a synthesis of existing evidence on community-level interventions and service programs. Its purpose is to identify the strengths and weaknesses of community-level evaluations and to provide recommendations to the field about how to structure and carry out such evaluations. Community-level programs for youths are designed to promote positive youth development. To evaluate the programs, a committee—experts from several disciplines (child and adolescent development, child health, sociology, psychology, evaluation research, youth services, and community development)—is assessing the strengths and limitations of measurements and methodologies that have been used to evaluate these interventions. The purpose of this evaluation is to assess the activities undertaken by project sites, determine whether they can be evaluated, and ultimately assess the impact of these activities on the youthful offenders participating in the program. The program is intended to (1) enhance school-to-work education and training in juvenile correctional facilities and (2) improve youthful offenders’ transition into the community. The evaluation design has not been completed, but random assignment study is strongly preferred, if feasible. At the time of our review, only one of the three potential sites could be evaluated. One more project site is to be awarded and, if it can be evaluated, it will be added as a second evaluation site. The following are GAO’s comments on the Department of Justice’s October 15, 2001, letter. 1. As we indicated in our report, impact evaluations, such as the types that OJJDP is funding, can encounter difficult design and implementation challenges. (See section titled, Evaluations of OJJDP Programs are Difficult to Successfully Design and Implement.) Also, we are aware that virtually all impact evaluations have limitations. However, where possible, impact evaluations should be designed to mitigate as many rival explanations of program effects as feasible, and potential limitations of the chosen research design should be acknowledged. 2. 
Our statement that the Juvenile Mentoring Program evaluation “has experienced problems obtaining behavioral measures and school performance data” was not intended to criticize the evaluators’ level of effort, but rather to indicate that their inability to obtain data from school and law enforcement officials in many of the study sites makes it more difficult to evaluate how well the program is achieving its objectives of diminishing delinquency, gang involvement, and school failure. While enhanced analysis of sites with the best data may be warranted, it does not overcome the problem of having a large number of sites with little or no reliable data from school and law enforcement officials. This problem was explicitly recognized by OJJDP in its November 2000 report. 3. During the course of our review, OJJDP officials told us that one measure of a research grantee’s performance is the number of products the grantee has published. These officials provided us a listing of all products published by active research grantees through OJJDP and private publishers as a direct result of OJJDP-funded research. We summarized these voluminous data by topic to facilitate the presentation. 4. Our report points out that the Enforcing the Underage Drinking Laws Program evaluation documents OJJDP provided to us were not clear on whether the sites chosen were representative. Our report does not suggest “there is no way of achieving a legitimate representative sample.” However, we agree with OJJDP that the evaluation may not be evaluating the Enforcing the Underage Drinking Laws Program because there may be no program components common to all project cities. The Assistant Attorney General states that the evaluation will be able to measure impacts on several program areas across each site. However, our point is that the evaluator’s plan to aggregate data across sites may be inappropriate because wide variation allowed by the program means that program activities are not common across all sites. Therefore, interpreting and generalizing results may be problematic. In addition to the above, Lori A. Weiss, Barbara A. Guffy, Michele J. Tong, Leslie C. Bharadwaja, David P. Alexander, Douglas M. Sloane, Shana B. Wallace, Michele C. Fejfar, Charity J. Goodman, and Jerome T. Sandau made key contributions to this report. Howell, J.C. Youth Gang Programs and Strategies. U.S. Department of Justice, Office of Justice Programs, Office of Juvenile Justice and Delinquency Prevention. Washington, D.C.: Aug. 2000. Novotney, L. C., E. Mertinko, J. Lange, and T. K. Baker. “Juvenile Mentoring Program: A Progress Review.” Juvenile Justice Bulletin, Sept. 2000. Sheppard, D., H. Grant, W. Rowe, and N. Jacobs. “Fighting Juvenile Gun Violence.” Juvenile Justice Bulletin, Sept. 2000. U.S. Department of Justice, Office of Justice Programs, National Institute of Justice. “Reintegrating Juvenile Offenders Into the Community: OJJDP’s Intensive Community-Based Aftercare Demonstration Program.” Research Preview, Dec. 1998. U.S. Department of Justice, Office of Justice Programs, Office of Juvenile Justice and Delinquency Prevention. FY 2000 OJJDP Discretionary Program Announcement: Juvenile Mentoring Program. Mar. 2000. U.S. Department of Justice, Office of Justice Programs, Office of Juvenile Justice and Delinquency Prevention. Gang-Free Schools and Communities Initiative: FY 2000 OJJDP Discretionary Program Announcement. July 2000. U.S. Department of Justice, Office of Justice Programs, Office of Juvenile Justice and Delinquency Prevention. 
Juvenile Mentoring Program: 1998 Report to the Congress. Washington, D.C.: Dec. 1998. U.S. Department of Justice, Office of Justice Programs, Office of Juvenile Justice and Delinquency Prevention. OJJDP Research 2000. Washington, D.C.: May 2001. Wiebush, R. G., B. McNulty, and T. Le. “Implementation of the Intensive Community-Based Aftercare Program.” Juvenile Justice Bulletin, July 2000. | Although national rates of violent juvenile crime and youth victimization have declined during the past five years, critical problems affecting juveniles, such as drug dependency, the spread of gangs, and child abuse and neglect, persist. The Office of Juvenile Justice and Delinquency Prevention (OJJDP) has funded various demonstration, replication, research and evaluation, and training and technical assistance programs to prevent and respond to juvenile delinquency and juvenile victimization. GAO's review of 16 of OJJDP's major programs found that, although virtually all grantees must report on their progress twice a year, the information they reported varied. Grantees receive standard, general guidance for reporting on their projects and providing OJJDP information to monitor grantee's projects and accomplishments. According to OJJDP officials, such guidance needs to be general because of differences among individual projects and local needs and circumstances. GAO identified eight programs in which all grantees reported the number of juveniles they directly served. OJJDP does not require grantees in all its programs to report directly on the number of juveniles served directly because many of its programs are not intended to serve juveniles directly. GAO's in-depth review of OJJDP's 10 impact evaluations undertaken since 1995 raises concerns about whether the evaluations will produce definitive results. In some of these evaluations, variations in how the programs are implemented across sites make it difficult to interpret evaluation results. |
The AWACS aircraft first became operational in March 1977, and as of November 2004, the U.S. AWACS fleet comprised 33 aircraft. The aircraft provides surveillance, command, control, and communications of airborne aircraft to commanders of air defense forces. The onboard radar, combined with a friend-or-foe (IFF) identification subsystem, can detect, identify, and track enemy and friendly aircraft at lower altitudes in all weather conditions and present broad and detailed battlefield information. The AWACS airplane is a modified Boeing 707 commercial airframe with a rotating radar dome (see fig. 1). The ailerons and cowlings are similar to commercial 707 parts but were modified for special requirements. The AWACS radome is the covering that houses the airplane's radar and IFF identification system. Half of the radome covers the radar and half covers the IFF system, and the two halves differ in composition. The Air Force purchased only the IFF section of the radome in the two separate purchases. In the past, the Air Force has generally repaired, rather than purchased, the ailerons, cowlings, and radomes, but recently had to purchase new parts to meet operational requirements. Prior to the recent spare parts purchases, the ailerons and cowlings had not been purchased since the mid-1980s, and the last radome unit had not been purchased since 1998. All of the spare parts were purchased as noncompetitive negotiated procurements.

The Federal Acquisition Regulation (FAR) provides guidance for the analysis of negotiated procurements with the ultimate goal of establishing fair and reasonable prices for both the government and the contractor. For a noncompetitive purchase, the contract price is negotiated between the contractor and the government, and price reasonableness is established based primarily on cost data submitted by the contractor. The ailerons were also purchased as a commercial item. For a commercial item, price reasonableness is established based on an analysis of prices and sales data for the same or similar commercial items. For the AWACS spare parts purchases we reviewed, the Defense Contract Management Agency (DCMA) provided technical assistance to the Air Force by analyzing labor hours, material and overhead costs, and contract prices. The Defense Contract Audit Agency (DCAA) provided auditing and cost accounting services. DCMA and DCAA analyses were submitted to the Air Force prior to contract negotiations for the respective purchases.

Since late 2001, the Air Force has negotiated and awarded contracts to Boeing for the purchase of outboard ailerons, cowlings, and radomes totaling over $23 million. Specifically, the Air Force purchased three ailerons for about $1.4 million, 12 right-hand cowlings and 12 left-hand cowlings for about $7.9 million, and three radomes for about $5.9 million. The Air Force paid an additional $8.1 million in costs as part of the initial radome contract to move equipment and establish manufacturing capabilities in a new location (see table 1). The most recent per-unit cost of each part represents a substantial increase from prior purchases. The overall unit cost of the ailerons and cowlings increased by 442 percent and 354 percent, respectively, since they were last purchased in 1986. The unit price for the one radome purchased under the September 2001 contract increased by 38 percent since it was last purchased in 1998, and the unit price nearly doubled two years later under the September 2003 contract. Overall, only a small portion of the price increases could be attributed to inflation.
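The arithmetic behind this comparison is straightforward; the sketch below is a minimal, hypothetical illustration of how a nominal unit-price increase can be separated from the portion that inflation alone would explain. The prices and the cumulative escalation factor shown are placeholder assumptions for illustration only, not the contract figures or the published escalation factors used in this review.

```python
# Hypothetical illustration only: the prices and the cumulative escalation factor
# below are placeholder assumptions, not actual AWACS contract figures or the
# published escalation factors for aircraft parts and auxiliary equipment.

def price_increase_analysis(prior_price, current_price, escalation_factor):
    """Separate a nominal unit-price increase into inflation and above-inflation components."""
    nominal_increase_pct = (current_price / prior_price - 1) * 100
    # Price that would have been expected if only inflation had applied.
    inflation_adjusted_price = prior_price * escalation_factor
    inflation_increase_pct = (escalation_factor - 1) * 100
    above_inflation_pct = (current_price / inflation_adjusted_price - 1) * 100
    return nominal_increase_pct, inflation_increase_pct, above_inflation_pct

# Example: a part last bought for $100,000 and repurchased for $450,000, with a
# cumulative escalation factor of 1.6 over the intervening years.
nominal, inflation_only, above = price_increase_analysis(100_000, 450_000, 1.6)
print(f"Nominal increase: {nominal:.0f} percent")                               # 350 percent
print(f"Increase expected from inflation alone: {inflation_only:.0f} percent")  # 60 percent
print(f"Increase above the inflation-adjusted price: {above:.0f} percent")      # about 181 percent
```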
Figure 2 shows the unit price increases, including adjustments for inflation, for ailerons, cowlings, and radomes. The Air Force and Boeing cited a number of additional factors that may have contributed to higher prices. For all the parts, the Air Force purchased limited quantities, which generally results in higher unit prices. For the ailerons, which had not been purchased since 1986, Boeing officials told us that some of the price increase was attributable to production inefficiencies that would result from working with older technical drawings, developing prototype manufacturing methods, and using different materials in the manufacturing process. The unit price of the cowlings included costs for the purchase of new tools required to manufacture the cowlings in-house—which Boeing decided to do rather than have vendors manufacture the cowlings, as had been done in the past. The new tools included items such as large production jigs, used to shape and fabricate sheet metal. Regarding radomes, the Air Force paid Boeing to relocate tooling and equipment from Seattle, Washington, to Tulsa, Oklahoma, and develop manufacturing capabilities at the Tulsa facility to produce and repair radomes. Boeing had initially decided to discontinue radome production and repair at its Seattle location due to low demand for these parts but, after further consideration of the Air Force’s requirements, decided to relocate the capability in Tulsa. The first radome contract the Air Force awarded Boeing included over $8.1 million to relocate the tooling and equipment and set up the manufacturing process. The remaining $1.2 million was the estimated production cost of the one radome. In negotiating contracts for the outboard ailerons, cowlings, and radomes, the Air Force did not obtain and evaluate information needed to knowledgeably assess Boeing’s proposals and ensure that the spare parts prices were fair and reasonable. In general, the Air Force did not obtain sufficient pricing information for a part designated a commercial item, adequately consider DCAA and DCMA analyses of aspects of contractor proposals, or seek other pricing information that would allow it to not only determine the fairness and reasonableness of the prices but improve its position for negotiating the price. Boeing asserted that the aileron assembly was a commercial item. Under such circumstances, fair and reasonable prices should be established through a price analysis, which compares the contractor’s proposed price with commercial sales prices for the same or similar items. However, when purchasing the ailerons, the Air Force did not seek commercial sales information to justify the proposed price. Instead, the Air Force relied on a judgmental analysis prepared by Boeing, which was not based on the commercial sales of the same or similar aileron. In reviewing the contractor’s submissions of data to the government, both DCMA and DCAA found Boeing’s proposal inadequate for the Air Force to negotiate a fair and reasonable price. DCMA performed a series of analyses on the purchase of the aileron assembly, each of which indicated that Boeing’s proposed unit price was too high. Boeing proposed in November 2002 to sell three aileron assemblies for $514,472 each. Subsequently, DCMA performed three separate price analyses, which indicated that Boeing’s price should be in the $200,000 to $233,000 range. 
However, the Air Force negotiation team did not discuss these analyses with Boeing during negotiations or include them as part of the Air Force’s price negotiation documentation. In January 2003, DCAA reported that the proposed price was “unsupported” and that Boeing did not comply with the Boeing Estimating System Manual, which requires support for commercial item prices. Further, the report said that Boeing must submit cost information and supporting documentation. The Air Force never addressed DCAA’s concerns. Instead, the Air Force relied on the analysis prepared by Boeing and paid $464,133 per unit. The price analyst involved with the negotiation said that, in retrospect, the Air Force should have sought commercial sales information from Boeing, citing this purchase as his first experience with a commercial item. We asked Boeing to provide historical sales information of the same or commercial equivalent item to use as a general benchmark on price reasonableness of the ailerons purchased by the Air Force. According to Boeing representatives, the requested data were not available because the military version of the ailerons had not been produced for over 20 years. Boeing representatives agreed that the Boeing analysis was subjective, but they said the analysis represented the best estimate based on their assumptions and limitations. When negotiating the purchase price for the cowlings, the Air Force again did not use information provided by DCMA or address DCMA’s recommendation that it determine the availability and potential use of existing tools to manufacture the cowlings. Included in the $7.9 million contract for cowlings, Boeing proposed and the Air Force awarded about $1.1 million for the purchase of new tools, such as large production jigs, associated with the manufacture of the cowlings. However, DCMA had recommended in its initial evaluation of Boeing’s proposal that the Air Force give qualified offerors an opportunity to inspect the condition of cowling tools used in prior manufacturing for their applicability and use in fabricating the cowlings. DCMA pointed out that the tools were located at Davis–Monthan Air Force Base in Arizona, where government-owned tooling is often stored when no longer needed for production. However, the Air Force did not accurately determine the existence and condition of the tools. Subsequent to the contract award, Boeing—not the Air Force— determined that extensive government-owned tooling was available at Davis-Monthan and got approval, in May 2004, to use the tools in manufacturing the cowlings. As a result, the cowlings contract included unnecessary tool purchase costs when it was awarded. Air Force and Boeing officials anticipated a contract modification would be submitted to reduce the price as a result of using the existing tools. A significant portion of the September 2001 cost-plus-fixed fee contract that the Air Force awarded to Boeing to purchase one radome unit involved relocating tools and equipment and establishing a manufacturing process at Tulsa. Specifically, over $8.1 million of the contract, which was valued at about $9.3 million, was spent to move equipment and establish a manufacturing process at the Tulsa facility; the price of producing the one radome unit was about $1.2 million. 
About 19 months later, in April 2003, at the Air Force’s request, Boeing provided a proposal to produce two additional radomes at the Tulsa facility, and in September 2003, the Air Force awarded a contract to Boeing to produce the two radomes at over $2.3 million per unit—almost twice the 2001 unit price. Based on our analysis, the Air Force did not obtain adequate data to negotiate a fair and reasonable price for the second radome contract. First, the Air Force requested a DCMA analysis of Boeing’s proposal, but, in late June 2003, DCMA told the Air Force price analyst that, for an unexplained reason, DCMA did not receive the request for assistance; the price analyst then determined that he would waive the technical evaluation, which would forego the benefit of DCMA’s technical expertise. Second, and most importantly, the Air Force did not consider Boeing’s costs under the September 2001 contract, which would have provided important information to help the Air Force determine if it was obtaining a fair and reasonable price for the radomes. In addition to encouraging innovation, competition among contractors can enable agencies to compare offers and thereby establish fair and reasonable prices and maximize the use of available funds. The Air Force determined that Boeing was the sole source for the parts and did not seek competition. However, a DCMA analysis had determined that Boeing’s proposed price for the engine cowlings was not fair and reasonable and, because a subcontractor provided the part in support of the original production contracts, recommended that the cowlings be competed among contractors. From the outset of the cowlings purchase, Air Force documents said that the Air Force did not have access to information needed to compete the part. However, the Air Force has a contract with Boeing that could allow the Air Force to order drawings and technical data for the AWACS and other programs for the purpose of competitively purchasing replenishment spare parts. Nevertheless, Boeing has not always delivered AWACS data based on uncertainties over the Air Force’s rights to the data. Based on discussions with Air Force representatives, Boeing has been reluctant to provide data and drawings in the past, making it difficult for the Air Force to obtain them. Moreover, Boeing maintains that it owns the rights to the technical data and drawings and the Air Force could not use the drawings to compete the buy without Boeing’s approval. It is unclear if the AWACS program office had placed a priority on fostering competition for the cowlings and other spare parts. Representatives of the AWACS spare parts program office at Tinker Air Force Base cited a number of concerns in purchasing the spare parts from vendors other than Boeing. First, they said that the need for these spare parts had become urgent and noted that other vendors would have to pass certain testing requirements, which could be a lengthy process, and that, even with this testing, performance risks and delivery delays were more likely to occur. An overriding concern was that the Air Force establish a good relationship with reliable parts providers, such as Boeing. Program office officials told us that the Air Force would likely be better served in the long run by staying with a reliable supplier rather than competing the parts. In contrast, senior contracting officials at Tinker—who have oversight responsibilities for the contracting activities that support the AWACS program—have a different point of view. 
These officials were concerned about the large price increases on AWACS spare parts and the lack of competition. They stated that the Air Force is a “captured customer” of Boeing because the company is the only source for many of the parts needed to support aircraft manufactured by Boeing, such as the AWACS. According to these senior contracting officials, during the last several years Boeing has become more aggressive in seeking higher profits regardless of the risk involved with the purchase. For example, they told us that, even when the risk to the company is very low, the company is seeking at least a 3- to 5-percent higher fee than in the past. As a result, contracting officers have had to elevate some negotiations to higher management levels within the Air Force. They also said that, without the ability to compete spare parts purchases, the Air Force is in a vulnerable position in pricing such contracts. Earlier in 2004, Boeing and the senior Air Force contracting officials involved with the aircraft programs managed at Tinker began a joint initiative to work on various contracting issues. Concerning data rights, these contracting officials told us that in future weapon systems buys, the Air Force must ensure that it obtains data rights so that it can protect the capability to later compete procurements of spare parts. The Air Force needs to be more vigilant in its purchases of spare parts. The AWACS parts purchases we reviewed illustrate the difficulty of buying parts for aircraft that are no longer being produced as well as buying them under non-competitive conditions. A key problem was that the Air Force did not take appropriate steps to ensure that the prices paid were fair and reasonable. It did not obtain and evaluate information that either should have been available or was available to improve its negotiating position. It did not attempt to develop other sources to purchase the spare parts and promote competition. And, it did not have a clear understanding of its rights to technical data and drawings, which are necessary to carry out competitive procurements. As the AWACS aircraft—like other Air Force weapon systems—continue to age, additional spare parts will likely be needed to keep them operational. Given the significant price increases for the ailerons, cowlings, and radomes, the Air Force needs to look for opportunities to strengthen its negotiating position and minimize price increases. Clearly, competition is one way to do this. Unless the Air Force obtains and evaluates pricing or cost information and/or maximizes the use of competition, it will be at risk of paying more than fair and reasonable prices for future purchases of spare parts. To improve purchasing of AWACS spare parts, we recommend that the Secretary of Defense direct the Secretary of the Air Force to ensure that contracting officers obtain and evaluate available information, including analyses provided by DCAA and DCMA, and other data needed to negotiate fair and reasonable prices; develop a strategy that promotes competition, where practicable, in the purchase of AWACS spare parts; and clarify the Air Force’s access to AWACS drawings and technical data including the Air Force’s and Boeing’s rights to the data. We received written comments on a draft of this report from DOD and The Boeing Company. In its comments, DOD concurred with GAO’s recommendations and identified actions it plans to take to implement the recommendations. DOD’s comments are included in appendix II. 
DOD also provided technical comments, which we incorporated into the report as appropriate. In its comments The Boeing Company provided information that augments the information in the report and provides the company’s perspective on the AWACS purchases. With respect to the prices the Air Force paid for the spare parts, Boeing provided more detailed information to explain the costs associated with each part. However, the information Boeing provided did not change our conclusion that the Air Force did not obtain and evaluate sufficient information to establish fair and reasonable prices. The company also noted that it has worked with Air Force representatives to address issues associated with higher profits and, as of January 2005, was working with the Air Force to address issues associated with access to AWACS technical drawings and data. The Boeing Company’s comments are included in appendix III. We are sending copies of this report to the Secretaries of the Air Force, the Army, and the Navy; appropriate congressional committees; and other interested parties. We will also provide copies to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff has questions concerning this report, please contact me at (202) 512-4841 or by e-mail at cooperd@gao.gov, or James Fuquay at (937) 258-7963. Key contributors to this report were Ken Graffam, Karen Sloan, Paul Williams, and Marie Ahearn. To identify price increases associated with the ailerons, cowlings, and radomes, we reviewed Air Force contracting files and we held discussions with members of the Air Force involved in each purchase, which included contracting officers, negotiators, and price analysts. These officials were located at Tinker Air Force Base, Oklahoma, the location of the Airborne Warning and Control System (AWACS) spare parts program office—the E3 Systems Support Management Office. To account for the impact of inflation, we used published escalation factors for aircraft parts and auxiliary equipment to escalate prices previously paid for the parts to a price that would have been expected to be paid if the prices considered the effects of inflation. To determine whether the Air Force contracting officers obtained and evaluated sufficient information to ensure that Boeing’s prices were fair and reasonable, we held discussions with the Defense Contract Management Agency (DCMA) representatives and obtained copies of reports and analyses prepared by DCMA and Defense Contract Audit Agency (DCAA). We reviewed Air Force contracting files and held discussions with Air Force officials that negotiated the respective purchases, which included contracting officers, negotiators, and price analysts. We also held discussions with representatives of Boeing and visited Boeing production facilities in Tulsa, Oklahoma. The Boeing officials represented several Boeing divisions involved in the purchases including Boeing’s military division (Boeing Aircraft and Missiles, Large Aircraft Spares and Repairs), which had responsibility for negotiating all of the spare parts purchases. Boeing Aerospace Operations, Midwest City, Oklahoma, had contract management responsibility for the purchases. To determine the extent that competition was used to purchase the parts, we reviewed Air Force contracting files and held discussions with members of the Air Force involved in each purchase, which included contracting officers, negotiators, and price analysts. 
We also held discussions with representatives of the AWACS spare parts program office and senior contracting officials responsible for overseeing contracting activities at Tinker Air Force Base, Oklahoma. We conducted our review from August 2003 to November 2004 in accordance with generally accepted government auditing standards. | Over the past several years, the Air Force has negotiated and awarded more than $23 million in contracts to the Boeing Corporation for the purchase of certain spare parts for its Airborne Warning and Control System (AWACS) aircraft. Since they first became operational in March 1977, AWACS aircraft have provided U.S. and allied defense forces with the ability to detect, identify, and track airborne threats. In March 2003, GAO received allegations that the Air Force was overpaying Boeing for AWACS spare parts. This report provides the findings of GAO's review into these allegations. Specifically, GAO identified spare parts price increases and determined whether the Air Force obtained and evaluated sufficient information to ensure the prices were fair and reasonable. GAO also determined the extent to which competition was used to purchase the spare parts. Since late 2001, the Air Force has spent about $1.4 million to purchase three ailerons (wing components that stabilize the aircraft during flight), $7.9 million for 24 cowlings (metal engine coverings), and about $5.9 million for 3 radomes (protective coverings for the radar antennae). The unit prices for the ailerons and cowlings increased by 442 percent and 354 percent, respectively, since they were last purchased in 1986. The unit price of the radomes, purchased under two contracts, nearly doubled from September 2001 to September 2003. Although some of the price increases can be attributed to inflation, other factors, such as re-establishing production processes and procuring limited quantities of the parts, contributed more significantly to the increases. In addition, the 2001 radome contract included about $8.1 million for Boeing to relocate equipment and establish a manufacturing capability at a new location. The Federal Acquisition Regulation (FAR) requires contracting officers to evaluate certain information when purchasing supplies and services to ensure fair and reasonable prices. However, Air Force contracting officers did not evaluate pricing information that would have provided a sound basis for negotiating fair and reasonable prices for the spare parts.
Moreover, the Air Force did not adequately consider Defense Contract Audit Agency (DCAA) and DCMA analyses of these purchases, which would have allowed the Air Force to better assess the contractor's proposals. For example, when purchasing ailerons, the Air Force did not obtain sales information for the aileron or similar items to justify Boeing's proposed price and did not consider DCMA analyses that showed a much lower price was warranted. Instead, the contracting officer relied on a Boeing analysis. None of the spare parts contracts cited in the allegations were competitively awarded—despite a DCMA recommendation that the cowlings be competed to help establish fair and reasonable prices. The Air Force did not develop alternate sources for competing the purchase of the cowlings because it believed it lacked access to technical drawings and data that would allow it to compete the purchase. Yet the Air Force has a contract with Boeing that could allow the Air Force to order technical drawings and data specifically for the purpose of purchasing replenishment spare parts. |
School districts receive funding from a variety of sources, including local, state, and federal governments. Title I funds, received by more than 90 percent of the nation’s school districts and more than 55 percent of all public schools, make up a small portion of most districts’ overall funding. Specifically, in fiscal year 2008, the most recent year for which data were available, about 8 percent of districts’ funding came from federal programs and about 2 percent of districts’ funding came from Title I, which is generally the largest of the federal funding sources for kindergarten through grade 12. In individual districts, the share of funding from Title I ranged from zero to 36 percent in 2008. Generally, Title I allocations to districts are based on the district’s size and percentage of students from low-income families, as well as the population of the district’s state and how much that state spends per pupil on education. The National Center for Education Statistics reports information about school district expenditures based on data collected by the U.S. Census Bureau. These data include information on how much districts spend for particular activities, such as instruction, administration, and instructional support. They also include information on the types of goods and services purchased, such as salaries, benefits, and equipment. However, they do not indicate which funding sources (such as Title I) these expenditures are made from. According to data reported by the National Center for Education Statistics for 2007–2008, 61 percent of total school district expenditures (from all revenue sources) were for instruction. Title I funds are awarded to states, which distribute the vast majority of them for use by school districts, and Title I does not describe allowable uses of funds for specific goods or services. To provide for local flexibility in determining how to use funds, ESEA requires districts to measure academic outcomes and achieve benchmarks, but does not generally dictate how funds are to be spent. Title I funds are intended for instruction and other supportive services for disadvantaged children so that they can master challenging curricula and meet state standards in core academic subjects. Title I does not include a definition of costs related to instruction, or costs unrelated to instruction that school districts must use. While Education has issued guidance on Title I, it has not prescribed specific uses of Title I funds. According to Education officials, the agency is reluctant to endorse spending on any particular good or service, as Education wants to allow schools to spend the money to meet their unique needs and to be free to spend the money creatively. Schools may run two types of Title I programs—targeted and schoolwide. Schools where more than 40 percent of students are from low-income families may operate schoolwide programs, enabling them to serve all children at the school with Title I funds. In targeted-assistance schools, Title I funds may only be used to benefit children who are determined to be eligible by being identified as failing, or most at risk of failing, to meet the state’s student academic achievement standards. Schoolwide programs offer schools more flexibility than targeted programs in using Title I funds because they may use these funds to support all students, regardless of students’ Title I eligibility, and to fund a comprehensive school plan to upgrade all the instruction in a school. 
Schoolwide programs also offer additional fiscal flexibility when schools combine separate program resources into a single accounting fund. The schoolwide model has become the dominant model as schools have opted to take advantage of the flexibility to serve all students. However, schools and districts are still responsible for maintaining appropriate internal controls over all federal education funds. While Title I is a flexible funding source, ESEA contains some provisions requiring a minimum percentage or limiting the maximum percentage of funds that can be used for specific purposes. For example, the law requires that, generally, a state spend no more than 1 percent on administration. States are required to reserve 4 percent of Title I funds to provide school districts with funds for school improvement activities, unless this amount would reduce school districts' Title I grant below the amount received in the prior year. States may also reserve up to 5 percent of Title I funds in excess of the state's previous year allocation for academic achievement awards to schools. Similarly, the law requires that school districts in need of improvement reserve at least 10 percent of Title I funds for teacher professional development. School districts with schools in need of improvement must also spend specific percentages of Title I funds to provide student transportation to support public school choice and supplemental educational services to students in those schools. Among other provisions, the law also contains several fiscal requirements, including a maintenance of effort requirement that districts' state and local funding levels not decrease by more than 10 percent in any year; a stipulation that Title I funds be used to supplement, not supplant, state and local funds; and a requirement that state and local funds be used to provide comparable services to schools receiving Title I funds and those not receiving such funds.

While ESEA limits the percentage of Title I funds that states may use for administrative purposes, it does not limit the amount that school districts may use. We noted in our 2003 report that there is no specific definition of administrative activities for Title I and that Education's general administrative regulations and guidance address the issue of how grantees should identify administrative costs. Education's general administrative regulations state that "administrative requirements mean those matters common to grants in general, such as financial management, kinds and frequency of reports, and retention of records." In 1998, Education issued a report entitled The Use of Federal Education Funds for Administrative Costs, which discussed various definitions of administrative costs and activities in common use. Guidance issued by Education on what constitutes administrative costs states that "[t]he costs of administration are those portions of reasonable, necessary and allowable costs associated with the overall project management and administration. These costs can be both personnel and nonpersonnel costs and both direct and indirect." The guidance provides a list of examples of direct administrative costs such as the salaries, benefits, and other expenses of staff that perform overall program management, program coordination, and office management functions. Indirect costs represent the expenses that are not readily identified with a particular grant function or activity, but are necessary for the general operation of the district and the conduct of activities it performs.
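Districts typically recover such indirect costs by applying an approved indirect cost rate to some or all of their direct costs. The sketch below is a minimal, hypothetical illustration of that calculation; the 5 percent rate, the budget amounts, and the excluded line item are assumptions for illustration only and do not reflect any district we reviewed or any state-approved rate.

```python
# Hypothetical illustration only: the 5 percent rate, the budget amounts, and the
# excluded line item are assumptions, not figures from any district we reviewed.

def indirect_cost_charge(direct_costs_by_item, indirect_cost_rate, excluded_items=()):
    """Apply an approved indirect cost rate to the allowable direct-cost base."""
    base = sum(amount for item, amount in direct_costs_by_item.items()
               if item not in excluded_items)
    return base * indirect_cost_rate

# Illustrative Title I direct-cost budget for one district (dollars).
title_i_direct_costs = {
    "salaries": 700_000,
    "benefits": 200_000,
    "purchased services": 50_000,
    "supplies and materials": 40_000,
    "equipment": 10_000,
}

# Assume the state-approved rate is 5 percent and may not be applied to equipment.
charge = indirect_cost_charge(title_i_direct_costs, 0.05,
                              excluded_items=("equipment",))
print(f"Indirect costs charged to the Title I grant: ${charge:,.0f}")  # $49,500
```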
An indirect cost rate is a mechanism for determining what proportions of a district’s overall administration costs each program should bear and is expressed as a percentage of some or all of the direct cost items in the district’s budget. For example, the costs involved with providing office space, financial services, or general payroll services to officials who administer Title I grants cannot be directly allocated to the grant because these services are provided to a large number of people. The indirect cost rate, which is approved by the state, accounts for these types of expenses. The 12 selected school districts we visited used Title I funds for activities intended to improve academic outcomes for low-income students, primarily in elementary school, through a variety of initiatives, such as reducing class sizes and expanding instructional hours. As seen in figure 2, these funds represented a relatively small proportion of total revenues in our selected districts, from less than 1 to more than 8 percent. However, Title I funds may be used in conjunction with local, state, or other federal funding sources to support larger initiatives than Title I funds alone could support. For example, Title I funds could be used along with ESEA Title II (Improving Teacher Quality State Grants) funds to support a literacy program or to support a supplemental component, such as small group instruction, of a larger state-funded or locally funded literacy initiative. In this case, the Title II funds might be used to provide professional development to teachers in the literacy program, and Title I might be used to hire teachers for small group instruction. Given the relatively small proportion of funding they received from Title I, 8 of the 12 districts we visited chose to target Title I funds at the elementary grade levels, where officials said they believed the funds would provide the greatest improvements in academic achievement. This strategy is consistent with the findings of a study of a nationally representative sample of 300 school districts nationwide on targeting and uses of federal education funds, which found that elementary schools received 76 percent of Title I funds allocated to schools, considerably more than their share of the nation’s low-income students (57 percent). While most Title I funds are directed to elementary schools, two urban districts that we visited provided funds to all schools in the district. In one district, officials said that they began using Title I funds for high schools only after state funding for high poverty schools became unavailable. Nearly all of the schools in the 12 districts we visited used Title I funds for schoolwide programs, rather than for targeted assistance programs. Schoolwide programs offer flexibility by allowing schools to fund a comprehensive schoolwide plan to upgrade all instruction in a high poverty school without distinguishing between eligible and ineligible children and also make it easier for schools to coordinate the use of Title I and other funds. While schoolwide programs offer additional fiscal flexibility when schools combine separate program resources into a single accounting fund, the school districts we selected for review continued to track the Title I dollars to individual eligible activities, even as they took advantage of the flexibility to serve all children. 
In our selected districts, the remaining 6 percent of schools with targeted assistance programs tended to have lower poverty levels below or only slightly above the 40 percent threshold required for a schoolwide program. The districts we visited used Title I funds in support of a variety of initiatives. Funds were used to reduce teacher/student ratios and extend instructional time. Our reviews of Title I grant applications and interviews with district officials indicated that 10 districts used funds to pay for additional teachers and teachers’ aides or assistants (paraprofessionals) during the regular school day as a way to reduce class sizes in Title I schoolwide programs, provide supplemental instruction to small groups of Title I eligible students in targeted assistance schools, or provide additional attention to students within a classroom, among other things. Furthermore, eight districts used Title I funds to extend the time that students spend in the classroom through after school and summer school programs. Title I funds supported these initiatives in a variety of ways, including paying for teachers’ time, instructional materials, and student transportation to and from the program. Of the 12 districts we visited, 10 used at least some portion of their Title I funds to provide for the professional development of their teachers, including districts that were required to spend not less than 10 percent of Title I funds for this purpose as a result of being designated a district in need of improvement. Professional development included traditional classroom or workshop training as well as the use of math and literacy coaches to help teachers implement training they received in the classroom. Such coaches develop lesson plans and model the use of the lesson plan. They observe teachers in the classroom and provide feedback and coaching. Title I funds were used to purchase training services, hire substitutes for teachers’ time spent in training, pay for attendance at workshops or conferences, and pay the salaries and benefits of math and literacy coaches. Several selected districts used funds to purchase technology for the classroom to assess student progress, to provide differentiated learning experiences to students at various levels of achievement, or improve the learning experience. Software purchases included both assessment software as well as math and literacy software. For example, districts purchased software that could produce reading materials that contained similar content at different reading levels. Others purchased software that assessed student reading levels and provided lessons that could be adjusted based on reading levels. Several districts used Title I funds to purchase hardware for classrooms including computers, printers, and large screen displays. Districts also purchased tools such as interactive whiteboards that allow teachers and students to project computer images and interface with the technology at the board, voting devices that allow a teacher to gauge student comprehension instantaneously, and document cameras that allow teachers to project a photo of a scientific specimen or share a document for a lesson. In the 12 districts we visited, which generally pursued personnel-intensive strategies to improve academic outcomes, we found that salaries and benefits, when combined, were the largest category of Title I expenditures. 
In all but two districts, Providence and Orleans Parish, salaries and benefits made up at least 70 percent of Title I expenditures (see fig. 3). This finding is generally consistent with findings of the Educational Research Service, which found that districts spend about 80 percent of all funds on salaries and benefits. For more detailed profiles of Title I spending in the districts we visited, see appendix II.

In the 12 selected districts, we found that from 65 to 100 percent of full-time equivalents (FTE) whose salaries were paid for with Title I funds were instructional staff, including teachers and paraprofessionals such as teachers' aides. Of the more than 1,300 FTEs whose salaries and benefits were paid for with Title I funds in the 12 districts we visited, 82 percent were instructional personnel (see fig. 4). Education's 2009 study on targeting and uses of Title I and other federal funds found that, at the individual school level, about 88 percent of personnel expenditures were used to pay for teachers and paraprofessionals. The study also found that the highest poverty schools spent a lower percentage of Title I funds on instructional staff and a higher percentage on instructional support staff, such as instructional coaches, librarians, or social workers, than the lowest poverty schools did. While a large portion of Title I FTEs were instructional personnel, the types of instructional personnel varied by district, with some districts paying only or primarily teachers and others paying primarily paraprofessionals or teachers' aides. On average, the 12 selected districts used Title I funds to pay about 2.3 teachers for every teacher's aide. Education's recent study found that Title I funds were increasingly used to pay for teachers rather than paraprofessionals, such as teachers' aides. From the 1997–1998 school year to the 2004–2005 school year, measured in FTEs, the total number of Title I staff increased by 49 percent whereas the number of Title I teachers' aides declined by 10 percent. The proportion of teachers' aides among Title I school staff declined from 47 to 35 percent, whereas the share of teachers rose from 45 to 55 percent during the same period. In seven of the school districts we visited, Title I funds were also used to pay for various instructional support personnel. Instructional support personnel accounted for 10 percent of FTEs paid with Title I funds in the 12 districts we visited. These personnel included instructional coaches, who coached teachers of Title I students; librarians to provide additional literacy support for students in schoolwide programs that were not meeting standards in reading; and counselors to support students who were not meeting standards and their families by providing academic support and information about academic requirements. Similarly, Education's study found that about 7 percent of Title I funds spent on personnel at the school level were spent on instructional support staff.

Some of the personnel paid for with Title I funds were administrative personnel, such as Title I directors or coordinators and administrative assistants. The percentage of FTEs charged to Title I that were administrative varied from 0 to 15 percent and accounted for 8 percent of all Title I personnel in the school districts we visited. In a few school districts we visited, administrative personnel also included personnel who supported parental involvement activities.
On the other hand, one district opted not to charge Title I for any of its administrative costs. Larger districts with greater levels of Title I funds and more schools in need of improvement had more personnel dedicated to overseeing the use of Title I funds. Education’s study found that 5 percent of Title I funds spent on personnel at the school level were spent on administrative personnel. Prior studies have also found that more than 80 percent of Title I funds are spent or budgeted for instruction-related (versus administrative) purposes. Although we were not able to compare the percentage of all expenditures selected districts made in the instructional, instructional support, and administration categories, Education’s study of Title I budget and expenditures found that, on average, districts expended 10 percent of Title I funds for administrative costs. It also found that more than 70 percent of Title I funds were spent on instruction, primarily for salaries and benefits for instructors, but also including some instructional materials and equipment. The same study found that less than 20 percent of funds were used for instructional support, which includes professional development and student support, in addition to instructional support personnel. In our 2003 report on Title I spending by school districts, which focused on selected districts using a common financial system, we reported that the 6 selected districts we reviewed spent 0 to 13 percent of Title I funds on administration and that each district spent at least 84 percent of Title I funds on activities related to instruction. In addition, we identified several studies that had focused on the use of Title I funds for administrative purposes and had generally found that districts spent 4 to 10 percent of Title I funds on administrative activities, but definitions of administrative expenditures varied. Title I requirements appeared to drive increased spending on purchased services by larger districts. The larger, more urban school districts we visited, which also had larger percentages of schools in need of improvement, spent substantially more of their Title I funds on purchased or contracted services than other districts, due to requirements that they provide supplemental educational services and transportation for school choice and spend a certain percentage on professional development. As shown in figure 5, 8 of the 12 districts spent less than 5 percent on services, while the other 4 urban districts spent 9 to 28 percent on services. Purchased services accounted for 17 percent of all Title I expenditures in the districts we visited. Three of the four districts that spent 9 to 28 percent on services had 23 to 72 percent of their schools in need of improvement. The eight remaining districts spent 0 to 5 percent of their Title I funds on services and had 0 to 11 percent of their schools designated as in need of improvement. Service expenditures for the larger, urban districts included payments to vendors chosen by parents to provide supplemental educational services and consultants hired to assist schools with activities such as curriculum development and redesign or to provide professional development for teachers. The smaller districts with few or no schools in need of improvement had service expenditures for other types of services, such as license agreements for software.
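The percentage figures used throughout this section are each object category’s share of a district’s total Title I spending. The following is a minimal sketch of that computation; the expenditure records and amounts are hypothetical illustrations, not any district’s actual accounting data.

```python
from collections import defaultdict

# Hypothetical Title I expenditure records for one district: (object category, amount).
expenditures = [
    ("salaries", 5_200_000), ("benefits", 1_400_000),
    ("purchased services", 900_000), ("supplies and materials", 600_000),
    ("equipment", 150_000), ("indirect costs", 250_000),
]

# Sum amounts by object category, then express each category as a share of the total.
totals = defaultdict(float)
for category, amount in expenditures:
    totals[category] += amount

grand_total = sum(totals.values())
for category, amount in sorted(totals.items(), key=lambda item: -item[1]):
    print(f"{category}: {amount / grand_total:.0%} of Title I expenditures")
```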
Materials and supplies, including instructional materials, made up 0 to 20 percent of Title I expenditures in the school districts we visited and averaged 9 percent across all districts (see fig. 6). Material and supply expenditures included supplementary reading kits; supplementary workbooks; office supplies; and food and publicity for parent involvement activities. Seven of the districts we visited spent Title I funds on equipment. The most spent in any district on this category of expenditures was just less than 6 percent ($1.8 million) of its Title I funds. On average, this category of expenditures accounted for 2 percent of all Title I expenditures in the 12 school districts we visited. In those districts that had expenditures in this category, the expenditures were generally for computer and other electronic equipment. Some school districts classified all electronic equipment as equipment, property, or capital purchases, while other districts distinguished between types of equipment based on dollar value or susceptibility to theft. For example, a laptop computer costing $800 might be considered a supply, but a mobile docking module costing $6,600 that allows a teacher to transport all necessary computer equipment from room to room could be considered equipment or property. In addition to the expenditures directly attributable to the Title I program, the 12 selected districts charged 0 to 12 percent of their Title I grants to indirect administrative costs (see fig. 7). On average, about 4 percent of Title I expenditures in the districts we visited were for indirect administrative costs. Several small districts with small grants chose not to charge an indirect cost rate to their Title I grants. As part of the budgeting process, school districts submitted detailed budgets to state educational agencies that included amounts set aside or reserved from the Title I grant for specific uses by the district, including district-managed services, such as professional development. Some of these reservations are mandatory, and others are optional. Each state had different categories of optional reservations or set-asides. Optional set-asides included funds for administration or summer school. The five selected districts we visited that reserved funds for supplemental educational services and transportation set aside 4 to 18 percent of their Title I funds for those purposes. While the districts we visited with schools in need of improvement typically set aside larger amounts for supplemental educational services, transportation to support school choice was typically not a large budget item. In two cases, this was because the district provided choice and transportation to all students regardless of their schools’ performance, and paid for this with local funds. In one district, the number of students opting to transfer to a different school was relatively small. Total mandatory and optional amounts set aside, including amounts for district-managed services such as administration, professional development, or supplemental educational services, accounted for 0.1 to 68 percent of Title I funds in the districts we visited. In a few of the larger districts we visited, the total mandatory and optional amounts set aside exceeded 50 percent of the Title I grant. Mandatory set-asides alone amounted to as much as 28 percent in two districts.
This is consistent with the findings of Education’s study, which reported that from the 1997–1998 school year to the 2004–2005 school year, the share of Title I funds allocated to individual schools declined from 83 to 74 percent, while the share used for district-managed services rose from 9 to 21 percent. Title I recipients are subject to various oversight mechanisms, which provide some information on noncompliance with relevant spending requirements, but are not designed to provide estimates of the prevalence of noncompliance with requirements regarding use of funds. Table 1 summarizes the strengths and limitations of these oversight mechanisms for this purpose, along with further information. Education monitors how well states ensure school district compliance with Title I requirements; however, this monitoring is not designed to capture all district noncompliance with spending requirements. Education generally monitors state implementation of the Title I program and evaluates the extent to which states ensure district and school compliance with a broad array of Title I requirements. As a part of its assessment of state oversight of districts and schools, Education reviews Title I compliance in two to three school districts in each state being reviewed. Some Title I requirements reviewed by Education relate to how Title I funds are spent. However, many other requirements deal with nonfiscal programmatic issues. While ensuring compliance with Title I spending requirements plays a part in Education’s monitoring strategy, officials we spoke with cautioned that monitoring is not designed to capture all instances of noncompliance. For instance, Education officials do not conduct detailed reviews of districts’ Title I expenditures to identify unallowable expenses, but rely primarily on other sources of oversight, such as OIG audits, for this purpose. Education uses the results of its monitoring efforts and others’ oversight efforts to design technical assistance and training initiatives to assist states and school districts in using their resources and flexibility appropriately. It also uses results to target future monitoring efforts based on risk. In some cases, Education may place conditions on the further receipt of grant funds. We reviewed Education’s 2009–2010 monitoring reports and other relevant documents for findings related to how Title I funds were spent in the 16 states and the District of Columbia that Education monitored that year. Among findings Education identified as common, some dealt with program compliance issues that were unrelated to how Title I funds were spent, such as failing to ensure that districts notified parents about supplemental educational services or school choice in a timely manner or failing to post information about school choice on their Web sites. Other findings were related to how Title I funds were spent, such as failure to ensure that school districts allocated funds according to Title I requirements, met various fiscal requirements, or used Title I funds to support only paraprofessionals who had the required qualifications. For instance, sampled districts in several states did not distribute at least 95 percent of parental involvement funds to schools, as required. In other cases, states failed to ensure that districts accurately calculated the amount of Title I funds to reserve for services to participating private school students.
There were also cases in which states did not prevent districts from using Title I funds to supplant state and local funds, which is prohibited under Title I. For example, one Arkansas district required its Title I schools to use Title I funds to pay for electricity and cleaning supplies, while other schools used nonfederal general funds for these items. Education required Arkansas to notify the district that this practice was not allowable and submit evidence that the district subsequently provided general funds to its Title I schools for these purposes. In addition, Education identified instances where districts paid paraprofessionals at Title I schools who did not meet Title I qualification requirements, indicating that Title I funds may have been used to support ineligible staff. For instance, 82 paraprofessionals in one Illinois school district did not have the required qualifications. The state was required to submit an action plan demonstrating how it would ensure that all paraprofessionals would meet qualification requirements prior to the beginning of the next school year. Education’s OIG also conducts audits of selected districts, using risk-based criteria that include tips, past audit findings, and other known weaknesses, according to officials. OIG has conducted a number of audits examining fiscal controls over funds from the Title I program and other federal Education grants, and Education and OIG officials we spoke with concurred that these audits tend to involve more in-depth financial reviews than Education’s monitoring activities. OIG audits likely provide a more comprehensive look at specific districts’ controls over Title I funds than other oversight mechanisms; however, since OIG selects districts due to known risk factors, the weaknesses it identifies are not necessarily representative of those of Title I recipients at large. Education’s OIG has identified some instances in which selected districts spent Title I funds for unallowable purposes, did not adequately document Title I expenditures, or used Title I funds to supplant state and local funds. OIG has also found that inadequate policies and procedures, including inadequate state monitoring of districts, are a common cause of such violations. An OIG final management information report released in 2009 described the results of 41 final audit reports with findings related to district fiscal controls over formula grant programs, most frequently Title I. OIG identified unallowable personnel costs in 8 of the 16 audits where such costs were reviewed. OIG also identified unallowable nonpersonnel costs in 9 of the 20 audits where such costs were reviewed; these costs were unnecessary, unreasonable, or not in keeping with program purposes. As an example, OIG found in 2009 that the Dallas Independent School District spent about $142,000 in Title I funds for salaries and benefits of non-Title I employees and $17,000 for books distributed to non-Title I schools. OIG recommended that the district return these funds to Education. Completion of corrective action to address this audit finding was pending as of May 2011. To address the systemic issues that OIG identified, Education responded that it would use the information provided in the management information report in the development of its technical assistance plan and training curricula to provide enhanced guidance to states and school districts. In other cases, OIG found that districts had inadequate documentation supporting Title I expenditures.
For example, the school district of the City of Detroit was cited in 2008 for failure to maintain required time and effort certifications or personal activity reports for staff funded wholly or partially through Title I funds totaling about $48 million. These activity reports help ensure that the amount of Title I funds budgeted and claimed for Title I personnel is accurate. OIG recommended that Education instruct the state to require the district to return personnel expenditures that could not be adequately documented, but the state and district disputed the majority of this finding because, among other reasons, they claimed that they had provided credible after-the-fact documentation. Completion of corrective action to address this audit finding was pending as of May 2011. OIG identified other districts that used Title I funds to supplant regular nonfederal funds. For instance, OIG found that a school district in New York inappropriately reclassified about $68,000 in textbook costs from general funds to Title I expenditures. The state and district agreed with OIG’s recommendation to return these funds to Education with interest. Single audits, which were completed on nearly 9,000 school districts that spent $11.8 billion in Title I funds in fiscal year 2009, are another important Title I oversight mechanism, but the single audit requirement does not cover school districts with less than $500,000 per year in federal expenditures. Because of the large number of school districts that had single audits, analyzing data on the audits can provide useful information about the audited school districts’ compliance with requirements related to Title I expenditures. However, available summary data do not provide in-depth information about the nature and severity of identified weaknesses. Of the 8,720 school districts with Title I expenditures that submitted a fiscal year 2009 single audit report to the Federal Audit Clearinghouse, 4,005 (46 percent) had audits that reported the Title I program as a major program, which was therefore tested by the auditors for compliance with the requirements that could have a direct and material effect on Title I. Auditors reported no Title I findings for 82 percent of those audits. We analyzed data on the remaining 737 audits to determine the type of compliance finding most commonly reported in the audit reports. The allowable costs/cost principles category of compliance requirement was cited most frequently. This finding occurred in 301 (about 8 percent) of the 4,005 audits that examined Title I spending, or in about 40 percent of the 737 single audits that resulted in one or more findings related to Title I. Costs charged to a project must generally be allowable under the terms of the grant, actually associated with the project to which they are charged, and reasonable. Therefore, the single audit findings related to allowable costs/cost principles indicate that the audited district did not comply with one or more of these criteria. Of the 737 single audits with findings, the nature and severity of the findings varied. For instance, in one single audit we reviewed, one finding indicated that, for an employee paid out of Title I funds, the district failed to maintain time and effort documentation certifying that the employee worked solely on Title I activities. Not maintaining appropriate records could lead to Title I funds being used for costs not related to the program or not allowable under the terms of the grant.
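The rates cited in this discussion follow directly from the counts reported to the Federal Audit Clearinghouse. The following is a minimal sketch of the arithmetic; the variable names are ours, and the counts are those reported above for fiscal year 2009.

```python
# Counts reported above for fiscal year 2009 single audits.
audits_submitted = 8_720        # districts with Title I expenditures that submitted single audits
title1_major_program = 4_005    # audits in which Title I was tested as a major program
audits_with_findings = 737      # of those, audits with one or more Title I findings
allowable_cost_findings = 301   # audits citing the allowable costs/cost principles requirement

print(f"Title I tested as a major program: {title1_major_program / audits_submitted:.0%}")                         # 46 percent
print(f"Tested audits with no Title I findings: {1 - audits_with_findings / title1_major_program:.0%}")            # 82 percent
print(f"Allowable-cost findings, share of tested audits: {allowable_cost_findings / title1_major_program:.0%}")    # 8 percent
print(f"Allowable-cost findings, share of audits with findings: {allowable_cost_findings / audits_with_findings:.0%}")  # about 41 percent
```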
In other cases, auditors reported that some school districts had used Title I funds in a manner inconsistent with Title I requirements. For instance, auditors reported that one district did not meet the requirement that 100 percent of teachers of core academic subjects be highly qualified. The auditor found that 5 of the 33 full-time instructional employees were not highly qualified, and, therefore, the district was not in compliance with the requirement. In such cases, states are responsible for issuing a written evaluation of the audit finding that specifies the necessity for corrective action. Auditors also test and report on internal controls over compliance for major programs when performing single audits. We reviewed selected single audits that reported material weaknesses in internal controls over compliance related to Title I grants. For example, the auditor reported that one school district failed to allocate Title I funds to each participating school attendance area or school in rank order, based on the number of low-income children residing in the area or attending the school, as required by Title I. As a result of this material weakness, this district may have funded lower-poverty schools at the expense of higher-poverty schools. Of the 8,720 single audits, about 550 (6 percent) identified a material weakness in internal control over compliance in at least one federal program examined. However, because we relied on summary data in our analysis, we could not determine the proportion of these material weaknesses specifically related to Title I, as opposed to other federal programs included in the audits. We provided a draft copy of this report to the Secretary of Education for review and comment. We received technical comments and incorporated them where appropriate. We are sending copies of this report to relevant congressional committees, the Secretary of Education, and other interested parties. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7215 or scottg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff that made key contributions to this report are listed in appendix III. To perform this work, we visited 12 school districts, 3 in each of 4 states—Louisiana, Ohio, Rhode Island, and Washington. We selected states and school districts based on the characteristics described in the mandate, including variation in size, student demographics, economic conditions, and geographic locations. For instance, we first selected states in different regions of the country. Then, we selected districts that were urban, rural, and suburban, and had a mixture of poverty levels and demographic diversity. We reviewed Title I plans, audits, and budget and expenditure reports that detailed the use of Title I funds by selected school districts. Districts tended to use a consistent set of categories to describe what was purchased, such as expenditures for salaries, benefits, supplies, and services. However, selected districts did not consistently classify expenditures in a way that allowed us to ascertain their purpose (for example, instructional versus noninstructional). 
Therefore, while we were able to ascertain what proportions of staff funded by Title I were in administrative, instructional, and instructional support positions, we were not able to directly compare all instructional versus noninstructional expenditures. We also assessed the reliability of the expenditure data provided to us by the districts by (1) reviewing data for completeness and obvious inconsistencies, (2) comparing them with other available expenditure data to determine data consistency and reasonableness, (3) interviewing district Title I and financial officials about expenditure data quality control procedures, and (4) selecting transactions in varied expenditure categories and reviewing documentation for those transactions to determine whether the amount and expenditure category were accurately recorded. We selected a nongeneralizable sample of transactions from major categories, such as salaries, benefits, or services, to include expenditures that appeared typical as well as expenditures that did not appear to fit the category they were in. We determined that the amounts and object categorization of expenditures were sufficiently reliable for the purpose of describing the nature of district Title I expenditures and making broad comparisons of districts’ expenditures. However, we did not determine whether all costs were allowable or met all documentation requirements. Due to the limited number of districts selected, our findings from these districts cannot be generalized to school districts nationwide. We conducted semi-structured interviews with state and local education officials to better understand and discuss their states’ or districts’ use of Title I funds. State and local education officials described their procedures for ensuring funds were spent appropriately, but we did not test these procedures. We analyzed relevant federal laws, regulations, and guidance related to spending of Title I and other Department of Education (Education) funds, as well as accounting methods and protocols issued by the states. Additionally, we conducted a literature search to determine what researchers have found regarding how Title I funds have been spent. We searched online databases, including ERIC, Dialog Databases, NTIS, PolicyFile, ProQuest, and Statistical Insight using keyword “Title I” alone and along with “spending,” “expenditures,” and “administrative” to identify references, including studies, journal articles, and other material, that focused on expenditure of Title I funds. We also searched for studies that cited studies we had identified in our 2003 report. Overall, we identified 99 references in material published from 2001 to 2010. To further winnow down the list of publications, we refined our search to studies that examined Title I expenditures for multiple districts. We were left with only one study that met our criteria. We also interviewed researchers of the identified study and Education officials to determine if there were other relevant studies. They did not identify any that described Title I expenditures. We reviewed findings from Education’s fiscal year 2009 monitoring efforts, audits conducted by Education’s Office of the Inspector General from fiscal years 2003 to 2009 that included a review of fiscal controls over Title I funds, and data on school district single audits for fiscal year 2009 to determine whether Title I funds were used in accordance with relevant requirements. 
Findings from each of these oversight tools provide useful information about types of noncompliance seen among local school districts, but it is important to note that they are not designed to provide estimates of the extent or severity of such noncompliance. School districts expending at least $500,000 in federal funds are required to obtain a single audit and file audit results with the Federal Audit Clearinghouse, which has been designated as the central collection point, repository, and distribution center for single audit reports and maintains a database of single audit results. To describe the results of single audits, we analyzed selected data that were reported to the Federal Audit Clearinghouse by school districts. We assessed the reliability of the Federal Audit Clearinghouse data on single audits by (1) performing electronic testing of required data elements, (2) reviewing existing information about the data and system that produced them, and (3) interviewing U.S. Census Bureau officials knowledgeable about the database. While the Federal Audit Clearinghouse conducts testing to help ensure that submitted data are internally consistent and that all required data fields are completed, and requires that both the submitter and the independent auditor certify the report, it does not verify the accuracy of reported data or ensure that all entities required to report data do so. It is important to note, however, that many school districts are not required to obtain single audits and are therefore not represented in this database. Additionally, some entities required to report may not do so, and others may report inaccurate data. Despite these limitations, we determined that these data were sufficiently reliable for the purpose of describing Title I-related findings in submitted audits of school districts. In order to isolate and analyze single audit data for school districts, we used codes indicating the type of entity being audited, as assigned by U.S. Census Bureau staff reviewing single audit submissions. We included all entries coded as pertaining to school districts with Title I expenditures in our universe. Due to complexities in how school districts are organized, as well as potential inconsistencies in how entities are coded by U.S. Census Bureau staff, we manually reviewed entries with Title I expenditures that were either missing an entity code or had an entity code that did not clearly indicate that the audited entity was a school district. We kept all entries in categories where we were able to determine that the large majority of records corresponded with school districts, and removed all entries in categories where this was not the case.
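The record-screening logic just described can be sketched roughly as follows. This is an illustration rather than the code we used for this review; the column names and entity codes are hypothetical stand-ins for fields in a Federal Audit Clearinghouse extract.

```python
import pandas as pd

def screen_clearinghouse_records(records: pd.DataFrame,
                                 district_codes: set,
                                 ambiguous_codes: set):
    """Among records with Title I expenditures, keep those clearly coded as school
    districts and set aside those with missing or ambiguous entity codes for manual review."""
    has_title1 = records["title1_expenditures"] > 0
    entity = records["entity_type"]
    kept = records[has_title1 & entity.isin(district_codes)]
    needs_review = records[has_title1 & (entity.isna() | entity.isin(ambiguous_codes))]
    return kept, needs_review

# Example with made-up records: "SD" stands in for codes that clearly indicate a
# school district; blank or "OTHER" codes would be reviewed by hand before being
# kept or removed.
sample = pd.DataFrame({
    "entity_type": ["SD", "SD", None, "OTHER", "CITY"],
    "title1_expenditures": [1_200_000, 0, 450_000, 300_000, 80_000],
})
kept, review = screen_clearinghouse_records(sample, {"SD"}, {"OTHER"})
```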
We conducted this performance audit from May 2010 through June 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

The following district profiles provide a snapshot of Title I expenditures in the 12 school districts we reviewed for the 2008–2009 school year. We obtained data from a variety of sources, including the districts themselves, the states, and the National Center for Education Statistics. We attempted to provide data that are comparable, but each state, and in some cases each district, had its own accounting system and its own accountability systems. Districts did not account for the same types of expenditures in the same ways. Similarly, each state has its own tests for 8th grade reading and mathematics. While it may be appropriate to compare achievement results within a state, it would not be appropriate to compare achievement results across states. In addition, we provide some background and demographic information on each district, but each district has unique needs and characteristics that may not be reflected in the background information provided in this appendix. Each profile identified the district’s geographic category, the total number of schools and the number receiving Title I funds, the mix of schoolwide and targeted assistance programs, services delivered by top vendors, Title I staff in full-time equivalents, and the overall Title I budget showing discretionary and required reservations. Key characteristics of the 12 districts were as follows:

Town (fringe): 36 schools in the district; 22 (61 percent) receiving Title I funds; 21 schoolwide programs (95 percent) and 1 targeted assistance program (5 percent); vendor services included desktop computers for a reading program (2 vendors).

Suburb (small): 54 schools; 21 (39 percent) receiving Title I funds; instructional support staff included 1 librarian/other.

City (midsize): 19 schools; 19 (100 percent) receiving Title I funds; purchases included computers for skills practice; instructional support staff included 1 librarian/other.

Southeast Local School District (Wayne Co.), rural (distant): 6 schools; 4 (67 percent) receiving Title I funds.

Suburb (large): 10 schools; 6 (60 percent) receiving Title I funds; 5 schoolwide programs (83 percent) and 1 targeted assistance program (17 percent); Title I staff included 25 teachers and 2 instructional coaches; figures include homeless, migrant, and limited English proficient students.

City (large): 112 schools; 108 (98 percent) receiving Title I funds; services delivered by top five vendors included supplemental educational services (4 vendors).

Rural (fringe): 5 schools; 1 (20 percent) receiving Title I funds; no schoolwide programs and 1 targeted assistance program (100 percent); Title I staff included 2 teachers.

City (small): 22 schools; 8 (36 percent) receiving Title I funds; 4 schoolwide programs (50 percent) and 4 targeted assistance programs (50 percent); vendor purchases included computers and supplies (2 vendors); administrative positions totaled less than 1 FTE.

City (midsize): 47 schools; 47 (100 percent) receiving Title I funds; vendor services included supplemental educational services (3 vendors).

Suburb (large): 12 schools (35 percent) receiving Title I funds; 5 schoolwide programs (42 percent) and 7 targeted assistance programs (58 percent).

Suburb (midsize): 19 schools; 5 (26 percent) receiving Title I funds; 3 schoolwide programs (60 percent) and 2 targeted assistance programs (40 percent); administrative positions totaled less than 1 FTE (clerical).

City (large): 99 schools; 33 (33 percent) receiving Title I funds; 31 schoolwide programs (94 percent) and 2 targeted assistance programs (6 percent); vendor services included supplemental educational services (4 vendors).

The following staff members made key contributions to this report: Cornelia Ashby, Director; Betty Ward-Zukerman, Assistant Director; Lara Laufer, Analyst-in-Charge; Matthew Alemu; Phyllis Anderson; James Bennett; Jessica Botsford; William Colvin; Susannah Compton; Ranya Elias; Catherine Hurley; Kimberly McGatlin; Sarah McGrath; Jean McSween; Maria Morton; and Ellen Phelps Ranen.

Recovery Act: Opportunities to Improve Management and Strengthen Accountability over States’ and Localities’ Uses of Funds. GAO-10-999. Washington, D.C.: September 20, 2010.

Recovery Act: States Could Provide More Information on Education Programs to Enhance the Public’s Understanding of Fund Use. GAO-10-807. Washington, D.C.: July 30, 2010.

No Child Left Behind Act: Education Actions May Help Improve Implementation and Evaluation of Supplemental Educational Services. GAO-07-738T. Washington, D.C.: April 8, 2007.

No Child Left Behind Act: Education Actions Needed to Improve Local Implementation and State Evaluation of Supplemental Educational Services. GAO-06-758. Washington, D.C.: August 4, 2006.

No Child Left Behind Act: Improvements Needed in Education’s Process for Tracking States’ Implementation of Key Provisions. GAO-04-734. Washington, D.C.: September 30, 2004.
Title I: Although Definitions of Administrative Expenditures Vary, Almost All School Districts Studied Spent Less Than 10 Percent on Administration. GAO-03-386. Washington, D.C.: April 7, 2003.

Disadvantaged Students: Fiscal Oversight of Title I Could Be Improved. GAO-03-377. Washington, D.C.: February 28, 2003.

Title I of the Elementary and Secondary Education Act (ESEA), as amended, is the largest federal education funding source for kindergarten through grade 12. In fiscal year 2010, Congress appropriated $14.5 billion for Title I grants to school districts to improve educational programs in schools with high concentrations of students from low-income families. ESEA includes accountability requirements for schools and districts that focus primarily on measuring academic outcomes rather than prescribing exactly how Title I funds are to be spent. ESEA, as amended, includes a mandate that requires GAO to determine how selected districts expend Title I funds. In response, GAO addressed (1) how selected school districts spent their Title I funds and (2) what federal mechanisms are in place to oversee how Title I funds are used and what is known about the extent of noncompliance with relevant requirements. To do this, GAO visited a nongeneralizable sample of 12 school districts in 4 states and analyzed their Title I expenditures for the 2008–2009 school year. GAO also reviewed federal and local audit findings for a wider range of states and districts. Districts were selected based on criteria in the mandate, including variation in size, student demographics, location, and economic conditions. GAO found that 12 selected districts in Louisiana, Ohio, Rhode Island, and Washington used Title I funds primarily for instructional purposes, consistent with findings from other research. Most selected districts focused Title I activities at the elementary level, where they expected the greatest improvement in academic achievement. Title I funds supported district initiatives to improve academic outcomes, such as reducing class sizes, extending class time, and coaching Title I teachers. The selected districts generally spent the majority of Title I funds on salaries and benefits, largely for instructional personnel. Districts with schools that failed to meet state adequate yearly progress goals for two or more consecutive years were required by law to reserve funds for various initiatives, such as transportation for public school choice, supplemental educational services, and professional development. In some districts, such set-asides, which do not flow directly to schools, accounted for sizable portions of funds, amounting in two districts to 28 percent of Title I revenue. Predictably, such districts spent more than other districts on purchased services, such as tutoring for students eligible for supplemental educational services. Title I recipients are subject to various oversight mechanisms, which provide some information on noncompliance with relevant spending requirements, but are not designed to provide estimates of the prevalence of noncompliance. The Department of Education (Education) has conducted state-level monitoring to assess states’ Title I program implementation. It has identified common issues, such as failure to ensure that districts properly calculate or reserve funds for specific purposes. To guard against fraud and abuse, Education’s Office of Inspector General (OIG) uses risk-based criteria, such as past audit findings, to select districts for financial audit.
In such districts, OIG has found instances of unallowable expenditures of Title I funds. Also, all states and districts that spend at least $500,000 in federal awards must file an annual audit that focuses on financial management and compliance with provisions of selected federal programs. Roughly 18 percent of districts that filed a fiscal year 2009 audit in which Title I compliance was reviewed had findings related to Title I, which most commonly dealt with unallowable costs and cost principles. However, only a subset of districts are audited for Title I compliance in any given year. When any type of oversight identifies noncompliance, school districts and states must identify and take corrective actions. Education also uses results of oversight and monitoring to target future monitoring efforts and to develop technical assistance and training to assist states and school districts in using their resources and flexibility appropriately. GAO is not making recommendations. Education provided technical comments on a draft of this report, which GAO incorporated as appropriate.
People are an agency’s most important organizational asset. An organization’s people define its character, affect its capacity to perform, and represent the knowledge base of the organization. As such, effective strategic human capital management approaches serve as the cornerstone of any serious change management initiative. They must also be at the center of efforts to transform the cultures of federal agencies so that they become less hierarchical, process-oriented, stovepiped, and inwardly focused; and more flat, results-oriented, integrated, and externally focused. In January 2001, GAO designated strategic human capital management as a governmentwide high-risk area. As GAO’s January 2001 High-Risk Series and Performance and Accountability Series reports make clear, serious human capital shortfalls are eroding the ability of many agencies, and threatening the ability of others, to economically, efficiently, and effectively perform their missions. Plainly, the major problem is not federal employees. Rather, the problem is the lack of a consistent strategic approach to marshaling, managing, and maintaining the human capital needed to maximize government performance and ensure its accountability. Our High-Risk report outlined four pervasive human capital challenges now facing the federal government: leadership, continuity, and succession planning; strategic human capital planning and organizational alignment; acquiring and developing staffs whose size, skills, and deployment meet agency needs; and creating results-oriented organizational cultures. These challenges are longstanding and will not be quickly or easily addressed. They require sustained and inspired efforts by many parties, including the President, department and agency leaders, the Office of Management and Budget, the Office of Personnel Management, Congress, and others. Comprehensive human capital legislative reforms will likely be needed, but agency leaders must not wait for them to happen. Much of the authority agency leaders need to manage human capital strategically is already available under current laws and regulations. Therefore, we believe the first step toward meeting the government’s human capital challenges is for agency leaders to identify and make use of all the appropriate administrative authorities available to them to manage their people for results. The use of these authorities often will need to be undertaken as part of and consistent with proven change management practices. The second step is for policymakers to pursue incremental legislative reforms to give agencies additional tools and flexibilities to hire, manage, and retain the human capital they need, particularly in critical occupations. The third step is for all interested parties to work together to identify the kinds of comprehensive legislative reforms in the human capital area that should be enacted over time. These reforms should place greater emphasis on skills, knowledge, and performance in connection with federal employment and compensation decisions, rather than the passage of time and rate of inflation, as is often the case today. The human capital model highlights the kinds of thinking that agencies should apply, as well as some of the steps they can take, to make progress in managing human capital strategically.
The model consists of the following: The Critical Success Factors Table identifies eight critical success factors for managing human capital strategically, which embody an approach to human capital management that is fact-based and focused on strategic results and that incorporates merit principles and other national goals. Each critical success factor is then presented in a table illustrating its development across three levels. Discussions in Pointers expand upon the eight critical success factors to highlight not just the needed actions but the kind of thinking that marks high-performing organizations’ approaches to managing people, through the presentation of concepts, steps for progressing, and agency case illustrations. Appendix A includes a list of related GAO products corresponding to the eight critical success factors. In developing the model, we built upon GAO’s Human Capital: A Self-Assessment Checklist for Agency Leaders (GAO/OCG-00-14G, September 2000). In addition, we considered lessons learned from GAO reports on public and private organizations that are viewed as leaders in strategic human capital management and managing for results. Additional GAO reports highlighting the progress and shortcomings of individual federal agencies in these two areas were also consulted (Appendix A). Our model was further informed by the findings of external reports on strategic human capital management and managing for results from academia, OPM, the Merit Systems Protection Board, the National Academy of Public Administration, and others. We also reflected upon our own experience in strategic human capital management and lessons learned from our use of flexibilities available to GAO for maximizing the performance and accountability of GAO employees. Since maximizing performance and assuring accountability are at the heart of our mission at GAO, we believe it is our responsibility to lead by example, especially in the human capital area. By managing GAO’s workforce strategically and focusing on results, we are helping to improve our performance and enhance accountability. By doing so, we also hope to demonstrate to other federal agencies that they can make similar improvements in the way they manage their people. Individuals who made key contributions to this product include Stephen Altman, Martin DeAlteris, Ellen V. Rubin, Shelby D. Stephan, and Edward Stephenson. The Critical Success Factors Table identifies eight critical success factors for managing human capital strategically. These factors are organized in pairs to correspond with the four governmentwide high-risk human capital challenges that our work has shown are undermining agency effectiveness (Figure 1). Taken together, the eight critical success factors embody an approach to human capital management that is fact-based and focused on program results and mission accomplishment and that incorporates merit principles and other national goals. When considering the human capital cornerstones and the critical success factors, it is important to remember that they are interrelated and mutually reinforcing. Any pairing or ordering of human capital issues may have a sound rationale behind it, but no arrangement should imply that human capital issues can be compartmentalized and dealt with in isolation from one another. All of the critical success factors reflect two principles that are central to the human capital idea: People are assets whose value can be enhanced through investment.
As with any investment, the goal is to maximize value while managing risk. An organization’s human capital approaches should be designed, implemented, and assessed by the standard of how well they help the organization achieve results and pursue its mission. For each of the eight critical success factors, the table describes three levels: Level 1, Level 2, and Level 3 (Figure 2). The descriptions portray the approach to managing people that can be expected of an organization at each level. To understand the progression from level to level, it is best to keep in mind the two central human capital ideas just discussed. An agency at Level 1 is unlikely to have effectively put these two principles into practice. At Level 2, an agency is clearly taking steps to apply them. An agency at Level 3 has made these principles an integral part of its approach to doing business and can see demonstrable results from having done so. Progressing to Level 3, which each agency should strive to accomplish, will take considerable time, effort, and resources on the part of agency leadership to successfully manage the required organizational change. The use of this model requires a long-term commitment to valuing human capital as a strategic asset.

Level 1: Agency leaders view people as costs to be cut rather than as assets to be valued. Management decisions involving the workforce are often made without considering how these decisions may affect mission accomplishment. Similarly, business decisions are often made without due consideration of the human capital needs they entail or the human capital approaches that may be needed for successful implementation.

Level 2: Agency leaders acknowledge the importance of human capital to mission accomplishment, and have informed managers at all levels of the roles they need to play in acquiring, developing, and retaining people to meet the agency’s programmatic needs. The agency is working to explicitly link its human capital approaches to intended program results.

Level 3: Agency leaders view people as an important enabler of agency performance, recognize the need for sustained commitment by the agency to strategically manage its human capital, and stimulate and support efforts to integrate human capital approaches with organizational goals. The agency’s human capital approaches are consistently developed, implemented, and evaluated by the standard of how well they support the agency’s efforts to achieve program results. Managers at all levels actively support these concepts and are prepared and held accountable for effectively managing people.

Level 1: Human capital management is considered a support function, separate from and generally subordinate to the agency’s core planning and business activities. The “personnel” or “human resource management” office is largely process-oriented and focused on ensuring agency compliance with merit system rules and regulations. Expectations for staff in these offices are limited to processing transactions and addressing “personnel issues” on a case-by-case basis.

Level 2: Human capital professionals have begun to focus on the agency’s business needs and their role in filling them. The human capital function is in transition “from rules to tools,” facilitating compliance with merit system principles and other national goals, and helping the agency more effectively meet its strategic and business goals. Human capital professionals are expected to be customer-oriented and to develop the expertise needed to be effective in their new roles.
Level 3: Human capital professionals partner with agency leaders and line managers in developing strategic and program plans. The human capital office provides effective human capital strategies to meet the agency's current and future programmatic needs and fulfill merit system principles and other national goals. Human capital professionals are prepared, expected, and empowered to provide a range of technical and consultative services to their internal customers; agency leaders and managers consistently recognize the key role of human capital professionals in helping the agency and its people effectively pursue their mission. The agency has streamlined personnel processes and effectively employs technology to meet customer needs.

Level 1: The agency has yet to fully recognize the link between its human capital approaches and organizational performance objectives. Existing human capital approaches have yet to be assessed in light of current and emerging agency needs. The agency changes or adopts human capital approaches without considering how well they support organizational goals and strategies, or how these approaches may be interrelated.

Level 2: The agency's human capital needs are considered during strategic and annual planning. Existing human capital approaches have been assessed for their alignment with current and emerging needs. New human capital initiatives are in design or implementation specifically to support programmatic goals. These initiatives are building toward a coherent, results-oriented human capital program.

Level 3: The agency's human capital approaches demonstrably support organizational performance objectives. The agency considers further human capital initiatives or refinements in light of both changing organizational needs and the demonstrated successes or shortcomings of its human capital efforts. The human capital needs of the organization and new initiatives or refinements to existing human capital approaches are reflected in strategic workforce planning documents.

Level 1: Decisionmakers lack critical information with which to create a profile of the workforce (e.g., skills mix, deployment, and demographic trends) or to evaluate the effectiveness of human capital approaches, partially due to inadequate data sources. Performance measures and goals for the agency's human capital programs, especially as they link to programmatic outcomes, have yet to be identified.

Level 2: The agency is working to ensure that information systems are in place to generate meaningful and reliable data across a range of human capital activities. Data gathered include workforce shape, competencies and skills mix, and demographic trends. The agency has profiled its workforce so that usable information is on hand with which to make decisions in such areas as acquiring, developing, and retaining talent. The agency has identified performance measures and goals for its human capital programs, with attention to establishing the link between these programs and agency results.

Level 3: Decisions involving human capital management and its link to agency results are routinely informed by complete, valid, and reliable data. Data gathered are kept current. Agency leaders use this information to manage risk by spotlighting areas for attention before crises develop and to identify opportunities for improving agency results. Performance measures for the agency's human capital programs have been distilled to a vital few, and are an integral part of the agency's strategic planning, performance measurement, and evaluation efforts.
Data on the agency's workforce profile, performance goals and measures for human capital approaches, and areas requiring agency attention are reflected in strategic workforce planning documents.

Level 1: Agency leaders approach human capital expenditures (e.g., professional development and knowledge management, recruiting programs, pay and benefits, performance incentives, and enabling technology) as costs that should be minimized rather than as investments that should be managed to maximize value while minimizing risk. Funding decisions may be ad hoc, without clearly defined objectives or adequate consideration of their implications for the workforce.

Level 2: Human capital expenditures are regarded as investments in people and in the agency's capacity to perform its mission. Investment strategies for acquiring, developing, and retaining staff are evaluated and developed in light of modern human capital management practices.

Level 3: Agency strategies for investing in human capital are fully integrated with needs identified through its strategic and annual planning. The goals and expectations for these investments are transparent and clearly defined, and their rationale is consistent across the range of human capital programs. The efficiency of the investments is continuously monitored and the effectiveness is periodically evaluated.

Level 1: Agency managers believe that meaningful improvements in human capital management are not feasible. The range of tools and flexibilities available to the agency under current laws and regulations has yet to be explored. In addition, the department or agency may have self-imposed constraints in place that are excessively process-oriented or based on obsolete perceptions of civil service laws, rules, or regulations.

Level 2: Standardization and by-the-book human capital management are yielding to flexible and innovative approaches. Managers have identified the tools and flexibilities available to them under current law and are using many of these to modernize their human capital approaches to help meet current and emerging needs. The agency is looking both within and outside itself for model principles and practices, and is pursuing opportunities to test new and more results-oriented approaches.

Level 3: The agency tailors its human capital strategies to meet its specific mission needs. As such, it is taking all appropriate administrative actions available to it under current laws, rules, and regulations. In addition, it is exploring opportunities to enhance its competitiveness as an employer and eliminate barriers to effective human capital management. If needed, this includes producing a compelling business case to support selected legislative initiatives.

Level 1: Managers and staff rigidly adhere to standardized procedures and traditional modes of thinking. Human capital management in the agency is driven by top-down decision-making; relations between management and employees and their representatives are frequently more adversarial than is necessary. Substantial time and resources are consumed by reacting to workplace disputes and long-standing sources of conflict. The agency's approach to equal opportunity is compliance-oriented and reactive.

Level 2: The agency is lessening its reliance on standardized approaches and encouraging program managers to innovate and take risks. Agency leaders are acknowledging the value of employee input and feedback to improve the workplace environment and focus on results; management and employee representatives stress communication and identify shared interests.
The agency works to build a diverse workforce and has declared "zero tolerance" of discrimination.

Level 3: Managers, teams, and employees at all levels are given the authority they need to accomplish programmatic goals; innovation and problem-solving are encouraged. In developing approaches to managing the workforce, agency leaders seek out the views of employees at all levels, and communication flows up and down the organization. Management and employee representatives work collaboratively to achieve organizational outcomes. The agency works to meet the needs of employees of all backgrounds, maintains "zero tolerance" of discrimination, strives actively to reduce the causes of workplace conflicts, and ensures that conflicts are addressed fairly and efficiently. The agency recognizes and demonstrates that an inclusive workforce is a competitive advantage for achieving results.

Level 1: The organizational culture is hierarchical, process-oriented, stovepiped, and inwardly focused. Performance expectations for managers and staff are blurred by an unclear organizational mission and a lack of clearly defined and consistently communicated core values.

Level 2: The agency has created the basis for employee expectations by defining and communicating its mission, core values, strategic goals and objectives, and business strategies. Expectations for managers are shifting from complying with detailed rules and procedures to accomplishing program goals. The agency's performance management and incentive systems are being designed and tested to make employees aware of their roles and responsibilities in helping the agency achieve its performance goals. Efforts are under way to enhance internal cooperation.

Level 3: The organizational culture is results-oriented and externally focused. Individual performance management is fully integrated into the agency's organizational goals and is used as a basis for managing the organization. Managers are held accountable through performance management and rewards systems for achieving strategic goals and objectives, creating innovation, and supporting continuous improvement. Clearly defined, transparent, and consistently communicated performance expectations addressing a range of results/customer/employee issues are in place to rate, reward, and hold accountable employees and teams at all levels.

Maximizing the value of human capital is a function not just of specific actions but of cultural transformation. This section expands the discussion of the eight critical success factors, presenting Concepts and Steps for Progressing that agencies can pursue to help maximize the value of their human capital and highlighting the kind of thinking and action that marks high-performing organizations’ approaches to managing people. Accompanying the discussion of each critical success factor is a case illustration involving a federal agency that has taken positive steps toward addressing one of its human capital challenges. We have also noted, where appropriate, the positive consequences that have resulted from such efforts. The fact that an organization is profiled for a particular critical success factor is not meant to imply complete success or lack of success in other dimensions. Furthermore, the efforts highlighted in the case illustrations are not intended to exemplify all the potential steps an agency may take to make progress under each critical success factor.
Concepts: An effective organization includes a senior leadership team committed to developing more effective ways of doing business, accomplishing results, and investing in human capital. Perhaps the most important element of successful management reform is the demonstrated commitment of top leaders to change. Political leaders as well as senior career executives demonstrate this commitment by personally developing and directing reform, driving continuous improvement, and characterizing the agency’s mission in reform initiatives. Previous GAO reports and testimonies have underscored the importance of having agency leaders and managers with the skills and commitment to drive cultural change that focuses on results. Agency leaders, career and political alike, should be held accountable and should hold others accountable for the ongoing monitoring and refinement of human capital approaches to ensure continuous effectiveness, constant improvement, and increased mission accomplishment within the agency. Moreover, a key factor in the success of any specific strategic human capital initiative is the sustained attention of senior leaders and managers at all levels of the agency to valuing and investing in their employees. This leadership is critical for an agency to overcome its natural resistance to change, to marshal the resources needed in many cases to improve management, to build and maintain an organizationwide commitment to improving its way of doing business, and to create the conditions for effectively improving human capital approaches. Steps for Progressing: We have noted that successful organizations know the importance of fostering a committed leadership team and providing reasonable continuity through succession planning and executive development. Career executives can provide the long-term commitment and focus needed for the agency to achieve strategic human capital management. Two mechanisms for fostering a committed leadership team are an executive development program and comprehensive succession planning which are linked to agency goals and objectives. The executive development program can include planned developmental opportunities, learning experiences, and feedback for candidates. Support for and use of government and nongovernment executive development programs can help agency leaders in establishing an active executive development program. To hold managers accountable for human capital management, agency leaders can make an effort to select managers who have the ability to manage human capital and can see the connection between that responsibility and the organization’s ability to achieve its long-term goals. Performance appraisal feedback for those managers selected should include a review of human capital management competencies, technical skills, and the accomplishment of program results. Furthermore, agencies can modify incentive systems to emphasize the consideration of long-term consequences of human capital management decisions in addition to immediate results. Agency leaders have other opportunities for displaying their commitment to human capital. Continuous learning efforts, employee-friendly workplace policies, competency-based performance appraisal systems, and retention and reward programs are all ways in which agencies can value and invest in their human capital. The sustained provision of resources for such programs can show employees and potential employees the commitment agency leaders have to strategic human capital management. 
To demonstrate its commitment to human capital, agency leadership at the U.S. Mint (Mint) has supported several initiatives involving the use of human capital flexibilities. According to officials at the Mint, the full support of agency leadership was attained for these initiatives, which were pursued for their strategic value and alignment with business goals. The Mint has pursued approaches in the areas of recruiting, hiring, developing, and retaining talent, increasing the flexibility of the workforce, and respecting and rewarding employees. The U.S. Mint is facing considerable challenges to recruit and retain a high-quality workforce and, in response, has begun to explore ways to take advantage of all the human capital flexibilities currently available under existing laws and regulations. Recognizing this challenge, the Office of the Chief Financial Officer obtained the full support of agency leadership to assign two full-time employees and provide budget resources for a Human Resources Flexibilities Team that was formed to do a two-phase study concerning human capital flexibilities. Phase one included an extensive review of all human capital flexibilities currently available to the Mint under existing laws and regulations. Phase two included an analysis of the Mint's current use of flexibilities and the development of recommendations to agency leadership for increasing their effectiveness as recruitment and retention tools, prioritized against the Mint's strategic goals. Many new programs have been initiated or are planned as a result of this effort. For example, the Mint has implemented an Information Technology Pilot Study to facilitate the hiring process, which has resulted in the Mint hiring six new IT employees in an average time span of 15 days. Still in development is a Single Agency Qualifications Standard with the purpose of collapsing 13 occupations into one occupation to provide management and employees with the flexibility to move from one job to another. Also in development is an Occupational Training Agreements program in conjunction with the Competency Based Careers initiative. Under the Mint's competency-based job description, an employee can sign an occupational training agreement stating that he or she will acquire new skills that will allow for promotion without the one-year time-in-grade requirement. According to agency officials, managers, union representatives, human resources staff, and employees were informed of and involved in the development of these new approaches. Concepts: The effective pursuit of organizational alignment and strategic human capital management requires the integration of human capital approaches with strategies for accomplishing organizational missions and program goals. Such an integration allows the agency to ensure that its core processes efficiently and effectively support mission-related outcomes. This new strategic approach, or redirected focus of the human capital function, centers on the contributions that it can make to the long-term accomplishment of the agency's mission. The new focus will also require an expansion of the role of human capital professionals from largely paperwork processors to functioning as advisors to and partners with senior leadership and managers as well as technical experts who ensure that merit principles and other national goals are upheld.
With these newly skilled human capital professionals as trusted members of the management team, the agency can be provided with the knowledge of strategic human capital management that will allow it to incorporate such principles into the overall strategic and program plans of the organization. Steps for Progressing: High-performing organizations we examined treat strategic human capital management as fundamental to effective overall management, evidenced through the integration of the human capital function into management teams. The human capital office in such organizations provides effective human capital strategies to meet current and future programmatic needs and works to ensure that merit systems principles and other national goals are fulfilled. The role of human capital professionals should focus on developing, implementing, and continually assessing human capital policies and practices that will help the agency achieve its mission; leading or assisting in the agency's workforce planning efforts; participating as partners with line managers; reaching out to other organizational functions and components through facilitation, coordination, and counseling; and providing integrated mission support. Human capital professionals functioning in this role can serve as an important source of information for strategic workforce planning, continuous learning, and knowledge management initiatives. Moreover, they can provide agency leaders with an interpretation of agency data in areas such as retirement eligibility and projection numbers, retention rates, or skills assessments that can allow agency leaders to more effectively pursue strategic human capital management and organizational alignment. High-performing organizations also recognize the need for leveraging the internal human capital function with external expertise, such as consultants, professional associations, and other organizations, as needed. For human capital professionals to begin acting in this new capacity, agency leaders must ensure that they have the competencies and experience to effectively take on the expected role. One tool available to agencies for identifying the appropriate competencies is the International Personnel Management Association's Human Resource Competency Model. The new role of the human capital function will require agencies to recruit new professionals and train existing professionals in the competencies to help align human capital management with the specific needs and circumstances of each agency. It will also require agencies to constantly reevaluate their internal procedures so that fewer staff resources are required for processing transactions and more resources can be dedicated to meeting the strategic needs of the organization. Streamlining personnel transactions in conjunction with the greater use of technology to automate paper-based personnel processes is critical to making this shift. Because of the varying needs of managers at different program and regional offices, the Nuclear Regulatory Commission (NRC) refers to its personnel specialists as account representatives in an attempt to integrate the human capital function into management teams. Approximately 5 years ago, the Office of Human Resources within NRC attained its goal of providing full-service operations by creating one-stop shopping for its clients.
In an effort to integrate human capital throughout the organization so that it is not a stand-alone function and it is incorporated during the budget process, NRC refers to its personnel specialists as account representatives. Teams of account representatives are assigned to specific program/regional offices within NRC to act in a consultant role for managers. This provides managers in the field with an on-site team of HR account representatives whom they consult with on the full range of HR management issues, services, and operations. The account representatives provide information and insights on such matters as organizational structure and position management, staffing and recruiting strategies, performance management, awards and recognition, and labor and employee relations issues to managers at the various program/regional offices. NRC reported that for the first time, internal fiscal year 2003 budget documents reflected the agencywide human capital management component of agency programs and resources. NRC also reported that this approach, implemented as part of the agency's Planning, Budgeting, and Performance Management process, established an agencywide perspective for human capital management and facilitated an integrated and coordinated approach to human capital planning and budgeting. Concepts: Effective organizations integrate human capital approaches as strategies for accomplishing their mission and programmatic goals and results. The effectiveness of this integration and alignment is judged by how well it helps achieve organizational goals. Furthermore, high-performing organizations stay alert to emerging mission demands and human capital challenges and remain open to reevaluating their human capital practices in light of their demonstrated successes or failures in achieving the organization's strategic objectives. Steps for Progressing: Organizations can evaluate the extent to which human capital approaches support the accomplishment of programmatic goals through the use of workforce planning. Workforce planning efforts, including succession planning, linked to strategic goals and objectives, can enable an agency to remain aware of and be prepared for its current and future needs as an organization, such as the size of the workforce; its deployment across the organization; and the knowledge, skills, and abilities needed for the agency to pursue its mission. This planning will entail the collection of valid and reliable data on such indicators as distribution of employee skills and competencies, attrition rates, or projected retirement rates and retirement eligibility by occupation and organizational unit. Agencies can use an organizationwide knowledge and skills inventory and industry benchmarks to identify current problems in their workforces and plan for future improvements. To begin assessing how well existing human capital approaches support their missions, goals, and other organizational needs, agencies can use GAO's human capital framework, Human Capital: A Self-Assessment Checklist for Agency Leaders (GAO/OCG-00-14G). This assessment tool identifies a number of human capital elements and underlying values common to high-performing organizations. Furthermore, the planning requirements of the Government Performance and Results Act (GPRA) provide a useful framework for agencies to integrate their human capital strategies with their strategic and programmatic planning.
Other tools available, including OPM's five-step workforce planning model, may provide additional guidance. The appropriate geographic and organizational deployment of employees can further support organizational goals and strategies. Effective deployment strategies can enable an organization to have the right people, with the right skills, doing the right jobs, in the right place, at the right time by making flexible use of its internal workforce and appropriately using contractors. The use of contractors will require decisions to be made, based upon strategic planning efforts, about what types of work are best done by the agency or contracted out. While reviewing outsourcing options, it is also important to consider whether or not the agency has the expertise available to manage the cost and quality of contractor activities. In response to the Restructuring and Reform Act of 1998 (Restructuring Act), the Internal Revenue Service (IRS) has taken several steps toward modernizing its organizational structure and its performance management system. The Restructuring Act led IRS to adopt a new mission statement that places greater importance on serving the public and meeting taxpayer needs, developing and implementing a reorganization plan, and enhancing taxpayers' rights. In responding to the requirements of the Restructuring Act, IRS has begun to align human capital approaches to assist in accomplishing its strategic goal of improved customer service. In the first 3 years since the implementation of the Restructuring Act, IRS has developed an integrated modernization strategy and implemented a new organizational structure with four customer-focused operating divisions to meet the needs of the taxpayer segments it serves and reflect the agency's strategic plan. The four operating divisions that have resulted from the modernization strategy include: large and mid-size business, tax-exempt and government entities, small business and self-employed, and wage and investment. This new direction, reflected in the strategic plan, outlines three strategic goals and corresponding balanced measures, including a strategic goal of providing service to each taxpayer that is measured through customer satisfaction data. To achieve this goal, IRS has recently implemented a customer service employee-training program that offers employees specialized training geared toward the taxpayer segment they serve. In addition, IRS's new performance management plan calls for each operating division to have complementary goals, objectives, and measures for front-line managers to develop plans identifying the actions they need to take to support operational objectives. To assist in this effort, IRS implemented a realigned performance evaluation system for executives, managers, and supervisors in February 2000. Concepts: A fact-based, performance-oriented approach to human capital management is crucial for maximizing the value of human capital as well as managing risk. As discussed in the previous section, high-performing organizations use data to determine key performance objectives and goals which enable them to evaluate the success of their human capital approaches. These organizations also identify their current and future human capital needs, including the appropriate number of employees, the key competencies and skills mix for mission accomplishment, and the appropriate deployment of staff across the organization and then create strategies for identifying and filling gaps.
Valid and reliable data are critical to assessing an agency's workforce requirements and heighten an agency's ability to manage risk by allowing managers to spotlight areas for attention before crises develop and identify opportunities for enhancing agency results. Although the cost of collecting data may be significant, the costs of making decisions without the necessary information can be equally significant. Steps for Progressing: Collecting and analyzing data is a fundamental building block for measuring the effectiveness of human capital approaches in support of the mission and goals of an agency. For example, agencies may have data on the number of people receiving training and money spent on training; however, to measure the real impact of training, agencies should develop additional indicators to determine the relationship of training efforts to the accomplishment of agency goals and objectives. This effort should include developing a knowledge, skills, and competencies inventory for employees and updating it regularly to determine if there is an increase in the inventory of skills for which employees are being trained. Organizations should also consider collecting and using performance data to identify gaps in performance, skills, competencies, workforce shape, and other areas. Just as human capital approaches are aligned with strategies for accomplishing programmatic goals, so should performance measures of human capital approaches be aligned with performance measures of programmatic efforts. The types of data that can inform workforce planning efforts include, but are not limited to: size and shape of the workforce, skills inventory, attrition rates, projected retirement rates and eligibility, deployment of temporary employee/contract workers, dispersion of performance appraisal ratings, average period to fill vacancies, data on the use of incentives, employee feedback surveys, feedback from exit interviews, grievances, or acceptance rates of job candidates. The Air Force Materiel Command (AFMC) has taken steps toward improving the collection and use of its human capital data to manage the risk it faces in light of retirement eligibility projections and the potential loss of institutional memory by developing plans for reshaping its workforce to meet its future business needs. The mission of AFMC, the largest employer of civilians in the Air Force, is to develop, deliver, and sustain the best products for the Air Force. In October 1998, AFMC began a workforce study, "Sustaining the Sword," so that the agency's human capital approaches for civilian, military, and contractor employees could be tailored to meet future business needs, such as depot maintenance and information management. The workforce study was conducted in two phases. AFMC reports that Phase I provided an overarching view of the current and projected 2005 workforce and the potential impact of a prolonged hiring freeze and a workforce nearing retirement. Phase II was reportedly a more detailed analysis, focusing on workforce data collected from AFMC centers, at position-level detail. Results were analyzed in support of workforce planning for the purpose of achieving future business needs. The study led to the development of metrics for fact-based personnel management to collect data for demonstrating the successes and shortcomings of AFMC human capital approaches.
Such metrics include retention and attrition of new IT recruits, progress made in fulfilling individual professional development plans, and exit survey data, to name a few. AFMC reported that these data and the results of initial workforce shaping activities have led to a more informed understanding of current workforce gaps and those that may surface as large numbers of employees become eligible to retire. In light of this detailed effort, AFMC's workforce study was designated as one of a number of best practices by the Office of the Secretary of Defense that should be benchmarked for acquisition workforce planning across the department. Concepts: Agencies that embrace the principles of human capital management realize that as the value of their people increases, so does the performance capacity of the organization. They also realize that investing in and enhancing the value of employees is a win-win goal for employers and employees alike. In making this investment, leaders should provide resources and incentives that support new ways of working to encourage employees to attain agency goals and objectives and invest in tools to maximize the efficiency and effectiveness of administrative processes. In addition to investing in individual employees, new human capital approaches need sufficient resources for planning, implementation, and evaluation. Resources may include funds, personnel, staff hours, information technology, or facilities and should be provided for in the agency’s budget as appropriate. When considering these opportunities, agencies must also consider the competing demands confronting them, the limited resources available, and how those demands and resources require careful balancing and prioritization. Agencies should consider making targeted investments in specific human capital approaches in light of three fundamental ideas: First, the approaches should help the agency attract, develop, retain, and deploy the best talent and then elicit the best performance for mission accomplishment. Second, the approaches should have clearly defined, well-documented, transparent, and consistently applied criteria for making these investments. Third, decisions regarding these investments should be based largely on the expected improvement in agency results. Steps for Progressing: In confronting the human capital challenges posed by downsizing and the hiring freezes of the 1990s, agencies have the opportunity to develop an environment that supports continuous learning and invest in training and professional development programs, recruitment and retention strategies, or performance incentives. Agency leaders can show their commitment to strategic human capital management by investing in professional development and mentoring programs that can also assist in meeting specific performance needs. These programs can include opportunities for a combination of formal and on-the-job training, individual development plans, rotational assignments, periodic consultations with senior managers, periodic formal assessments, and mentoring relationships with other employees. One critical area on which to focus better training investments is contract management, where agencies must have enough skilled staff on board to oversee the quality, cost, and timeliness of the services delivered by third parties. 
In addition to investing in training and professional development, agencies have the authority to offer recruiting bonuses, retention allowances, and skill-based pay to attract and retain the critical skills needed for mission accomplishment. Investing in performance incentives can also be particularly important in steering the workforce. The success of the incentives can be measured through the use of balanced measures that are results-oriented and client-based, encompass employee feedback, and reveal the multiple dimensions of performance. Widespread shortfalls in the human capital area have contributed to demonstrable shortfalls in agency and program performance in the information technology (IT) area governmentwide. To address its challenge, the State Department is making targeted investments in its IT workforce to ensure it has the critical skills and competencies on hand for mission accomplishment through the use of retention and skill development strategies. In light of the increased demand and competition for information technology workers to perform mission-critical tasks, the State Department is investing in learning incentives as a tool to attract and retain IT professionals. The State Department uses professional qualification incentives for IT employees in the Foreign Service and retention allowances for civil service IT employees. Incentives and allowances that range from 5 to 15 percent of base pay are available for those who obtain job-related degrees and certifications; State also provides the majority of funding for the cost of required classes. This program has helped State to increase the skills base of its information technology workforce, attract new IT employees, and retain current employees. Concepts: Agencies need not wait for comprehensive civil service reform to modernize their human capital approaches. Under current laws, rules, and regulations, agencies have the flexibility to offer competitive incentives to attract employees with critical skills; to create the kinds of performance incentives and training programs that motivate and empower employees; and to build constructive labor-management relationships that are based on common interests and the public trust. Agencies should develop a tailored approach to their use of available flexibilities by taking advantage of those flexibilities that are appropriate for their particular organization and its mission accomplishment. Steps for Progressing: Successful organizations develop and implement human capital approaches based on a data-driven assessment of the organization's specific needs and capabilities. As discussed in an earlier section, valid and reliable data are the starting point for such assessments. With these data in hand, leading organizations use benchmarking to compare their processes with those of public and private organizations that are considered the best in their fields. Agencies informed by best practices are more likely to develop their own innovative practices. The International Personnel Management Association and the National Academy of Public Administration, among others, have extensive case studies and examples of leading practices that may provide useful lessons for agencies considering employing flexibilities or developing new human capital approaches. As agencies pinpoint human capital approaches that can help improve performance, they can explore the range of authorities available to them.
In that regard, OPM has published Human Resource Flexibilities and Authorities in the Federal Government to assist agencies in identifying available flexibilities. Moreover, agencies may develop demonstration projects through OPM, temporarily waiving selected federal civil service laws, rules, and regulations for the purposes of developing, testing, and evaluating new human capital approaches that may have broader applicability than existing ones. Demonstration projects can focus on recruiting and hiring procedures, classification and compensation systems, incentive systems, or on involving employees and labor organizations in personnel decisions. OPM has recently expressed a willingness to conduct more demonstration projects. Educating agency personnel about the availability and proper use of human capital flexibilities, such as student loan repayment and childcare services, is an important step toward tailoring these approaches to meet identified needs. For example, agencies can explore opportunities to offer recruitment bonuses and retention allowances, skill-based pay, and flex time/flexiplace schedules that will enhance their employees’ ability to balance their work and personal lives. Educating agency managers about available workforce restructuring tools, such as targeted early retirements or targeted buyout authorities, can also be an important step to realigning an agency’s workforce in light of mission needs and/or to correct skills imbalances. When determining which approaches are appropriate to use, agencies should seek stakeholder input from human capital professionals, agency managers, and employees and employee unions. As managers are provided authorities to manage human capital, they need to be held accountable for using these authorities in a manner that is fair for employees across the agency. A number of agencies across the federal government have made a business case for additional human capital flexibilities. Lessons learned during a demonstration project at the Department of Agriculture are currently under consideration for broader applicability in the area of hiring flexibilities. Beginning in 1990, the Department of Agriculture conducted a demonstration project in the Forest Service and the Agriculture Research Service to test a streamlined job application process that allows interviewing officials to consider all "quality" applicants who meet minimum qualification standards, as opposed to considering only the top three candidates. The first evaluation of the project showed that both the number of candidates per job announcement and the hiring speed increased; the results led Congress to make this authority permanent for these two divisions of Agriculture in October 1998. Concepts: As in many cases in the human capital area, how you do something is as important as what you do. The involvement of employees both directly and through employee organizations will be crucial to success. Involving employees in the planning process helps to develop agency goals and objectives that incorporate insight about operations from a front-line perspective. Including employees can also serve to increase employees’ understanding and acceptance of organizational goals and objectives and improve motivation and morale. In addition to considering employee input, leading organizations we studied create a set of mission-related program guidelines within which managers operate, and give their managers extensive authority to pursue organizational goals. 
They seek to ensure that internal processes provide managers with the authority and flexibility they need to contribute to the organization's mission. Allowing managers to bring their judgment to bear in meeting their responsibilities, rather than having them merely comply with rigid rules and standards, can lead to more effective operations. Managers may also consider delegating authorities to front-line employees who are closer to citizens, drawing from the strengths of employees at all levels and of all backgrounds. Providing managers the discretion to delegate responsibilities to their employees can enable employees to look at customer needs in an integrated way and can streamline processes. As program responsibilities are delegated to employees, agencies should make reasonable efforts to ensure that conflicts of interest are minimized so that the integrity of the program is maintained. Organizations that promote and achieve a diverse workplace can attract and retain high-quality employees and increase customer loyalty. For public organizations, this also translates into effective delivery of essential services to communities with diverse needs. Leading organizations understand that they must support their employees in learning how to effectively interact with and manage people in a diverse workplace. They recognize the impact that diverse clients will have upon the success or failure of an organization. In an effort to foster an environment that is responsive to the needs of diverse groups of employees, these organizations identify opportunities to train managers in techniques that create a work environment that maximizes the ability of all employees to fully contribute to the organization's mission. They also identify opportunities for resolving workplace disputes fairly and efficiently and work to ensure that they create a workplace free of discrimination and in which employees do not fear or experience retaliation or reprisal for reporting waste, fraud, and abuse or for engaging in activities protected by antidiscrimination statutes. Steps for Progressing: Our work has shown that leading organizations commonly sought their employees' input on a periodic basis and explicitly addressed and used that input to adjust their human capital approaches. The organizations collected feedback by using employee satisfaction surveys, convening focus groups or employee advisory councils, and/or including employees on task forces. Another way managers can obtain feedback is through the use of upward feedback, which one organization we studied used to assess both the individual manager's performance and team performance. A high-performing agency also maintains an inclusive workplace in which perceptions of unfairness are minimized and workplace disputes are resolved by fair and efficient means. In an effort to find a more effective method for resolving workplace disputes, federal agencies have been expanding their alternative dispute resolution programs. One approach used to deliver alternative dispute resolution services has been the creation of ombudsmen offices to provide an informal option to deal pragmatically with conflicts and other organizational climate issues. An ombudsman not only works to resolve disputes, but is also in a position to alert management to systemic problems and thereby help correct organizationwide issues and develop strategies for preventing and managing conflict.
To complement investments in alternative dispute resolution programs, organizations we studied also invested in training efforts aimed at preventing disputes and equipping employees and managers with skills to resolve disputes themselves. Organizations we studied reported that they involved unions and incorporated their input into proposals before finalizing decisions. Engaging employee unions in major changes such as redesigning work processes, changing work rules, or developing new job descriptions can help achieve consensus on the planned changes, avoid misunderstandings, speed implementation, and more expeditiously resolve problems that occur. Overall, our work suggests that leading organizations take the following steps to foster an environment that empowers and involves employees (many of these points are discussed in further detail under other Critical Success Factors): demonstrate top leadership commitment to management reform; engage employee unions; train employees to enhance their knowledge, skills, and abilities; use employee teams to help accomplish agency missions; involve employees in planning and sharing performance information; and delegate authority to front-line employees. As suggested above, agencies are taking different steps to create cultures that empower employees, are inclusive of different work styles, and resolve workplace disputes effectively. The National Institutes of Health (NIH) Office of the Ombudsman is an integral part of the agency's strategy to create a fair, equitable, and nondiscriminatory workplace. The Office of the Ombudsman at NIH has served the NIH community since 1999. In addition to helping resolve commonly recognized workforce conflicts like discrimination, the ombudsman office serves as a complement to the agency's formal dispute resolution process on issues not related to discrimination, such as disputes over credit for authorship and intellectual property rights arising from scientific research. The office assists in conflict intervention, conflict prevention, and internal education on ways to manage individual and group conflict. NIH officials said that the office has helped to resolve and prevent disputes to such a degree that the two offices that handle formal workplace disputes, the Office of Intramural Research and the Office of Equal Opportunity, have seen a drop in their caseloads since the office was established. Concepts: Shifting the orientation of individual performance expectations and accountability systems from an adherence to process and the completion of activities to a greater focus on contributions to results will require a cultural transformation in most federal agencies. One way to embed a results-orientation is to align individual employee performance expectations with agency goals so that individuals understand the connection between their daily activities and their organization's success. High-performing organizations have recognized that a key element of a fully successful performance management system is to create a "line of sight" that shows how individual responsibilities can contribute to organizational goals. As a first step, these organizations align their top leadership's performance expectations with organizational goals and then cascade performance expectations to lower organizational levels. Steps for Progressing: At the most senior level, one way to encourage accountability within an organization is through the use of executive performance agreements.
Our work has shown that agencies have benefited from their use of results-oriented performance agreements for political and senior career executives. Although each agency developed and implemented performance agreements that reflected its specific organizational priorities, structures, and cultures, the performance agreements shared the following characteristics: they strengthened alignment of results-oriented goals with daily operations; fostered collaboration across organizational boundaries; enhanced opportunities to discuss and routinely use performance information to make program improvements; provided a results-oriented basis for individual accountability; and maintained continuity of program goals during leadership transitions. Governmentwide, agencies are to place increased emphasis on holding senior executives accountable for organizational goals. OPM amended regulations that change the way agencies evaluate the members of the Senior Executive Service (SES). While agencies will need to tailor their performance management systems to their unique organizational requirements and climates, they nonetheless are to hold executives accountable for results; appraise executive performance on those results balanced against other dimensions, including customer satisfaction and employee perspective; and use those results as the basis for performance awards and other personnel decisions. Agencies were to implement the new policies for the SES appraisal cycles that began in 2001. High-performing organizations design and implement performance management systems that further cascade accountability for results to managers and front-line employees. These systems define individual accountability by setting expectations so staff understand how their daily activities contribute to results-oriented programmatic goals. A well-managed system facilitates communication during the year so that staff receive constructive feedback about organizational and individual performance. At the end of the year, appraisal systems provide ratings and feedback that meaningfully differentiate among performers. To promote teamwork and enhance internal cooperation, organizations we studied encourage the use of cross-functional or matrixed teams for achieving strategic goals and objectives and commonly attempt to develop pay and incentive programs to foster such efforts. High-performing organizations balance their pay and incentive programs to encourage both individual and team-based contributions to achieving results. One tool available to agencies for aligning individual employee expectations with agency goals is the executive performance agreement. The Veterans Health Administration (VHA) has used performance agreements between career executives and the Under Secretary for Health since 1996. For staff at lower levels, agreements are being cascaded to varying extents. VHA serves the medical needs of veterans by providing primary and specialized care at hundreds of service delivery locations that are grouped into Veterans Integrated Service Networks (VISN), designed to coordinate the activities of hospitals, clinics, nursing homes, and other facilities within a given geographic area. The head of each VISN, the director, is responsible for fulfilling an individual annual performance agreement. The performance agreement consists of three parts: 1. "core competencies" that define the management behaviors directors are expected to exhibit, such as interpersonal effectiveness and technical competency; 2.
priority areas for VHA as an organization, such as patient safety; and 3. health care performance goals that gauge each VISN's progress towards meeting VHA's mission with specific targets that establish achievement levels necessary to receive a performance rating of "fully successful" or "exceptional." VHA has a standardized approach to monitoring VISN directors' progress during the year that consists of formal reporting of progress towards the health care-related goals and areas of special interest included in their agreements, which is then followed by meetings to discuss performance. To evaluate the performance of VISN directors at the end of each fiscal year, VHA uses both quantitative and qualitative performance information to contribute to its judgment when making evaluation decisions. Although VHA has not evaluated the discrete contributions performance agreements have made to its performance, VHA indicated that including corresponding goals in the performance agreements of VISN directors contributed to improvement in those key organizational goals.

Managing Human Capital in the Government Workplace, delivered by the Honorable David M. Walker, Comptroller General of the United States, at the JFK School of Government, Harvard University, on November 9, 2001.
Human Capital: Taking Steps to Meet Current and Emerging Human Capital Challenges. GAO-01-965T. Washington, D.C.: July 17, 2001.
The Performance Conference: Managing for Results, delivered by The Honorable David M. Walker, Comptroller General of the United States, sponsored by the National Academy of Public Administration (NAPA), on June 12, 2001.
Managing For Results: Federal Managers' Views on Key Management Issues Vary Widely Across Agencies. GAO-01-592. Washington, D.C.: May 25, 2001.
Government Challenges in the 21st Century, delivered by The Honorable David M. Walker, Comptroller General of the United States, at the National Press Club, Washington, D.C., on April 23, 2001.
Human Capital and Knowledge Management: Connecting People to Information, delivered by The Honorable David M. Walker, Comptroller General of the United States, April 12, 2001.
Major Management Challenges and Program Risks: A Governmentwide Perspective. GAO-01-241. Washington, D.C.: January 2001.
High Risk Series: An Update. GAO-01-263. Washington, D.C.: January 2001.
High Risks and Major Challenges, delivered by The Honorable David M. Walker, Comptroller General of the United States, at the Association of Government Accountants Federal Leadership Conference, on January 25, 2001.
GAO's 2001 Performance and Accountability and High-Risk Series Press Briefing Talking Points, presented by Comptroller General David M. Walker, Washington, D.C., on January 17, 2001.
Human Capital: A Self-Assessment Checklist for Agency Leaders. GAO/OCG-00-14G. Washington, D.C.: September 2000.
Human Capital: Managing Human Capital in the 21st Century. GAO/T-GGD-00-77. Washington, D.C.: March 9, 2000.
Human Capital: Key Principles From Nine Private Sector Organizations. GAO/GGD-00-28. Washington, D.C.: January 31, 2000.
Management Reform: Elements of Successful Improvement Initiatives. GAO/T-GGD-00-26. Washington, D.C.: October 15, 1999.
The Excepted Service: A Research Profile. GAO/GGD-97-72. Washington, D.C.: May 1997.
Executive Guide: Effectively Implementing the Government Performance and Results Act. GAO/GGD-96-118. Washington, D.C.: June 1996.
Transforming The Civil Service: Building The Workforce Of The Future. GAO/GGD-96-35. Washington, D.C.: December 20, 1995.
Human Capital: Attracting and Retaining a High-Quality Information Technology Workforce. GAO-02-113T. Washington, D.C.: October 4, 2001.
Small Business Administration: Steps Taken to Better Manage Its Human Capital, but More Needs to Be Done. GAO/T-GGD/AIMD-00-256. Washington, D.C.: July 20, 2000.
Senior Executive Service: Retirement Trends Underscore the Importance of Succession Planning. GAO/GGD-00-113BR. Washington, D.C.: May 12, 2000.
Medicare: 21st Century Challenges Prompt Fresh Thinking About Program's Administrative Structure. GAO/T-HEHS-00-108. Washington, D.C.: May 4, 2000.
Human Capital: Using Incentives to Motivate and Reward High Performance. GAO/T-GGD-00-118. Washington, D.C.: May 2, 2000.
Human Capital: Strategic Approach Should Guide DOD Civilian Workforce Management. GAO/T-GGD/NSIAD-00-120. Washington, D.C.: March 9, 2000.
Management Reform: Using the Results Act and Quality Management to Improve Federal Performance. GAO/T-GGD-99-151. Washington, D.C.: July 29, 1999.
Management Reform: Agencies' Initial Efforts to Restructure Personnel Operations. GAO/GGD-98-93. Washington, D.C.: July 13, 1998.
Small Business Administration: Current Structure Presents Challenges for Service Delivery. GAO-02-17. Washington, D.C.: October 26, 2001.
Human Capital: Attracting and Retaining a High-Quality Information Technology Workforce. GAO-02-113T. Washington, D.C.: October 4, 2001.
Single-Family Housing: Better Strategic Human Capital Management Needed at HUD's Homeownership Centers. GAO-01-590. Washington, D.C.: July 26, 2001.
Human Capital: Implementing an Effective Workforce Strategy Would Help EPA to Achieve Its Strategic Goals. GAO-01-812. Washington, D.C.: July 2001.
Managing For Results: Human Capital Management Discussions in Fiscal Year 2001 Performance Plans. GAO-01-236. Washington, D.C.: April 24, 2001.
IRS Telephone Assistance: Opportunities to Improve Human Capital Management. GAO-01-144. Washington, D.C.: January 30, 2001.
Managing For Results: Emerging Benefits From Selected Agencies' Use of Performance Agreements. GAO-01-115. Washington, D.C.: October 30, 2000.
Customer Service: Human Capital Management at Selected Public and Private Call Centers. GAO/GGD-00-161. Washington, D.C.: August 22, 2000.
Human Capital: Design, Implementation, and Evaluation of Training at Selected Agencies. GAO/T-GGD-00-131. Washington, D.C.: May 18, 2000.
Tax Administration: IRS' Implementation of the Restructuring Act's Personnel Flexibility Provisions. GAO/GGD-00-81. Washington, D.C.: April 28, 2000.
Human Capital: Strategic Approach Should Guide DOD Civilian Workforce Management. GAO/T-GGD/NSIAD-00-120. Washington, D.C.: March 9, 2000.
SSA Customer Service: Broad Service Delivery Plan Needed to Address Future Challenges. GAO/T-HEHS/AIMD-00-75. Washington, D.C.: February 10, 2000.
Federal Workforce: Payroll and Human Capital Changes During Downsizing. GAO/GGD-99-57. Washington, D.C.: August 13, 1999.
Management Reform: Using the Results Act and Quality Management to Improve Federal Performance. GAO/T-GGD-99-151. Washington, D.C.: July 29, 1999.
Performance Management: Aligning Employee Performance With Agency Goals at Six Results Act Pilots. GAO/GGD-98-162. Washington, D.C.: September 4, 1998.
Managing For Results: Experiences Abroad Suggest Insights for Federal Management Reforms. GAO/GGD-95-120. Washington, D.C.: May 2, 1995.
Overseas Presence: More Work Needed on Embassy Rightsizing. GAO-02-143. Washington, D.C.: November 27, 2001.
HUD Management: Progress Made on Management Reforms, but Challenges Remain. GAO-02-45. Washington, D.C.: October 31, 2001.
Human Capital: Attracting and Retaining a High-Quality Information Technology Workforce. GAO-02-113T. Washington, D.C.: October 4, 2001.
Single-Family Housing: Better Strategic Human Capital Management Needed at HUD's Homeownership Centers. GAO-01-590. Washington, D.C.: July 26, 2001.
Human Capital: Implementing an Effective Workforce Strategy Would Help EPA to Achieve Its Strategic Goals. GAO-01-812. Washington, D.C.: July 2001.
Human Capital: Taking Steps to Meet Current and Emerging Human Capital Challenges. GAO-01-965T. Washington, D.C.: July 17, 2001.
Federal Employee Retirements: Expected Increase Over the Next 5 Years Illustrates Need for Workforce Planning. GAO-01-509. Washington, D.C.: April 27, 2001.
IRS Telephone Assistance: Opportunities to Improve Human Capital Management. GAO-01-144. Washington, D.C.: January 30, 2001.
Office of Workers' Compensation Programs: Goals and Monitoring Are Needed to Further Improve Customer Communications. GAO-01-72T. Washington, D.C.: October 3, 2000.
Customer Service: Human Capital Management at Selected Public and Private Call Centers. GAO/GGD-00-161. Washington, D.C.: August 22, 2000.
Information Technology: Selected Agencies' Use of Commercial Off-the-Shelf Software for Human Resources Functions. GAO/AIMD-00-270. Washington, D.C.: July 31, 2000.
Small Business Administration: Steps Taken to Better Manage Its Human Capital, but More Needs to Be Done. GAO/T-GGD/AIMD-00-256. Washington, D.C.: July 20, 2000.
Human Capital: Design, Implementation, and Evaluation of Training at Selected Agencies. GAO/T-GGD-00-131. Washington, D.C.: May 18, 2000.
Veterans Benefits Administration: Problems and Challenges Facing Disability Claims Processing. GAO/T-HEHS/AIMD-00-146. Washington, D.C.: May 18, 2000.
Tax Administration: IRS' Implementation of the Restructuring Act's Personnel Flexibility Provisions. GAO/GGD-00-81. Washington, D.C.: April 28, 2000.
Human Capital: Strategic Approach Should Guide DOD Civilian Workforce Management. GAO/T-GGD/NSIAD-00-120. Washington, D.C.: March 9, 2000.
SSA Customer Service: Broad Service Delivery Plan Needed to Address Future Challenges. GAO/T-HEHS/AIMD-00-75. Washington, D.C.: February 10, 2000.
Managing For Results: Challenges Agencies Face in Producing Credible Performance Information. GAO/GGD-00-52. Washington, D.C.: February 4, 2000.
Equal Employment Opportunity: The Postal Service Needs to Better Ensure the Quality of EEO Complaint Data. GAO/GGD-99-167. Washington, D.C.: September 28, 1999.
Federal Workforce: Payroll and Human Capital Changes During Downsizing. GAO/GGD-99-57. Washington, D.C.: August 13, 1999.
Management Reform: Using the Results Act and Quality Management to Improve Federal Performance. GAO/T-GGD-99-151. Washington, D.C.: July 29, 1999.
IRS Customer Service: Management Strategy Shows Promise But Could Be Improved. GAO/GGD-99-88. Washington, D.C.: April 30, 1999.
Managing For Results: Experiences Abroad Suggest Insights for Federal Management Reforms. GAO/GGD-95-120. Washington, D.C.: May 2, 1995.
HUD Management: Progress Made on Management Reforms, but Challenges Remain. GAO-02-45. Washington, D.C.: October 31, 2001.
Human Capital: Attracting and Retaining a High-Quality Information Technology Workforce. GAO-02-113T. Washington, D.C.: October 4, 2001.
Securities and Exchange Commission: Human Capital Challenges Require Management Attention. GAO-01-947. Washington, D.C.: September 17, 2001.
Single-Family Housing: Better Strategic Human Capital Management Needed at HUD's Homeownership Centers. GAO-01-590. Washington, D.C.: July 26, 2001.
Federal Employee Retirements: Expected Increase Over the Next 5 Years Illustrates Need for Workforce Planning. GAO-01-509. Washington, D.C.: April 27, 2001.
Managing For Results: Human Capital Management Discussions in Fiscal Year 2001 Performance Plans. GAO-01-236. Washington, D.C.: April 24, 2001.
Customer Service: Human Capital Management at Selected Public and Private Call Centers. GAO/GGD-00-161. Washington, D.C.: August 22, 2000.
Small Business Administration: Steps Taken to Better Manage Its Human Capital, but More Needs to Be Done. GAO/T-GGD/AIMD-00-256. Washington, D.C.: July 20, 2000.
Human Capital: Design, Implementation, and Evaluation of Training at Selected Agencies. GAO/T-GGD-00-131. Washington, D.C.: May 18, 2000.
Senior Executive Service: Retirement Trends Underscore the Importance of Succession Planning. GAO/GGD-00-113BR. Washington, D.C.: May 12, 2000.
Medicare: 21st Century Challenges Prompt Fresh Thinking About Program's Administrative Structure. GAO/T-HEHS-00-108. Washington, D.C.: May 4, 2000.
Human Capital: Using Incentives to Motivate and Reward High Performance. GAO/T-GGD-00-118. Washington, D.C.: May 2, 2000.
Tax Administration: IRS' Implementation of the Restructuring Act's Personnel Flexibility Provisions. GAO/GGD-00-81. Washington, D.C.: April 28, 2000.
SSA Customer Service: Broad Service Delivery Plan Needed to Address Future Challenges. GAO/T-HEHS/AIMD-00-75. Washington, D.C.: February 10, 2000.
Managing For Results: Experiences Abroad Suggest Insights for Federal Management Reforms. GAO/GGD-95-120. Washington, D.C.: May 2, 1995.
Human Capital: Attracting and Retaining a High-Quality Information Technology Workforce. GAO-02-113T. Washington, D.C.: October 4, 2001.
Securities and Exchange Commission: Human Capital Challenges Require Management Attention. GAO-01-947. Washington, D.C.: September 17, 2001.
Customer Service: Human Capital Management at Selected Public and Private Call Centers. GAO/GGD-00-161. Washington, D.C.: August 22, 2000.
Human Capital: Using Incentives to Motivate and Reward High Performance. GAO/T-GGD-00-118. Washington, D.C.: May 2, 2000.
Tax Administration: IRS' Implementation of the Restructuring Act's Personnel Flexibility Provisions. GAO/GGD-00-81. Washington, D.C.: April 28, 2000.
Human Capital: Strategic Approach Should Guide DOD Civilian Workforce Management. GAO/T-GGD/NSIAD-00-120. Washington, D.C.: March 9, 2000.
Human Capital: Practices That Empowered and Involved Employees. GAO-01-1070. Washington, D.C.: September 14, 2001.
Human Capital: The Role of Ombudsmen in Dispute Resolution. GAO-01-466. Washington, D.C.: April 2001.
Senior Executive Service: Diversity Increased in the Past Decade. GAO-01-377. Washington, D.C.: March 16, 2001.
Customer Service: Human Capital Management at Selected Public and Private Call Centers. GAO/GGD-00-161. Washington, D.C.: August 22, 2000.
Senior Executive Service: Retirement Trends Underscore the Importance of Succession Planning. GAO/GGD-00-113BR. Washington, D.C.: May 12, 2000.
Human Capital: Strategic Approach Should Guide DOD Civilian Workforce Management. GAO/T-GGD/NSIAD-00-120. Washington, D.C.: March 9, 2000.
Equal Employment Opportunity: The Postal Service Needs to Better Ensure the Quality of EEO Complaint Data. GAO/GGD-99-167. Washington, D.C.: September 28, 1999.
Alternative Dispute Resolution: Employers' Experiences With ADR in the Workplace. GAO/GGD-97-157. Washington, D.C.: August 12, 1997.
HUD Management: Progress Made on Management Reforms, but Challenges Remain. GAO-02-45. Washington, D.C.: October 31, 2001.
Single-Family Housing: Better Strategic Human Capital Management Needed at HUD's Homeownership Centers. GAO-01-590. Washington, D.C.: July 26, 2001.
IRS Modernization: Continued Improvement in Management Capability Needed to Support Long-Term Transformation. GAO-01-700T. Washington, D.C.: May 8, 2001.
IRS Modernization: IRS Should Enhance Its Performance Management System. GAO-01-234. Washington, D.C.: February 23, 2001.
Managing For Results: Emerging Benefits From Selected Agencies' Use of Performance Agreements. GAO-01-115. Washington, D.C.: October 30, 2000.
Customer Service: Human Capital Management at Selected Public and Private Call Centers. GAO/GGD-00-161. Washington, D.C.: August 22, 2000.
Medicare: 21st Century Challenges Prompt Fresh Thinking About Program's Administrative Structure. GAO/T-HEHS-00-108. Washington, D.C.: May 4, 2000.
Tax Administration: IRS' Implementation of the Restructuring Act's Personnel Flexibility Provisions. GAO/GGD-00-81. Washington, D.C.: April 28, 2000.
Human Capital: Strategic Approach Should Guide DOD Civilian Workforce Management. GAO/T-GGD/NSIAD-00-120. Washington, D.C.: March 9, 2000.
Managing For Results: Challenges Agencies Face in Producing Credible Performance Information. GAO/GGD-00-52. Washington, D.C.: February 4, 2000.
IRS Customer Service: Management Strategy Shows Promise But Could Be Improved. GAO/GGD-99-88. Washington, D.C.: April 30, 1999.
Performance Management: Aligning Employee Performance With Agency Goals at Six Results Act Pilots. GAO/GGD-98-162. Washington, D.C.: September 4, 1998.
Managing For Results: Experiences Abroad Suggest Insights for Federal Management Reforms. GAO/GGD-95-120. Washington, D.C.: May 2, 1995.
MSDs as a workplace concern have received increased attention over the last several years. While there is some debate about what injuries should be considered MSDs, data from the Bureau of Labor Statistics (BLS) show that, in 1995, there were 308,000 cases of illness due to repeated trauma, accounting for over 60 percent of all work-related recorded illnesses and capping a decade-long increase in illness due to repeated trauma. The 1995 total was, however, a slight decrease from 1994 and represented a small percentage of the total number of recordable injuries and illnesses. In 1997, the National Institute for Occupational Safety and Health (NIOSH), a federal agency that conducts independent research on workplace safety and health issues, reported that, for all cases involving days away from work in 1994, about 700,000 (or 32 percent) were the result of repetitive motion or overexertion. It also reported that MSDs accounted for 14 percent of physician visits and 19 percent of hospital stays. To protect employees from workplace hazards, OSHA issues workplace standards and enforces the provisions of those standards through citations issued as a result of on-site inspections of employers. OSHA can also provide information and technical assistance or work with employers and employees in a cooperative manner that rewards compliance instead of penalizing noncompliance. Because currently no standard exists specifically for MSDs, federal and state-operated OSHA programs have generally relied on what is referred to as the “general duty clause” of the Occupational Safety and Health Act, or its state equivalent, to cite employers for ergonomic hazards. This clause requires employers to furnish employees with employment and a place of work “free from recognized hazards that are causing or are likely to cause death or serious physical harm.” To justify using this authority, OSHA must prove that the hazard is likely to cause serious harm, that the industry recognizes the hazard, and that it is feasible to eliminate or materially reduce the hazard—conditions that require major OSHA resources to demonstrate. Over the last several years, OSHA has tried to develop a standard specifically for MSDs to carry out its mandate to protect workers and improve worker health. In 1992, OSHA announced in the Federal Register its intent to develop a standard for MSDs. Before formally proposing a standard, in March 1995, OSHA circulated a draft of a standard to selected stakeholders to obtain their comments. The standard was subsequently distributed widely and has come to be known as the “draft standard.” This draft standard required employers to identify problem jobs on two bases: where there had been one or more recorded MSDs (for example, on the OSHA 200 log or as a workers’ compensation claim) and where an employee had daily exposure during the work shift to any “signal risk factor.” Employers would have to “score” these jobs using a checklist provided in the draft standard, or an alternative checklist if the employer could demonstrate that it was as effective, to determine the severity of the problem. If a job received more than five points, the employer would have to conduct a job improvement process to address the hazards on that job. This process involved a detailed job analysis (identification and description of each risk factor) and the selection, implementation, and evaluation of controls.
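To make the draft standard's screening and scoring requirement easier to follow, the sketch below expresses that two-step logic as a simple routine. It is purely illustrative: the checklist items and point values shown are hypothetical stand-ins (the draft standard's actual checklist is not reproduced in this report), and the code does not represent any OSHA or employer system.

```python
# Illustrative sketch of the draft standard's screening logic:
# (1) flag jobs with a recorded MSD or daily exposure to a "signal risk factor";
# (2) score flagged jobs with a checklist; more than five points triggers a job
#     improvement process (detailed job analysis plus selection of controls).
# All checklist items and point values below are hypothetical examples.

from dataclasses import dataclass, field

@dataclass
class Job:
    name: str
    recorded_msds: int                 # e.g., entries on the OSHA 200 log
    daily_signal_risk_exposure: bool   # daily exposure to any signal risk factor
    checklist_points: dict = field(default_factory=dict)  # item -> points

def is_problem_candidate(job: Job) -> bool:
    """A job is screened if it has a recorded MSD or daily signal-risk exposure."""
    return job.recorded_msds > 0 or job.daily_signal_risk_exposure

def checklist_score(job: Job) -> int:
    """Sum the points assigned on the (hypothetical) checklist."""
    return sum(job.checklist_points.values())

def needs_job_improvement(job: Job, threshold: int = 5) -> bool:
    """More than five points means the employer must analyze and control the job."""
    return is_problem_candidate(job) and checklist_score(job) > threshold

# Example: a hypothetical job with one recorded MSD and three scored risk factors.
reeling = Job(
    name="re-reeling",
    recorded_msds=1,
    daily_signal_risk_exposure=True,
    checklist_points={"repetitive motion": 3, "awkward posture": 2, "vibration": 2},
)
print(needs_job_improvement(reeling))  # True -> job improvement process required
```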
Some employers opposed this requirement, stating that the net effect of this approach would result in considering virtually every job a problem job and necessitating considerable resources from employers to analyze and develop controls for each problem job. Others said that because MSDs are cumulative or chronic in nature, they may take a long time to develop and may have many contributing factors. Because of this, some employers questioned whether OSHA could demonstrate that provisions in the standard would be able to address the hazards that cause MSDs. OSHA has now said its 1995 draft standard is no longer under consideration, and it has renewed efforts to determine the best approach to protect workers from ergonomic hazards. OSHA is currently undertaking a “four-pronged approach,” which involves (1) education, outreach, and technical assistance to employers; (2) research on the effectiveness of ergonomic improvements that employers have implemented; (3) enforcement efforts targeted toward high-hazard employers, issuing citations when warranted under the general duty clause; and (4) continued work on a standard that will take findings from these efforts into account. The California state-operated program also spent several years developing a standard, which program officials said was initiated in response to a legislative mandate. The two-page standard, which went into effect in July 1997, covers only those employers with 10 or more employees, thus excluding a significant number of California’s employers. The standard is triggered only when an injury has been reported. While the standard requires employers to implement particular elements of an ergonomics program, such as worksite evaluation, development of controls, and training, the standard does not require a medical management program, nor are there many requirements as to specifically how these elements should be implemented. An employer who makes an effort to comply will not be cited for being out of compliance unless it can be shown that a control known to, but not taken by, the employer is substantially certain to have caused a greater reduction in these injuries and that this alternative control would not have imposed additional unreasonable costs. Some labor organizations believed this standard fails to provide adequate protection to employees and were skeptical that it would be effective in reducing MSDs. Additionally, even though the standard had been revised significantly to reduce employers’ responsibilities in response to employer concerns, some employer groups still question the merit of a standard for MSDs. As a result, both labor and employer groups are challenging the standard. Experts, available literature, and officials at our case study facilities generally agreed that, to be effective, an ergonomics program should include a core set of elements or provisions to ensure management commitment, employee involvement, identification of problem jobs, development of controls for problem jobs, training and education for employees, and appropriate medical management. These core elements are said to be typical of any comprehensive safety and health program and, together, they can help an employer ensure that ergonomic hazards are identified and controlled and that employees are protected. Research provides a wide spectrum of options for how these elements can be implemented, requiring varying levels of effort on the part of employers and employees. 
In addition, federal and state-operated OSHA programs have undertaken a number of enforcement and education efforts to encourage employers to adopt the core elements of an ergonomics program. Occupational safety and health literature stresses that management commitment is key to the success of any safety and health effort. Management commitment demonstrates the employer’s belief that ergonomic efforts are essential to a safe and healthy work environment for all employees. Specific ways in which management commitment can be demonstrated include assigning staff specifically to the ergonomics program and providing time during the workday for these staff to deal with ergonomic concerns; establishing goals for the ergonomics program and evaluating results; communicating to all staff the program’s importance, perhaps through policy statements, written programs, or both; and making resources available for the ergonomics program itself, such as by implementing ergonomic improvements or providing training to all employees or to staff assigned to the ergonomics program. Involving employees in efforts to improve workplace conditions provides a number of benefits, including enhancing employee motivation and job satisfaction, improving problem-solving capabilities, and increasing the likelihood that employees will accept changes in the job or work methods. Some of the ways in which employee involvement can be demonstrated include creating committees or teams to receive information on ergonomic problem areas, analyze the problems, and make recommendations for corrective action; establishing a procedure to encourage prompt and accurate reporting of signs and symptoms of MSDs by employees so that these symptoms can be evaluated and, if warranted, treated; undertaking campaigns to solicit employee reports of potential problems and suggestions for improving job operations or conditions; and administering periodic surveys to obtain employee reactions to workplace conditions so that employees may point out or confirm problems. A necessary component of any ergonomics program is the gathering of information to determine the scope and characteristics of the hazard that is contributing to the MSD. Especially in this element, research has highlighted a wide variety of ways employers can identify problem jobs or job tasks. For example, a relatively straightforward way to identify problem jobs is for employers to focus on those jobs where there is already evidence that the job is a problem, because MSDs have already occurred or symptoms have been reported. For this approach, employers could use the following methods to identify problem jobs: following up on employee reports of MSDs, symptoms, discomfort, physical fatigue, or stress; reviewing the OSHA 200 logs and other existing records, such as workers’ compensation claims; and conducting interviews or symptom surveys or administering periodic medical examinations. A more complex approach to identifying problem jobs before there is evidence of an injury entails employers’ looking for workplace conditions that may contribute to MSDs. This more complex method could include screening and evaluating jobs for particular workplace conditions that may contribute to MSDs, such as awkward postures, forceful exertions, repetitive motions, and vibration. Screening and evaluation could be achieved through walk-through observational surveys, interviews with employees and supervisors, or the use of checklists for scoring risk factors.
Experts and recent literature also recognize that employers may have to prioritize which jobs or job tasks will receive immediate attention. It is generally agreed that jobs in which MSDs are being reported should be given top priority. Factors to consider in prioritizing problem jobs might be whether past records have noted a high incidence or severity of MSDs, which jobs have a large number of affected employees, or whether changes in work methods for that job will be taking place anyway. The first step in eliminating the hazard is to analyze the job or job task to identify the ergonomic hazards present in the job. Once ergonomic hazards have been identified, the next step is to develop controls to eliminate or reduce these hazards. Research offers a hierarchy of controls that can be put in place. Analyzing the job or evaluating an employee’s workstation to identify the ergonomic hazards present in the job can involve a variety of activities, including observing workers performing the tasks, interviewing workers, or measuring work surface heights or reach distances; videotaping a job, taking still photos, measuring tools, or making biomechanical calculations (for example, of how much muscle force is required to accomplish a task) in order to break jobs down into component tasks and identify risk factors present; and administering special questionnaires. Efforts to develop appropriate controls can include “brainstorming” by employees performing the job in question or by team members performing the analysis; consulting with vendors, trade associations, insurance companies, suppliers, public health organizations, NIOSH, labor organizations, or consultants; and following up to evaluate the effectiveness of controls. The hierarchy of controls is as follows: Engineering controls are generally preferred because they reduce or eliminate employees’ exposure to potentially hazardous conditions. They include changing the workstation layout or tool design to better accommodate employees (for example, adopting better grips for knives to reduce wrist-bending postures) or changing the way materials, parts, and products are transported to reduce hazards (such as using mechanical assist devices to lift heavy loads). Administrative controls refer to work practices and policies to reduce or prevent employee exposure to hazards, such as scheduling rest breaks, rotating workers through jobs that are physically tiring, training workers to recognize ergonomic hazards, and providing instruction in work practices that can ease the task demands or burden. Identifying and controlling MSDs requires some level of knowledge of ergonomics and skills in remedying ergonomic hazards. Recognizing and filling different training needs is an important step in building an effective program. The different types of training that a facility might offer include overall ergonomics awareness training for employees so they can recognize general risk factors, learn the procedures for reporting MSDs or symptoms, and become familiar with the process the facility is using to identify and control problem jobs, as well as targeted training for specific groups of employees because of the jobs they hold, the risks they face, or their roles in the program, such as for line supervisors and managers to recognize early signs and symptoms of MSDs; for engineers to prevent and correct ergonomic hazards through equipment design, purchase, or maintenance; or for members of an ergonomics team to perform job analysis and develop controls.
An employer’s medical management program is an important part of its overall effort to reduce MSDs, even though this program may exist regardless of whether the employer has implemented an ergonomics program. A medical management program emphasizes the prevention of impairment and disability through early detection of injuries, prompt treatment, and timely recovery for the employee. Different ways facilities can carry out medical management include encouraging early reporting of symptoms of MSDs and ensuring that employees do not fear reprisal or discrimination on the basis of such reporting; ensuring prompt evaluation of MSD reports by health care providers; making health care providers familiar with jobs, perhaps through periodic facility walk-throughs or review of job analysis reports, detailed job descriptions, or videotapes of problem jobs; and giving employees with diagnosed MSDs restricted or transitional duty assignments (often referred to as “light” duty) until effective controls are installed on the problem job, and conducting follow-up or monitoring to ensure that they continue to be protected from exposure to ergonomic hazards. Federal and state-operated OSHA programs have undertaken a number of enforcement and education efforts to encourage employers to adopt the core elements of an ergonomics program. For example, as a result of inspections under the general duty clause, OSHA has entered into a number of corporate settlement agreements, primarily with automobile manufacturing and food processing employers, that allow these employers to take actions to implement these core elements in an effort to reduce the identified hazards according to an agreed-upon timetable. OSHA monitors the employers’ progress under the agreement and will not cite them as long as the terms of the agreement are upheld. In 1996, OSHA introduced a nursing home initiative, under which it targeted nursing homes in seven states for inspection to look for evidence of safety and health programs as evidenced by these core elements. Before launching the enforcement part of the effort, OSHA sponsored safety and health seminars for the nursing home industry to help employers implement safety and health programs. The North Carolina state-operated program makes extensive use of settlement agreements for employers that have been found during investigations to have ergonomic hazards. Under what it calls the Cooperative Assessment Program (CAP) for Ergonomics, employers are not cited for ergonomic hazards if they enter into and make a good faith effort to comply with these agreements, under which they must take actions to implement the core elements of a safety and health program. To help these and other employers learn how to develop programs, the state recently established an ergonomics resources center that provides a variety of ergonomic services to employers. The California state-operated program creates joint agreements and “special orders” for individual employers when ergonomic hazards are identified during an inspection. These agreements and orders require employers to take corrective action to reduce the identified hazards according to a particular timetable; if the employers take the corrective actions specified, no penalties are assessed. Instead of using the general duty clause, some states have used existing regulatory authorities that require employers to establish worksite safety and health programs, workplace safety committees, or both to encourage employers to address MSDs. 
These safety and health programs must have particular elements, such as the identification of problem jobs and training, and in some cases, the committees themselves are responsible for undertaking particular activities. For example, in Oregon, workplace committees are required to conduct particular activities as they relate to identification of ergonomic hazards. Through Cooperative Compliance Programs, federal and state-operated OSHA programs have targeted certain employers because of their high rates of injuries or high numbers of workers’ compensation claims and offered them a chance to work with OSHA to reduce hazards in exchange for not being inspected. If employers agree, they must implement a program containing these elements to reduce hazards and injuries. For example, in the Maine 200 program, about 200 Maine employers were invited to develop a comprehensive safety and health program to reduce the injuries and hazards identified by OSHA. Employers “graduate” from this program once they demonstrate that they have successfully implemented the core elements of a safety and health program, not necessarily because they have achieved a particular reduction in injuries or hazards. Also, OSHA’s Voluntary Protection Program allows employers to be excluded from programmed inspections if they can demonstrate they have an exemplary safety and health program consisting of these core elements. Federal and state-operated OSHA programs and other organizations also educate employers about how to reduce MSDs and other safety and health hazards through consultation and technical assistance. The services are typically coordinated by federal or state-operated programs but are actually delivered by state government agencies, universities, or professional consultants. Consultation programs allow employers to contact OSHA or its designee to identify and address safety and health problems outside the enforcement arena. If employers address the hazards identified by these consultants, they can be exempt from inspections for up to 1 year. The consultation and technical assistance services provide information on how to develop effective safety and health programs. A key document used in the provision of these services is OSHA’s Safety and Health Program Management Guidelines, which provides information on how to implement a safety and health program (although it does not include a medical management component). Additionally, because of high rates of MSDs in the meatpacking industry, in 1990 OSHA published the Ergonomics Program Management Guidelines for Meatpacking Plants, a voluntary set of guidelines on how to implement the core elements of an ergonomics program in that industry. Each of the facilities we visited displayed all of the core elements of an effective ergonomics program, but the facilities implemented them in a variety of ways that reflected their unique characteristics, such as their different industries and product lines, corporate cultures, and experiences during program evolution. For example, although each facility demonstrated management commitment by assigning staff to be specifically responsible for the program, some facilities used ergonomists to lead the program, while others used standing teams of employees. For two of the elements—identification of problem jobs and development of controls—the facilities displayed a lower level of effort than many of the options identified in the literature would entail. 
To illustrate, the facilities primarily identified jobs on an “incidence basis,” that is, on the basis of reports of injury, employee discomfort, or other employee requests for assistance, and did not typically screen jobs for ergonomic hazards. The facilities also used an informal process to analyze jobs and develop controls, often relying on in-house resources, and did not typically conduct complex job analyses. Finally, facilities typically implemented what they called “low-tech” controls, those solutions that did not require significant investment or resources, as opposed to more complex controls that drastically changed jobs or operations. Following are selected examples of facility experiences for each of the elements; for more information on how all of the facilities demonstrated these elements, see appendixes III through VII. All of the facilities’ programs had evolved over time—often over many years—and a number of factors were key to facilities’ decisions to take actions to reduce MSDs. Primary among them was an interest in reducing the workers’ compensation costs associated with MSDs. Additionally, the variation in implementation was often explained by industry type, product lines or production processes, corporate cultures, or experiences during program evolution. For example, most of the employees at the headquarters of American Express Financial Advisors, a financial services employer, are engaged in similar operations that require significant use of computers, so they face similar hazards associated with computer use. Because of this similarity, the cornerstones of the ergonomics program are training for all employees on how to protect themselves from these hazards and the development of furniture and equipment standards, which is accomplished by involving such departments as real estate and facilities. Facility product lines, production processes, and other individual facility characteristics also affected implementation of the elements. For example, the Navistar facility’s layout has constrained the implementation of some controls. Additionally, Navistar offers customized truck assembly, which often contributes to frequent production and schedule changes. This makes it difficult to ensure that controls are effective in the long run. Finally, because few new employees have been hired in recent years, the facility now has an older workforce that could be more vulnerable to these types of injuries. Corporate culture may also influence program development. Both AMP’s and Texas Instruments’ corporate cultures emphasize decentralized operations whereby individual facilities are given considerable flexibility to reach production goals. Local employee teams are key to their operations because they allow for this type of decentralized approach. As a result, the facilities rely extensively on employee teams to implement their ergonomics programs. Texas Instruments has a number of teams throughout its management structure that address some aspect of ergonomics. Additionally, performance targets drive all corporate and facility activities at Texas Instruments, so these kinds of targets have also been established for the facility’s ergonomics program. Experiences during program evolution also have influenced the ultimate shape of the program.
At the Texas Instruments facility, where the ergonomics program has been in place the longest (since 1992), the facility is beginning to identify problem jobs on a more proactive basis given that many problem jobs identified on an incidence basis have already been addressed. The Sisters of Charity facility, which initiated its program in 1994 at the invitation of OSHA to participate in the Maine 200 program, is still principally working to control problem jobs as a result of employee requests. In addition, because this facility was selected for the Maine 200 program on the basis of its injuries of all types, it set up a safety and health program that addresses MSDs as well as other injuries and illnesses. All of the facilities had assigned staff to be specifically responsible for the program and had provided them the resources, time, and authority to operate the program on a daily basis. Some of the other indicators of management commitment were incorporating ergonomic principles into corporationwide accountability mechanisms, such as strategic goals or safety audits, and integrating ergonomic principles into equipment purchase and design. Although some of the facilities had a written program, officials did not view these as key to program operations and said that management commitment was best illustrated in more tangible ways, such as assigning staff to ergonomics programs or incorporating ergonomics into accountability measures. The examples below highlight some of the variety in the ways management commitment was demonstrated and generally reflect the range of activities that appears in the literature. The American Express Financial Advisors facility has an ergonomist who leads the program, an ergonomics specialist who performs the workstation evaluations and develops controls, and a half-time administrative assistant who tracks information about what types of training and ergonomics services each employee has been provided. The AMP facility uses an ergonomics value-added manufacturing (VAM) team of line employees who are responsible for identifying problem jobs and developing controls. The Texas Instruments facility has both an ergonomics team and an ergonomics specialist who works under the direction of the team. The Texas Instruments facility works toward a corporationwide strategic goal of eliminating all preventable occupational and nonoccupational injuries and illnesses by the year 2005, a goal toward which ergonomic activities at all facilities are expected to contribute. At the Navistar facility, the 5-year strategic plan sets targets for the number of processes to be redesigned ergonomically, the percentage of technical support staff to receive ergonomic training, and the reduction in lost workdays and associated workers’ compensation costs. At the Sisters of Charity facility, the on-site occupational health clinic must approve any new construction to ensure that new work areas are designed with ergonomic considerations. At the American Express Financial Advisors facility, the ergonomist works with several departments involved with procurement to establish standards for purchasing furniture and equipment that are ergonomic. At the AMP and Texas Instruments facilities, most of the suggestions for controlling problem jobs submitted by the ergonomics teams are approved at the facility level. The American Express Financial Advisors facility provides weekly 1-1/2-hour training sessions that are open to all employees. 
Sisters of Charity spent about $60,000 to purchase 14 automatic lifts to reduce ergonomic hazards associated with moving residents at the nursing home. The Texas Instruments facility’s Site Safety Quality Improvement Team (QIT), which is composed of program managers, provides overall focus and strategy to the ergonomics team and approves most capital investments to improve ergonomic conditions. Twice in 1996, the facility sponsored “Ergonomic Management Seminars” for middle managers to demonstrate how ergonomically related losses affected the bottom line by discussing the costs of these injuries and their impact on productivity. Employee involvement at these facilities was often demonstrated through the use of employee teams or committees charged with identifying problem jobs and developing controls for them. In addition, employees had direct access to services; for example, some facilities had procedures that ensured a job analysis was done upon employee request. The examples below highlight some of the variety of ways that these facilities fostered employee involvement and generally reflect the range of activities that appears in the literature. The AMP facility’s ergonomics VAM team consists of about 12 employees from different departments who meet biweekly during work hours. This team, led by an industrial engineer, is responsible for identifying and prioritizing problem jobs as well as for developing controls for the jobs. Both the team leader and secretary of the team are elected by the team members. Individual team members play leadership roles in “championing” various projects. At the Navistar facility, the ergonomist and local union representative form the nucleus of the ergonomics committee, with other employees involved on an ad hoc basis to provide information and feedback for the particular problem job being addressed. At the Navistar facility, any employee can request a job analysis by filling out a one-page “Request for Ergonomic Study” form and passing it along to the ergonomist or the union representative. At the American Express Financial Advisors facility, employees can request a workstation evaluation through a phone call, by E-mail, or even by scheduling an evaluation themselves on the ergonomics specialist’s electronic calendar. American Express Financial Advisors’ discomfort surveys help the ergonomics staff identify areas of concern for employees as well as the type of discomfort employees are feeling in various body parts. The Texas Instruments facility sponsors “wing-by-wing” measurement campaigns in which the team proceeds through the facility “wing by wing” to measure employees and adjust the workstations of those who may be experiencing problems but who have not requested services. All of the facilities in our review identified most of their problem jobs on an “incidence basis,” that is, from reports of MSDs or employee discomfort or as the result of an employee request for assistance. The procedures instituted for identifying problem jobs in this way were typically quite simple, with little paperwork involved. In most cases, only after problem jobs identified on an incidence basis were dealt with did officials at these facilities report they used more “proactive” methods to identify problem jobs where injuries might occur in the future. While the facilities used a variety of proactive methods for identifying problem jobs, they did not typically screen jobs for risk factors. 
Therefore, we characterize the facilities’ efforts to identify problem jobs as a lower level of effort than is reflected in the literature. The examples below highlight some of the ways facilities carried out this lower level of effort. All facilities had a system in place whereby any report of an MSD automatically triggered a job analysis. At the Sisters of Charity facility, the employee and supervisor must each complete a “Report of Employee Incident” form within 24 hours after an MSD is reported. This form is sent to staff at the on-site occupational health facility who conduct a physical examination of the employee, if necessary, and an evaluation of the employee’s workstation. A job analysis was also generally triggered whenever an employee reported discomfort or requested assistance. At the AMP facility, employees are encouraged to bring up any discomfort they are feeling with members of the ergonomics team. The Texas Instruments facility identified problem jobs on the basis of the high numbers of injuries and illnesses recorded in its workers’ compensation database. Because the Texas Instruments facility had already addressed many of the hazards at its manufacturing workstations, it launched an administrative workstation adjustment campaign in recognition of its need to shift its focus to identify potential hazards at administrative workstations. The Navistar facility has begun to identify problem jobs as those with high employee turnover and those staffed by employees with low seniority. The AMP facility uses an Ergonomic Prototype Work Center to set up alternative types of workstations in order to determine the best types of tools to use and the most efficient workstation layouts to avoid future injuries. All of the facilities in our review used a simple, fairly informal procedure to analyze problem jobs, as compared with some of the more complex options detailed in the literature. Often the facilities’ efforts focused only on the particular job element that was thought to be the problem (for example, drilling or lifting). Facilities also said the process for developing controls was informal, relying heavily on brainstorming and the use of in-house engineering and medical resources. In some cases, facilities did conduct a detailed job analysis when the problem job was particularly complex, hazardous, or labor intensive. Also, while typically able to develop controls using in-house resources, the facilities on occasion used consultants and other external resources to develop controls for problem jobs. The process used to develop controls was typically iterative, in that the ergonomics staff at these facilities continually reviewed the job in question to ensure that the control was working. In some cases, eliminating the hazard would have been difficult without significant capital investment in a soon-to-be-phased-out product or without disruption to the production process. In other instances, even when a control was identified, resource limitations sometimes extended the length of time it took to introduce the control. However, officials emphasized that they always tried to take some kind of action on all problem jobs. Facilities used a mix of the controls described in the literature in their attempts to eliminate or reduce ergonomic hazards for problem jobs, generally preferring “low-tech” engineering controls—those that did not require significant capital investments and did not drastically change the job’s requirements. 
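The incidence-based identification described above amounts to a simple trigger-and-prioritize workflow: any MSD report, discomfort report, or request for assistance leads to a job analysis, with reported injuries handled first. The sketch below is a hypothetical illustration of that logic under those assumptions, not a reconstruction of any facility's actual system; the report types and priority order are taken from the practices described in this section.

```python
# Hypothetical sketch of incidence-based identification: each incoming report
# triggers a job analysis, and jobs with reported MSDs are analyzed first.

from dataclasses import dataclass

PRIORITY = {"msd_report": 0, "discomfort": 1, "assistance_request": 2}

@dataclass
class IncidentReport:
    job_name: str
    kind: str  # "msd_report", "discomfort", or "assistance_request"

def queue_job_analyses(reports: list[IncidentReport]) -> list[str]:
    """Return job names to analyze, injuries first, one entry per job."""
    ordered = sorted(reports, key=lambda r: PRIORITY[r.kind])
    queue: list[str] = []
    for report in ordered:
        if report.job_name not in queue:  # avoid queuing the same job twice
            queue.append(report.job_name)
    return queue

# Example with three hypothetical reports.
reports = [
    IncidentReport("packing line", "assistance_request"),
    IncidentReport("re-reeling", "msd_report"),
    IncidentReport("data entry", "discomfort"),
]
print(queue_job_analyses(reports))
# ['re-reeling', 'data entry', 'packing line']
```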
The examples illustrate the processes used by these facilities to identify problem jobs and the types of controls used. Appendix II profiles particular problem jobs at these facilities and the controls that were implemented. The AMP facility uses a one-page “Ergonomic Evaluation Form” that is tailored to the specific job and asks simple “yes/no” questions about the employee’s ease and comfort when performing certain job tasks. After reviewing this form, a member of the ergonomics VAM team interviews the employee and observes the employee performing the job. The ergonomics specialists at the American Express Financial Advisors and Texas Instruments facilities take workstation and personal measurements (for example, height of work surface and height of chair when seated properly), in addition to making observations or collecting information from employees through interviews. For more complex or hazardous jobs, facilities may videotape or collect more detailed documentation. The AMP facility videotaped its re-reeling job and used an additional evaluation form, which is several pages long, that provides space to record detailed observations about the adequacy of the work space, environmental conditions, and hand tool use. A physical assessment survey capturing the frequency of discomfort by various body parts was also conducted because the re-reeling department historically had higher numbers of MSDs. The Texas Instruments facility videotaped its manual electronic assembly job because it had identified this as an “at-risk” job—that is, one with high numbers of recordable injuries and illnesses. (See app. II for more detailed information.) Officials at all of the facilities said brainstorming was key to developing controls. At the Navistar facility, for example, the ad hoc committee informally develops prospective solutions and looks at other operations within the facility with similar job elements to get ideas for controls. Facility officials at Texas Instruments also said that, in addition to their own employees and line supervisors, their production engineering department was also a resource for developing controls on more complex or technical jobs. In other instances, outside resources were important contributors to developing effective controls. For example, the AMP facility regularly works out arrangements for vendors or suppliers to provide tools and equipment at no cost to the facility so the facility can test the products before purchasing them. Through AMP’s Ergonomic Prototype Work Centers, which are set up within each work area, these tools are then evaluated by the employees themselves in alternative workstation layouts. The Texas Instruments facility has used a consultant to help develop controls for its at-risk jobs, including its manual electronic assembly job. Because recommendations for controls came from the consultant, the ergonomics team found it was easier to get management buy-in to make the necessary job changes. (See app. II.) Ergonomics staff assess how well a control is working and, if necessary, continue to address the problem job. At AMP, the ergonomics VAM team administers the same Ergonomic Evaluation Form that is administered when first analyzing the job after the controls are in place to determine whether or not they are working. At the Texas Instruments facility, an adjustable-height workstation design was tested on the production floor, and employee feedback revealed that it was unstable and allowed products to fall off. 
Using this feedback and working with a vendor, the ergonomics staff developed a new design. The result was an adjustable table, referred to as “Big Joe” (essentially a fork lift with the wheels removed), which proved to be much more stable. Because the Navistar facility is still not satisfied with controls introduced to address its “pin job,” which it described as its most onerous job, it is also taking an iterative approach. The pin job requires several employees to manually handle the heavy frame of a truck in order to attach it to the axle. Because of the significant force, “manhandling,” and vibration involved, the ergonomics staff has focused considerable effort on controlling this job. However, changing the product and the line is difficult to justify, given constraints associated with the facility’s design. In the meantime, facility officials have tried to reduce employees’ exposure using administrative controls and personal protective equipment and have recently formed a special committee of line employees to develop ideas for controls for this job. According to Navistar officials, this committee has been given 6 months, an “unlimited” budget, and the latitude to consider alternative design options for the production line. In some cases, facilities made efforts to ensure the long-term effectiveness of controls they had implemented. For example, both the Texas Instruments and American Express Financial Advisors facilities had developed databases that contained the results of workstation evaluations and employee preferences. At both of these facilities, employees are relocated frequently, so the information in the databases is used to ensure that, when an employee is relocated, his or her new workstation will be properly set up. The Navistar facility installed hoists to lift heavy fuel tanks and mechanical articulating arms to transport carburetors. It is gradually replacing “impact” guns—which are used to drill in bolts—with “nutrunner” guns, which expose employees to lower levels of vibration. American Express Financial Advisors has adjusted employee workstations (for example, repositioned monitors, designed corner work surfaces, and provided equipment to support forearm use) and introduced ergonomic chairs for employees’ use. (For more detail, see app. II.) Facilities also used administrative controls, particularly for problem jobs where they have been unable to eliminate the ergonomic hazards through engineering controls. For example, in the re-reeling job at the AMP facility, employees are rotated every 2 hours so they are not reeling the same product over long periods of time. The Texas Instruments facility also uses job rotation and other administrative controls, rather than major investments, to protect circuit board welders from ergonomic hazards, particularly when the product is soon to be discontinued. Some of the facilities also used personal protective equipment; for example, the Navistar facility has made extensive use of such equipment as padded gloves and elbow supports to provide protection and absorb vibration. Some of the facilities provided general awareness training to all employees, but this information was generally offered informally through written employee guidelines, posters, literature, and web sites. Most of the facilities emphasized training targeted to specific populations of employees. The examples below highlight some of the ways in which facilities provide training and education; these approaches were generally consistent with the literature.
Not every facility offered formal general awareness training to all employees. For those that did, such training was brief and sometimes offered infrequently. For example, at Sisters of Charity, ergonomics training in the form of body mechanics and instruction on the proper use of video display terminals was offered as part of the 4-1/2-hour basic safety training that each employee is required to take once a year. At the Texas Instruments facility, all employees are required to take 1 hour of general ergonomics awareness training every 3 years. Training is the cornerstone of the American Express Financial Advisors ergonomics program, where the ergonomics specialist teaches a 1-1/2-hour course every week targeted to the many computer-oriented jobs at this facility. Employees are generally required to take this training before their workstations will be adjusted. Personal measurements are taken during training, and participants are taught how to make their workstations fit their needs. The Texas Instruments facility offers a wide range of targeted training, with an emphasis on instruction of production teams within their own work areas in which team members actually work together to develop controls for problem jobs. Courses offered at the facility include “Ergonomics for Computer Users,” “Factory Ergonomics Awareness,” and “Advanced Ergonomics for Electronic Assemblers and Teams That Handle Materials.” The ergonomics programs at these facilities had strong links with the medical management staff in ways that were consistent with the literature. For example, a report of an MSD automatically triggered a job analysis; medical management staff were often members of the ergonomics teams; and medical management staff were also familiar with jobs at the facility, which helped them identify the hazards to which employees were exposed. The facilities also emphasized a return-to-work policy that gave employees with diagnosed MSDs the opportunity to work on restricted or transitional (sometimes referred to as light duty) assignments during their recovery period. Facilities also conducted follow-up during the time an employee was on restricted duty. Examples below highlight some of the ways these facilities demonstrated this element. The Navistar facility has an on-site occupational health clinic and medical management staff who are easily accessible to all employees and who can treat most injuries, including MSDs. The medical director can request a job analysis whenever an employee reports an injury or discomfort to the clinic. The medical director participates on Navistar’s ad hoc ergonomics committee to help develop controls for problem jobs and on the facility’s workers’ compensation causation committee, which looks for the root cause of selected workers’ compensation claims. The American Express Financial Advisors facility has established a relationship with several local health care providers who are familiar with MSDs and has encouraged these health care providers to visit the facility to understand the jobs its employees perform. These health care providers offer early treatment to avoid unnecessary surgery, an approach sometimes called conservative treatment. At Texas Instruments, the disability coordinator is responsible for developing a relationship with local health care providers and identifying doctors who are conservative in their treatment approach.
At the Texas Instruments facility, the lost time intervention manager monitors health conditions of out-of-work employees and coordinates with all other medical management staff to determine if the employee can return to work on a restricted basis. Typically, the employee can be accommodated within his or her home work area. Several things have been done to facilitate these placements, including developing a database of available jobs for workers on restriction and creating a special account that covers the payroll costs of employees on transitional duty so the costs are not charged to that home work area’s budget. If the limitations are permanent and prohibit the employee from performing essential job functions with reasonable accommodation, the employee is referred to the Texas Instruments placement center for job search and other placement assistance. Officials at all the facilities we visited believed their ergonomics programs brought benefits, including reductions in workers’ compensation costs associated with MSDs. These facilities could also show reductions in facilitywide overall injury and illness incidence rates, and in the number of days injured employees were away from work, although some facilities reported an increase in the number of days employees were on restricted job assignments. Facility officials also reported improved worker morale, productivity, and quality, although evidence of this was sometimes anecdotal. However, measuring program performance—assessing these outcomes in light of program efforts—was complicated by uncertainties associated with determining which injuries should be included as MSDs and with tracking changes in those injuries in light of complicating factors. For example, facilities did not track the total costs of their ergonomics programs so they could not assess whether benefits gained exceeded the investments made. As a result, these employers found it helpful to track the progress they were making in implementing the program. All five facilities experienced a reduction in total workers’ compensation costs for MSDs (see fig. 1). Reductions are not comparable across facilities, but officials at each of these facilities said they believed the facility’s ergonomics program had contributed toward these reductions. At the Texas Instruments facility, where the ergonomics program has been in place for the longest period of time, workers’ compensation costs for MSDs have dropped appreciably—from millions of dollars in 1991 to hundreds of thousands of dollars in 1996. The achievement of these reductions is significant, given that high MSD costs were a major impetus for initiating these programs and lowering these costs was often a major outcome goal. These reductions can be attributed to a strong medical management component in the ergonomics program. As the medical director of the Navistar facility explained, the key to a cost-effective ergonomics program is getting injured employees back to work as soon as appropriate, minimizing lost workdays. Officials at several of the facilities said one of their first activities when implementing this program was to assist employees in returning to work. As figure 2 shows, the facilities were able to reduce the number of days injured employees were away from work. Conversely, restricted work days increased at facilities owned by AMP and Sisters of Charity, which officials said reflected their success at bringing employees back to work. 
This reflects an important challenge to a return-to-work policy, however, because bringing employees back to work as soon as possible may require more restricted- or light-duty positions than are often available. For example, according to Navistar officials, light-duty positions for returning employees must be allocated according to the seniority provisions of the collective bargaining agreement, so if an injured employee does not have sufficient seniority, there may not be any light-duty jobs available. Alternatively, the jobs available to less senior employees, such as clean-up duty, are often not appealing to employees who desire productive work. Sisters of Charity officials said they do not have difficulty finding light-duty jobs for employees, but there have been cases in which employees’ restrictions were so severe that it was difficult for these employees to be productive. Medical management also includes encouraging employees to report symptoms of MSDs before they become serious injuries requiring more expensive treatment or surgery; as a result, reductions in the average cost per claim reflect early reporting and treatment. The Sisters of Charity facility was the only facility that had not yet experienced a decline in the average cost per claim (although this cost is well within the range of the average cost per MSD claim at other facilities). (See fig. 3.) These facilities could also show reductions in the number of injuries and illnesses for their facilities as a whole, according to their OSHA 200 log records (see fig. 4). Trends in overall injuries and illnesses from the OSHA 200 log are important because MSDs accounted for a large portion of all injuries and illnesses and because these data are part of the information OSHA compliance officers review in the early stages of an inspection to focus their inspection efforts. Facility officials also reported improved employee productivity, quality, and morale since they had implemented the programs, although evidence of these outcomes was primarily anecdotal. For example, some facility officials said employees are more likely now to exercise control over their jobs and to be more actively involved with line supervisors in how jobs are performed. Officials from Sisters of Charity believed that turnover and absenteeism had been reduced and they had been able to hire better employees as a result of their efforts, even though employees initially resisted some of the changes proposed, such as the use of automatic lifts to move residents. The American Express Financial Advisors facility reported reductions in discomfort experienced by employees. Officials at several of the facilities said that as the program evolves, goals need to change as well, from reducing workers’ compensation costs to increasing productivity and quality. For example, officials at the Texas Instruments facility stressed that they were moving toward using productivity and other quality measures as indicators of the program’s success, since they had already achieved large reductions in workers’ compensation costs. Facilities also provided evidence, often only anecdotal, of productivity or quality improvements associated with implementing ergonomic controls. Several facilities have found that ergonomic hazards often contribute to production bottlenecks or problems.
By minimizing employees’ stressful hand exertions during a windshield installation process, for example, the Navistar facility was also able to increase the quality of the installation, reducing a high rate of warranty claims (see app. II). Additionally, by identifying a newly automated way of extracting remnant metals when electronic connectors are stamped, the AMP facility not only eliminated awkward positions for employees but also reduced the volume of scrap waste and enhanced the quality of recycled metals made from these scrap metals. Facility officials said they faced a number of challenges in measuring the overall performance of their programs and tying outcomes to the efforts they were making in implementing their programs. Primary among these challenges was determining what injuries should be included as MSDs, and effectively tracking the changes in the number and severity of those injuries in light of what officials referred to as “confounding” factors that complicated their ability to interpret outcomes or changes that accompanied their program efforts. Although many of the officials from the facilities said a major influence for initiating the program was a concern about increased workers’ compensation costs due to MSDs, in the early stages of implementing the ergonomics programs some of the facilities reported uncertainties about what injuries and illnesses should be categorized as MSDs. American Express Financial Advisors officials said the lack of agreement about MSDs makes it difficult to know what to track when trying to isolate MSDs from other kinds of injuries and illnesses. Sisters of Charity officials said, in many cases, incident reports must be reviewed to identify whether the injury was caused by ergonomic hazards. Ergonomics staff at the facilities said the OSHA 200 log was not very useful to them for identifying MSDs because it does not allow various injuries that they believe are a result of ergonomic hazards to be recorded as such. For example, officials at several of the facilities said that back injuries, which are often a result of repetitive lifting, are not recorded in the OSHA 200 log in a way that they can be identified as MSDs. These employers used their respective corporate workers’ compensation databases to help them identify what types of injuries should be included as MSDs for the program, as well as to track reductions in these injuries and illnesses. Several of the facilities worked with their insurance company, or the administrator of their insurance policy, to help track these injuries and illnesses and related costs. However, because corporate workers’ compensation databases included different categories of injuries, and because facilities differed in the frequency and type of injuries experienced, facilities used different categories of injuries to track MSDs. For example, while all of the facilities included injuries or illnesses that resulted from obviously repetitive activity, some also included those that were the result of a one-time occurrence. Differences of opinion also existed in at least one facility between the ergonomist and corporate management as to what categories should be included to track MSDs. Using cost data, like workers’ compensation costs, to interpret outcomes is also problematic, because health care costs in general continue to rise and there is often a several-year lag between the time injuries occur and when a workers’ compensation claim is finally closed.
Such lags, if large, could make tracking program performance difficult. Facilities experienced other factors that made it difficult to interpret outcomes in light of program efforts, including limited data on program costs, the effects of growing employee awareness of MSDs, changes in staffing levels, and the effect of increasing workloads. For example, facilities did not track the total costs of the ergonomics programs, so they did not know whether the reductions in MSD costs and other outcomes exceeded program expenditures. Facility officials said it was also difficult to know whether these outcomes resulted solely from investments taken to reduce ergonomic hazards or from other productivity and quality investments as well. However, these officials said that many ergonomic investments were small, and at several facilities, a written justification was needed only when the cost of proposed controls was over a certain threshold. Despite their strong commitment to the program, AMP officials emphasized that the limited number of years of trend data makes it difficult to draw any conclusions at this time regarding the program’s impact. Facility officials also stated that increases in MSDs and claims, at least initially, could result from growing awareness of ergonomic hazards. At the Texas Instruments facility, ergonomics awareness training contributed to employees’ making more MSD claims in 1994 (see app. VII). MSDs and workers’ compensation claims can also be affected by changes in staffing levels, as new employees may be more likely to get hurt, and the threat of layoffs may encourage employees to report discomfort or injuries. Since 1988, American Express Financial Advisors has experienced significant increases in staffing levels and workloads, increases that officials said need to be considered when looking at its claim experience (see app. III). Other facility officials said claims tend to increase before a layoff, then decline again when employees are recalled to work. Workload pressures and other work organization factors can also affect program outcomes. Several facility officials said issues associated with stress, workload demands, or other intangible work factors are more difficult to address than are physical hazards. Perhaps because of these difficulties in tying outcomes to program efforts, facility officials found it useful to track the actions taken to implement the core elements of the program. Several of the facilities, for example, had a corporationwide audit, which included a section on ergonomics. These audits assessed items such as whether a team had been established, whether the facility was providing ergonomics training, and whether the facility was conducting analyses of problem jobs. For example, in response to last year’s safety audit, the Navistar facility decided to form an ergonomics committee of high-level management personnel to spread awareness of its ergonomics program and to obtain greater commitment from these managers. Some facilities used other measures to track program implementation. The Texas Instruments facility uses a “productivity matrix” to track progress on various projects or initiatives, including its workstation adjustment campaigns, which have helped identify ergonomic hazards before injuries occur.
Both the Texas Instruments and American Express Financial Advisors facilities' databases, which include employee workstation measurements and preferences, allow them to track the number of employees who have received workstation evaluations and whose workstations have been adjusted. Some facilities are also tracking the number of requests for assistance they receive from employees. These private sector experiences highlight that employers can achieve positive results through simple, informal, site-specific efforts, with a lower level of effort to identify and analyze problem jobs than that generally reflected in the safety and health literature or in OSHA's draft ergonomics standard. These experiences suggest that OSHA may need to provide flexibility to employers to customize their programs under a specified framework for a worksite ergonomics program and give them some discretion in deciding the appropriate level of effort necessary to effectively reduce identified hazards. Federal and state-operated OSHA programs' current efforts to reduce MSDs in the absence of a standard provide employers this kind of flexibility; however, questions exist about whether current efforts alone are sufficient to address MSDs. Finally, the information problems that complicated these facilities' efforts to identify their problem jobs, and then to measure their progress in addressing these hazards, suggest that OSHA's recent efforts to revise injury and illness data collection methods are a step in the right direction. All of the facilities in our review implemented the core elements of effective ergonomics programs. In other words, each facility's program included all of the elements highlighted by literature and experts as necessary for an effective program. However, the facilities often customized the elements to adapt to their own, often unique, site-specific conditions. We also found that the processes for identifying and developing controls for problem jobs, and often the controls themselves, were simple and informal, generally requiring a lower level of effort than that called for in the OSHA draft standard or described in the literature. Yet, in all cases, the facilities were able to reduce workers' compensation costs associated with MSDs and the number of days employees were away from work, as well as report improvements in product quality, employee morale, and productivity. This similarity in overall framework but variety in implementation suggests that there may be merit to an approach that requires programs to have these core elements but gives facilities some latitude to customize the elements as they believe appropriate, as well as some discretion to determine the appropriate level of effort necessary to effectively identify and control problem jobs. This approach may also mean that facilities would be able to identify problem jobs—at least initially—on an incidence basis (a report of an MSD or employee discomfort or a request for assistance) and move toward a more proactive identification as the program matures. Although this approach is viewed by some as inconsistent with accepted safety and health practices that emphasize prevention, our case study facilities found it to be a viable approach when starting their programs.
In the absence of a standard specifically for MSDs, federal and state-operated OSHA programs have limited authority to take action against employers for ergonomic hazards, which has resulted in a variety of strategies and approaches to foster employer awareness and action to protect employees from these hazards. These efforts include a number of new initiatives at the federal and state levels as well as some long-standing efforts to encourage employers to take action against ergonomic hazards. These initiatives appear to provide the kind of flexibility that is consistent with the experiences of our case study employers. Although these initiatives illustrate the potential value of a flexible approach, many are small in scope, are resource intensive, are still being developed, or depend largely on an employer's willingness to participate, so they may not offer a complete solution to protecting employees from MSDs, especially in light of the large numbers of employees who experience MSDs. Federal and state-operated OSHA programs have tried to provide information, technical assistance, and consultation in an effort to respond to employers' interest in these initiatives. The flexibility provided by OSHA under the Maine 200 cooperative compliance program was key to the success of the Sisters of Charity facility in reducing MSDs. Sisters of Charity was not given targets for reduction of injuries or hazards, but it was required to implement a comprehensive safety and health program. To help Sisters of Charity accomplish this, an OSHA compliance officer was specifically assigned to it (and to other employers in the health care industry as well) for the duration of its participation in the program. The compliance officer was responsible for becoming familiar with the facility to help identify and evaluate controls, perform on-site monitoring inspections to ensure Sisters of Charity was implementing the core elements of a safety and health program, and review quarterly progress reports Sisters of Charity provided to OSHA. The compliance officer monitored Sisters of Charity's progress against the provisions in the Safety and Health Program Management Guidelines, looking for continuous improvement and "scoring" the facility on how well it was implementing key elements of the program. Sisters of Charity graduated from the program in 1996 because it had, in the judgment of OSHA, made sufficient progress in establishing the elements of an effective program. Sisters of Charity officials said the value of this approach was not only the hands-on assistance provided by OSHA, but also the compliance officer's familiarity with the facility, which made it possible for OSHA to appropriately judge the efforts Sisters of Charity was making. OSHA is currently developing a safety and health program management standard based on the guidelines and on evidence that such worksite programs can reduce injuries and illnesses. OSHA's settlement agreements for MSDs have also provided some degree of flexibility, as they require employers to implement core elements of an ergonomics program but allow employers to carry out these elements under negotiated timetables with little threat of citation unless the company fails to comply with the overall agreement. OSHA attributes much of the significant progress selected employers have made in reducing ergonomic hazards to these agreements.
In addition, we interviewed officials from two states that have regulations requiring employers to establish worksite safety and health programs or committees; these officials view the regulations as a way to leverage existing resources to encourage employers to address ergonomic hazards, especially when MSDs constitute a significant portion of their injuries and illnesses. Officials said these programs require employers to take actions to reduce injuries and illnesses but allow the employers some discretion about what actions they will take. North Carolina offers a model of combining a flexible regulatory approach—as reflected in the CAP program, which has general requirements for implementing the core elements of an ergonomics program—with the provision of technical assistance through the state's Ergonomics Resources Center. Several employers involved with this effort said that the flexibility in these agreements and the availability of technical assistance were very helpful to them, because they were new to ergonomics and did not know where to begin. Although these initiatives reflect the value of providing employers flexibility, they may not offer a complete solution to protecting employees from MSDs. For example, while the Sisters of Charity facility demonstrated significant reductions in workers' compensation costs for MSDs and in the number of days employees lost from work, progress was more mixed in terms of reducing all injuries and illnesses, the average cost per MSD, and the number of days employees were assigned to restricted work activity. While these results would suggest that the facility has made some progress, it is not clear whether the requirements of the Maine 200 program ensure that this would be the case for every employer or that employees are adequately protected. Additionally, OSHA officials in Maine said the Maine 200 program required more resources than originally anticipated and that if they were to do this again, they might be more selective in the number of employers they targeted. Moreover, safety and health program requirements exist only in some states and often for selected industries, which limits the number of employers covered. The North Carolina initiative is small and new and has not yet been fully evaluated. OSHA's efforts to expand Cooperative Compliance Programs similar to Maine 200 to other states continue to evolve, as OSHA deals with the difficult issues raised by employers and labor advocates alike about the most effective ways to target employers for inclusion into these programs, provide employers flexibility to take action, and adequately protect employees. Additionally, labor representatives have stressed the need for OSHA to provide (1) the necessary guidance to employers who are targeted by these programs so they know what actions to take and (2) the tools to OSHA compliance officers to help them adequately evaluate employer efforts. In the absence of a standard, these programs rely largely on an employer's willingness to take action to reduce ergonomic hazards. Our case study employers reported that, although they had made significant use of in-house engineering and other resources to analyze problem jobs and develop controls, they did, on occasion, call upon outside resources, including consultants, for information and technical assistance. These officials said that other employers, especially smaller ones, may have an even greater need for help from outside resources to learn how to implement a program or develop controls.
This suggests a role for OSHA’s consultation assistance programs in providing, or facilitating the dissemination of, information and technical assistance. For example, 34 states have ergonomics resource personnel among their consultation program staff, according to a recent OSHA survey, and many states offer clearinghouses of information on MSDs, provide training, or have launched technical assistance initiatives specifically for ergonomics. Federal and state-operated OSHA programs also provide grants to employers—for example, to smaller employers to provide for ergonomic training, or, as in Oregon, to employers or employer groups to develop and implement solutions to workplace ergonomic problems that cannot be solved with available technology. The Washington state-operated program is conducting research to help employers address MSDs, and it has formed a task force to develop a strategy to reduce MSDs in high-risk industries. OSHA has also undertaken projects to help employers understand the financial benefits of taking action and to share practical experiences about how to implement an ergonomics program. At the facilities we visited, the impetus for developing an ergonomics program was often an initial concern with excessive workers’ compensation costs. At these facilities, this concern led to an examination of workers’ compensation and other data that ultimately identified MSDs as a cause of a major proportion of their total workers’ compensation costs. Later, to facilitate the tracking of their programs’ progress, these companies, either on their own or through their workers’ compensation insurers or third-party administrators, set up systems for tracking MSD-related injuries and associated costs. However, other companies, even if they have high workers’ compensation costs, may not have access to the information needed to determine whether they have a problem with MSDs and, if so, how to address the problem. Further, although employers are currently required to record information on workplace injuries and illnesses on the OSHA 200 log, the case study facilities have found that the log does not facilitate the collection of accurate data on MSDs. In 1996, OSHA proposed changes to simplify how all injuries and illnesses could be recorded on the OSHA 200 log. As a part of this proposal, OSHA specified criteria for recording MSDs that would include a diagnosis by a health care provider that an injury or illness is an MSD and an “objective” finding, such as inflammation, or a report of two or more applications of hot or cold therapy. These criteria would be applied equally to all cases involving any part of the body, including backs. This proposal would respond to concerns raised by the case study employers that the “repeated trauma” illness category in the OSHA 200 log does not adequately capture all MSDs. Currently, billions of dollars are spent by private sector employers on workers’ compensation claims associated with MSDs, and hundreds of thousands of workers each year suffer from MSDs. Our work has demonstrated that employers can reduce these costs and injuries and thereby improve employee health and morale, as well as productivity and product quality. More importantly, we found that these efforts do not necessarily have to involve costly or complicated processes or controls, because employers were able to achieve results through a variety of simple, flexible approaches. Our findings are based on a small number of cases and are not generalizable to all workplaces. 
However, the qualitative information provides important insights into employers’ efforts to protect their workers from ergonomic hazards. Additionally, experts from the business, labor, and academic communities reviewed the results of our case studies and said our findings on employer efforts to reduce MSDs were consistent with their experiences. Our work also found that these facilities’ programs included all of the core elements highlighted in the literature and by experts as key to an effective program—management commitment, employee involvement, identification of problem jobs, analyzing and developing controls for problem jobs, training and education, and medical management—with the elements customized to account for local conditions. Uncertainties continue to exist about particular aspects of MSDs that may complicate regulatory action by OSHA, and our analysis does not allow us to draw any conclusions about whether a standard for MSDs is merited. However, any approach OSHA pursues to protect workers from ergonomic hazards that sets a well-defined framework for a worksite ergonomics program that includes these elements while allowing employers flexibility in implementation would be consistent with the experiences of these case study employers. We obtained comments on a draft of this report from the Department of Labor’s Acting Assistant Secretary for Occupational Safety and Health. OSHA also provided technical changes and corrections to this report, which we incorporated as appropriate. In his comments, the Acting Assistant Secretary said that our report is a valuable contribution to the extensive literature on the benefits of ergonomic programs and that it reinforces conclusions found elsewhere in the literature that ergonomic interventions in the workplace significantly reduce work-related injuries and illnesses. He described the reduction in workers’ compensation costs for MSDs for these facilities as impressive and noted that these facilities had implemented substantially the same core elements as those OSHA has recognized as fundamental to ergonomics programs. Although the Acting Assistant Secretary described the report as consistent with OSHA’s ergonomics experience, he pointed out that our study cannot be used to draw any conclusions about the relative advantages of an incidence-based approach (identifying problem jobs on the basis of a report of injury or discomfort or an employee request for assistance) versus more proactive approaches. Although the facilities we studied used an incidence-based approach to identify problem jobs, the Acting Assistant Secretary expressed the view that incidence-based approaches are unlikely to work as effectively where there is a small number of workers in a job, as is typical of many small and medium-sized firms. We agree that our study does not allow us to compare the relative advantages of different approaches for identifying problem jobs. Rather, we found that these facilities believed an incidence-based approach was a viable way to start identifying where their problems lay. We also reported that these facilities are now moving to more proactive approaches to identify potential problem jobs, before complaints or discomfort occur. The comments of Labor’s Acting Assistant Secretary appear in their entirety in appendix VIII. We are providing copies of this report to the Secretary of Labor; the Acting Assistant Secretary for Occupational Safety and Health; state-operated program representatives; and others, upon request. 
If you have any questions on this report, please contact me at (202) 512-7014. Staff who contributed to this report are listed in appendix IX. We were asked to (1) identify the core elements of effective ergonomics programs and how these elements are operationalized at the local level, (2) discuss whether these programs have proven beneficial to the employers and employees that have implemented them, and (3) highlight the lessons to be learned from these experiences by other employers and by OSHA. We conducted our work in accordance with generally accepted government auditing standards between June 1996 and June 1997.
To identify the core elements of effective ergonomics programs, we reviewed the pertinent literature, including key reports, studies, and guidelines issued by the Occupational Safety and Health Administration (OSHA), the National Institute for Occupational Safety and Health, the American National Standards Institute, and others over the last decade on ergonomics and implementation of safety and health programs; the OSHA 1995 draft ergonomics standard; the American National Standards Institute Voluntary Draft Standard on musculoskeletal disorders (MSD); public comments received in response to OSHA's 1992 Advance Notice of Proposed Rulemaking for an ergonomics standard; OSHA's settlement agreements regarding MSDs; and other OSHA efforts leading up to the draft standard. We also interviewed and obtained data from experts in ergonomics and related fields and from representatives of the employer and labor communities with experience in implementing such programs.
To identify how these elements were operationalized at the local level and determine whether these programs have proven beneficial, we interviewed and obtained data from experts known for their research on the costs and benefits of these programs to obtain information on how employers can measure the effectiveness of programs; interviewed Bureau of Labor Statistics (BLS) officials about their efforts to track injuries and the costs of those injuries and obtained information on workers' compensation costs; selected facilities of five employers that experts believed to have fully implemented programs and that had achieved reductions in workers' compensation costs resulting from MSDs and conducted case studies between January and February 1997 to obtain information about their experiences implementing these programs; administered a results survey to the selected facilities to collect data used by these facilities to measure their success, such as data used to track program progress, and information pertinent to the evaluation of these data, such as workforce size (we did not independently validate these data); visited each of these facilities and interviewed facility management, other officials responsible for or involved with the ergonomics program, and staff-level employees, following a detailed protocol that obtained information on how core elements were implemented and that identified results achieved, difficulties in implementing the programs, barriers faced, lessons learned by the employers from their experiences, and employers' views of OSHA's and others' roles in reducing MSDs; obtained additional results information in order to corroborate information gained during interviews, as well as documentation of the program, training provided, and information provided to employees about the program; and interviewed pertinent officials from the corporate headquarters about the selected facilities' experiences compared with those of the employers' other facilities.
To identify the lessons learned from employer experiences and the implications for OSHA strategies to reduce MSDs, we obtained case study employers' views on OSHA's role in reducing MSDs on the basis of employers' experiences; interviewed officials in selected states that operated their own safety and health programs—California, Maryland, Michigan, Minnesota, North Carolina, Oregon, Washington, and Virginia—and obtained information about their efforts to encourage employers to reduce MSDs; reviewed the benefits and disadvantages of these approaches in light of our case study findings; conducted on-site interviews with officials from North Carolina and California to discuss the merits and disadvantages of their particular efforts—an ergonomics resources center in North Carolina and a standard for repetitive trauma in California—to reduce MSDs; interviewed various OSHA officials, officials from Labor's Solicitor's office, and other Labor officials to obtain information on Labor's efforts to encourage employers to reduce MSDs; interviewed OSHA officials in Maine to obtain information on the merits and disadvantages of the Maine 200 program; reviewed the status of Labor's past efforts to reduce MSDs, including employer challenges to Labor's use of the general duty clause for MSDs and to other OSHA programs; and reviewed our results with several panels of business and labor representatives and noted experts in the field of ergonomics.
Through interviews, a review of the literature, and requests for nominations posted on trade association bulletin boards, we identified 132 employers that experts believed had made gains in reducing workers' compensation costs associated with MSDs. We used a multitiered screening process to select the five case study facilities. We had decided that three of our five case studies would be in the manufacturing industry, since that industry has had the longest experience with MSDs. BLS 1994 data reported this industry had the highest number of occupational injuries and illnesses involving days away from work for repetitive motion, and OSHA had targeted sectors of this industry in the early 1990s for the presence of ergonomic hazards. We decided that the other two case studies would be in industries where concerns about emerging ergonomic hazards were increasing. BLS 1994 data showed that other industries (such as services, retail trade, and communications) known for office environments and the use of computers were reporting high rates of illnesses due to repeated trauma, and interviews with experts and a review of current articles in the press revealed increasing concerns about hazards in the office environment. There was also concern about the hazards in the health care industry; in fact, in 1996, OSHA instituted an initiative to provide training to nursing homes to reduce injuries. As a result, we decided the other two case studies would include an employer whose employees worked largely in an office or computer environment and an employer in the health care industry. We categorized the 132 nominations by manufacturing and other industries. Focusing on the nominations in the manufacturing industry, we narrowed the selection to 25 employers on the basis of the data available at that time about the employer's program; general knowledge of the employer's safety and health practices; and other factors, such as whether these employers had already been subjects of other case studies.
We discussed each of these 25 employers and then, through a multivoting approach, narrowed the selection to 11 employers that we would contact for further information. We followed the same procedure for the nominated employers in the other industries and narrowed the selection to 11 employers that we would call for additional information. We then attempted to contact the headquarters office of each of these employers and, using a screening protocol, obtained basic information about program implementation and results. We asked for additional information to allow us to make a final selection, including whether these employers used data to track their programs’ success, whether they believed the program was fully implemented, and any results data that had already been collected. Given the results of the screening protocols and information subsequently provided by these employers, including their willingness to participate, we selected five employers for our case studies: American Express Financial Advisors (AEFA), AMP Incorporated (AMP), Navistar International Transportation Corp. (Navistar), Sisters of Charity Health System (SOCHS), and Texas Instruments (TI). We asked each of these employers to nominate a facility that it felt had the most fully implemented program. Our work is based predominantly on case studies of five employers that believe their programs are effective at reducing workers’ compensation costs for MSDs. It was not possible for us to discern whether the characteristics of effective programs are unique to these programs. The information we present is not generalizable to the employer community as a whole. We reviewed the findings of our case studies with representatives from the employer, labor union, and academic communities who were knowledgeable about ergonomics and worksite ergonomics programs to gauge the plausibility of the information we collected. The first panel, held in San Jose, California, on March 18, 1997, was cosponsored by the Silicon Valley Ergonomics Institute, which is part of San Jose State University. The business panel members were predominantly high-tech computer manufacturers who had experience with or were interested in implementing ergonomics programs. Medical practitioners and researchers also sat on this panel. The second panel was held on April 8, 1997, in Washington, D.C., with members of the Center for Office Technology, which is a trade association representing employers in the manufacturing, communications, and other industries. The third panel was held on April 15, 1997, in Alexandria, Virginia, with selected members of the National Coalition on Ergonomics. We also reviewed our findings with a labor union panel on May 15, 1997, that consisted of employee representatives from the manufacturing, construction, and service industries, among others. These panelists said our findings regarding the level of effort being made by employers to identify and address MSDs, the results of the efforts, and the issues regarding the difficulty of measuring program effectiveness were generally consistent with their experiences and knowledge about employers’ current efforts to implement worksite ergonomics programs. We also provided the draft report to a selection of representatives from business, labor, and academia for their review and comment and incorporated their comments as appropriate. The following employers, unions, and associations were represented in these panels or reviewed our draft report. 
Significant differences in the data provided by the case study facilities make comparison among the facilities inappropriate. For example, data presented for each of the facilities vary depending upon when the facility believes the program was fully implemented (according to its own definition of what constitutes "fully implemented") and the availability of data. We made every effort to present cost and injury- and illness-related data starting with the year prior to the program's full implementation through 1996 in order to show changes at the facility during the program's operation. We worked with each of these facilities to agree upon a date that could be appropriately used as the year before the program's full implementation and obtain the appropriate data. However, in some cases, appropriate data were not available, and we were unable to present data prior to the program's full implementation. Table I.1 shows the years the programs were fully implemented at the facilities and the resulting years used for the data. Case study facility data also cannot be compared because each facility tracks different categories of injuries, illnesses, or both as MSDs. Table I.2 shows the categories used by the facilities. These categories include computer, mouse, and other repetitive motion injuries; sprains and strains in which a cause of injury is lifting, repetitive motion, pushing, or pulling; injuries due to repetitive trauma, carpal tunnel syndrome, thoracic outlet syndrome, tendinitis, epicondylitis, rotator cuff injuries, torn meniscus, and acute strains to the back; cumulative trauma injuries (for example, carpal tunnel syndrome and overuse syndrome), tendinitis, epicondylitis, and back injuries; and injuries from repetitive motion and body stress (from performing lifting tasks).
Formerly Investors Diversified Services, Inc., American Express Financial Advisors, Inc., was acquired by American Express in 1984 and provides financial planning services. AEFA is headquartered in Minneapolis, Minnesota, and employs about 8,000 nonunion employees in about 250 locations throughout the country. Most of the employees work at the headquarters office, and the majority of AEFA employees work in an office environment using computers, so they face similar types of hazards. To date, the ergonomics program has focused on these employees but is now beginning to study more closely employees who face lifting and other manual material handling hazards. The culture of AEFA has influenced program implementation. AEFA's efforts began many years ago as a commitment to improving employee comfort and satisfaction. AEFA officials told us they believed a significant portion of their employees' injuries, and resulting workers' compensation costs, was MSD-related, caused by repetitive motion, stress, strain, and lifting. AEFA has made a significant investment in training employees in the office environment to increase their awareness of hazards and the need for early reporting. Recent managerial and organizational changes, such as changes in program staff and the results of decisions by corporate management, pose new challenges for the continuity of the program. Program implementation also needs to be considered in light of the local facility characteristics. AEFA as an organization has experienced significant growth in staffing levels since 1988. Additionally, many of AEFA's employees work in the Client Service Organization (CSO), which is one of the most computer- and phone-intensive units in AEFA.
Employees in this unit are responsible for responding to client questions or problems, accessing information from their computers, and recording information in manual logs. Some employees spend 3 to 4 hours a day answering about 30 to 40 telephone calls, while others average about 6-1/2 to 7 hours per day on the telephone answering 80 to 100 calls. Issues related to workload and increased staffing levels present special challenges to the program; officials told us these issues are more difficult to address than are physical workplace hazards. The current ergonomics program at AEFA was fully implemented in 1993, when a full-time ergonomist and other ergonomics staff were hired, training was provided to all employees, and an effort was made to infuse ergonomic principles into equipment purchase and design. The current program has evolved from a decade of effort originally based on the goal of making AEFA “the best place to work” by removing employee discomfort and reducing workers’ compensation costs associated with MSDs. AEFA started to address ergonomics in 1986, when it established an ergonomics task force and began conducting a limited number of workstation evaluations. In 1990, it hired a consultant to provide ergonomics awareness training to selected departments that faced ergonomic hazards. AEFA’s safety department began to receive employee complaints about physical discomfort and requests to evaluate their workstations to improve the layout, which officials believed was at least partly the result of this training. AEFA staff tried to accommodate these requests but were unable to keep up with the demand. Additionally, in 1992, workers’ compensation costs for MSDs increased significantly. Then, after the 1993 budget had been approved, the director of support services decided to establish an ergonomics function in his department. Assuring top management that this action would not affect budget or personnel ceilings, he reallocated a portion of his furniture budget to support a full-time ergonomist to be responsible for the program. This ergonomist was hired in 1993 and took the lead in implementing the program. A major staff reorganization also provided the opportunity to develop an ergonomics function. This reorganization required a physical relocation to new space and new furniture. In determining what type of furniture to obtain, the purchasing, real estate, and facilities departments believed that, if AEFA could buy furniture that could be easily adjusted for different employees, AEFA could reduce the costs associated with retrofitting workstations every time employees moved. Because AEFA employees move offices or work locations quite frequently (referred to as the “churn” rate), costs associated with these moves can be significant. This adjustability would also make the furniture “ergonomic”; that is, it could be appropriately adjusted for each employee and provide additional savings from reduced discomfort and reported injuries. AEFA’s ergonomics program is led by the ergonomics staff (the ergonomist, the ergonomics specialist, and a half-time administrative assistant) and is currently located in the support services department. Various other departments work with the ergonomics staff (such as the real estate, purchasing, facilities, and risk management departments) to design equipment standards, purchase equipment, adjust workstations, and track workers’ compensation claims and costs. Management commitment to the ergonomics program at AEFA is demonstrated in a number of ways. 
AEFA has no formal written program laying out the elements of its ergonomics program. AEFA officials told us a written program is not as key to daily program operations as is the information disseminated during the training and discussed in the employee guidelines, which are provided to each employee (see the training and education section below). Primary among the ways AEFA has demonstrated management commitment has been the assignment of staff—the ergonomist, the ergonomics specialist, and the administrative assistant—to be responsible for the program. The ergonomics staff identifies problem jobs, conducts workstation evaluations, develops controls, provides training to employees, and tracks information about what training and services employees have been provided. Various employees we interviewed said they knew whom to call when they had a question or complaint; the response was quick; and, in most cases, necessary changes were made in a reasonable period of time. AEFA has also integrated ergonomic principles into the purchase and design of equipment. For example, AEFA assembled a team of employees (for example, the ergonomist, officials from the real estate department, and representatives from various on-line jobs) to select chairs to offer to all employees. This team reviewed available information and selected several potential chairs, which employees then tested and rated. On the basis of employee feedback and other criteria (such as delivery time and warranty), the team selected for purchase the two highest rated chairs. In so doing, AEFA reduced purchasing costs, by buying in large quantities, as well as increased employee comfort. In much the same manner, a team was assembled to design and select new adjustable furniture for private offices. The team, which included the ergonomist, developed specifications for the furniture, then the purchasing and real estate departments worked with a vendor to develop furniture that met these specifications. In the end, AEFA was able to buy this adjustable furniture for about the same price as other furniture, while it also increased comfort, reduced future injuries, and now expects to save additional resources from not having to retrofit furniture every time employees relocate. AEFA also has invested significant resources to train employees. Office ergonomics training is strongly encouraged, and employees generally are not able to have their workstations adjusted by the facilities department without first attending training. Additionally, several of the line managers we spoke with said they encourage their employees to go to ergonomics training if they believe any productivity or quality problems may be due to ergonomic hazards. Moreover, many of the employees we spoke with told us they feel their managers take training seriously and encourage them to attend training and obtain the necessary ergonomic equipment to improve comfort. This training is offered every week for 1-1/2 hours—more time than is devoted to any other subject of training, according to AEFA officials. AEFA officials reported that about 70 percent of the headquarters staff have received training since 1993. AEFA does not use employee committees to identify problem jobs or develop controls. Instead, AEFA has established procedures that enable employees to directly access services. 
For example, at AEFA, employees are encouraged to attend the weekly ergonomics training, which provides employees information about office ergonomics and how to maintain comfort and health while working on computers. Additionally, during training, employees are measured for appropriate workstation setup (for example, chair height when sitting) and asked to complete an anonymous discomfort survey so that the ergonomics staff can obtain information on the extent to which employees are experiencing discomfort on their current jobs, and on what body parts they are experiencing that discomfort. This survey has also been provided to a random sample of employees annually since 1993. The results of this survey are used to track program performance and, in some cases, identify problem jobs. Additionally, at the end of each training session, employees are asked to provide feedback on the quality of the training received and whether they anticipate making changes to their daily work as a result of the training. Employees also have direct access to ergonomic services through a process that allows them to order computer accessories (such as foot rests, wrist rests, document holders, and monitor risers) from a standard listing. Costs for these accessories are not charged back to the employees’ home work area; instead they are paid for by the real estate department. Employee requests also trigger workstation evaluations, and, during these evaluations, employees also are asked for their input about controls they believe would be appropriate. Employees we interviewed acknowledged their responsibility to look for ergonomic hazards and apply ergonomic principles to their work habits. AEFA identified problem jobs primarily on an incidence basis. In other words, most of AEFA’s efforts result from a report of injury or discomfort or an employee request for assistance based on other reasons. AEFA officials said reports of discomfort and employee requests account for the majority of workstation evaluations performed. On a more proactive basis, AEFA strongly encourages any employee who is relocating to attend training in order to be measured so the facilities department can set up the employee’s new workstation appropriately. The ergonomics specialist also regularly walks the floor to look for potential problems. Moreover, officials told us that AEFA builds in what it learns to furniture and equipment design. At AEFA, a simple system has been established to ensure that a problem job is identified when an injury is reported. When an employee reports an injury to the risk management department, the department fills out a “First Report of Injury” form. If the risk management department determines the injury was due to ergonomic hazards, it forwards the form to the ergonomics staff. After receiving the form, the ergonomics staff contact the employee (after the employee has returned to work, if appropriate) to schedule a workstation evaluation. There is also an informal system to identify problem jobs when no injury has occurred but employees are feeling discomfort or want an evaluation. Employees can request a workstation evaluation through a phone call or an E-mail message to the ergonomics specialist, or by scheduling the evaluation on the ergonomics specialist’s electronic calendar. In some instances, AEFA has also used the results of the discomfort surveys to identify problem jobs. 
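To make the intake process just described concrete, the sketch below models in Python how an incidence-based system like AEFA's could log a report of injury, discomfort, or other request for assistance and place it in a first-come, first-served queue for a workstation evaluation. This is a minimal illustration only, not AEFA's actual system; the class names, record fields, and routing rule (for example, EvaluationRequest and IntakeQueue) are hypothetical.

# Hypothetical sketch of an incidence-based intake queue; names and fields are
# illustrative and are not drawn from AEFA's actual system.
from dataclasses import dataclass
from datetime import date
from typing import List, Optional


@dataclass
class EvaluationRequest:
    employee_id: str
    source: str                      # "injury", "discomfort", or "assistance"
    received: date
    ergonomic_hazard: bool = True    # injury reports are screened by risk management
    scheduled_for: Optional[date] = None


class IntakeQueue:
    """First-come, first-served queue of workstation evaluation requests."""

    def __init__(self) -> None:
        self.pending: List[EvaluationRequest] = []

    def route(self, request: EvaluationRequest) -> bool:
        # An injury report reaches the ergonomics staff only if risk management
        # judged the injury to be due to ergonomic hazards; discomfort reports
        # and other requests for assistance go straight into the queue.
        if request.source == "injury" and not request.ergonomic_hazard:
            return False
        self.pending.append(request)
        return True

    def schedule_next(self, on: date) -> Optional[EvaluationRequest]:
        # Evaluations are scheduled in the order in which requests arrived.
        if not self.pending:
            return None
        request = self.pending.pop(0)
        request.scheduled_for = on
        return request


if __name__ == "__main__":
    queue = IntakeQueue()
    queue.route(EvaluationRequest("A100", "injury", date(1997, 1, 6)))
    queue.route(EvaluationRequest("A221", "discomfort", date(1997, 1, 7)))
    first = queue.schedule_next(on=date(1997, 1, 8))
    print(first.employee_id, first.scheduled_for)    # A100 1997-01-08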
The ergonomics staff respond to every request for an evaluation (whether due to an injury, report of discomfort, or other request for assistance) within a few days, typically on a first-come, first-served basis. Several employees we spoke with said the ergonomics staff usually perform evaluations within 48 hours of the request. AEFA officials emphasized that, in most cases, they do not do job analysis but instead perform workstation evaluations, and the process used is simple and informal. The process used to develop controls is also typically informal, relying on in-house resources, such as the employees doing the work or staff in the facilities department. AEFA has implemented a mix of controls, focusing on those that increase employee comfort while using computers. Appendix II profiles some of the controls AEFA has implemented. At AEFA, workstation evaluations are typically performed rather than job analysis. AEFA officials said the reason for this is that they focus primarily on identifying what changes need to be made to the physical characteristics of a workstation to make the employee more comfortable performing the tasks. In so doing, certain risk factors (such as awkward postures) may be eliminated, but others (such as repetition) may remain. A job analysis would assess whether the actual job tasks should be changed to reduce hazards associated with that particular job. The ergonomics specialist conducts about 10 workstation evaluations a week during two set periods (or at other times, if neither of these periods is convenient for the employee). During these evaluations, which take about 30 minutes, the ergonomics specialist interviews the employee, watches him or her perform the job, and determines whether he or she is performing any activities outside of work that may be contributing to the discomfort or injury. When the evaluation is triggered by an injury, the ergonomics specialist follows a questionnaire that collects information about the job (such as whether the workstation is shared, what types of tasks are performed, and how often tasks are performed) as well as about the workstation itself (such as height of the work surface, location of the keyboard and mouse, and height of the monitor). The questionnaire also asks for information about the presence of risk factors for particular parts of the body. As a part of this questionnaire, the employee is asked to provide information about what tasks he or she believes contributed to the discomfort. A less detailed version of this questionnaire is used for evaluations triggered by reports of discomfort or requests for assistance. In some cases, AEFA has done job analysis for problem jobs identified through the discomfort survey. Officials said a job analysis studies the actual tasks of the job and work organization and determines whether actual job tasks should be changed to reduce hazards. AEFA analyzed the CSO job categories several years ago, a task that included interviewing the employees working in these positions, evaluating the job tasks, and determining what type of equipment and furniture would be best suited for these tasks. Additionally, the ergonomics staff is currently looking for controls that reduce or eliminate the hazards associated with a mailroom job that requires lifting often-heavy packages out of a large mail bin. Officials said they would like to do more job analysis so that problem jobs could be addressed on a broader basis. However, this would require additional resources that are not necessarily available.
AEFA officials described their process for developing controls for problem jobs as “informal” and using in-house resources. AEFA takes this approach to have the resources available to provide some type of control for every job it evaluates. The ergonomics specialist uses the information obtained during the evaluation to develop and implement controls, often brainstorming with the affected employee or relying on in-house expertise. Because most employees covered by the program face similar computer-related hazards, in many cases, controls have been developed by first determining whether employees have the equipment available from the approved computer accessories listing. If necessary, AEFA works with its real estate and purchasing departments to design or obtain a piece of furniture or equipment that is not already available in-house. If the ergonomics specialist recommends controls such as taking rest breaks, the employee and supervisor are supposed to work together to achieve this. If adjustments to the employee’s workstation are required, the ergonomics specialist will put in a requisition to the facilities department to adjust the workstation, which is typically done within a week. To ensure that these controls are effective over the long term, AEFA has developed a database that contains the results of each workstation evaluation performed. Each employee’s “profile” (that is, workstation measurements, preferences such as left- or right-handed mouse, appropriate monitor height, and equipment used) is kept in this database; currently the database contains about 4,000 employee profiles. The availability of this information means that the facilities department can set up an employee’s workstation correctly the first time when an employee relocates. This ensures that employees continue to work in appropriately designed workstations and eliminates “post-move” adjustments (readjusting the workstation after the employee has moved in). Officials said they follow up if employees continue to feel discomfort or if injuries continue to be reported. For workers’ compensation cases, the ergonomics specialist follows up monthly to update the questionnaire used during the first evaluation. This iterative approach is important when financial or organizational issues affect the implementation of controls. For example, a number of employees still do not have adjustable furniture, because it is not feasible from a cost perspective to replace all of the existing furniture at once. Instead, AEFA is gradually providing this furniture to more and more employees. AEFA has implemented a mix of controls, primarily focused on improving the comfort of employees working with computers. In many cases, these controls can be considered “low-tech” engineering controls, since they did not change the job or the employee’s tasks. For example, AEFA has provided ergonomic chairs to employees and adjusted workstations (for example, adjusting work surfaces, moving equipment, repositioning monitors, or providing corner work surfaces). AEFA has also provided articulating arm rests to selected employees. These arm rests fasten to the edge of the workstation and allow the employees to rest their forearms on a moveable padded support while using the mouse. AEFA has also used administrative controls, such as encouraging employees to take stretch breaks and providing information and training. For example, AEFA published guidelines that provide information about the best colors to use on monitors for the best viewing. 
Many of the computer accessories supplied serve as personal protection equipment—such as wrist rests, foot rests, and holders to support documents referred to while keying. AEFA has also provided information to managers about the processes they should follow to ensure employees receive training. However, several employees said workload demands and cubicle size affected their ability to implement certain ergonomic practices, such as taking breaks or putting their monitors in an appropriate location. Training is the cornerstone of AEFA’s program. Part of the reason training plays such a major role in the program is that most of AEFA’s headquarters employees work in an office environment and therefore face similar computer-related hazards. Office ergonomics training is taught by the ergonomics specialist for 1-1/2 hours every Thursday; this module has also been built into orientation training for selected employees. The training provides employees information on what they should do to make their workstation more comfortable, including how they should adjust their chairs and monitors, how they should use the phone, and the importance of reporting symptoms and pains early. During this training, employees are also measured so their workstations can be set up properly and are asked to fill out the discomfort survey as well as the feedback survey on the quality and effectiveness of the training. AEFA has also recently begun to provide training on proper lifting techniques to employees who face hazards associated with manual material handling. To supplement this training, AEFA has provided written employee guidelines and a video, which cover much of the same information as is provided in the training. The ergonomics specialist also uses E-mail and other electronic media to send out messages about ergonomics and the availability of training. AEFA’s ergonomics program has established links with its medical management staff (in-house risk management officials as well as local health care providers) to ensure early reporting and prompt evaluation of injuries. Through the training and discomfort surveys discussed above, AEFA emphasizes the importance of early reporting. The risk management department, which is responsible for tracking workers’ compensation costs, can also trigger a workstation evaluation by providing the First Report of Injury form to the ergonomics staff when reported injuries are believed to be due to ergonomic hazards. To ensure prompt evaluation, AEFA has identified local health care providers with expertise in diagnosing and treating MSDs that employees can use if they desire. AEFA has also encouraged these health care providers to visit the facility and become familiar with AEFA’s operations to understand what AEFA employees do and how AEFA can accommodate any medical restrictions. AEFA also uses transitional or restricted-duty assignments to return employees to work as soon as appropriate and follows up on the employees’ recovery once they return. AEFA has classified a number of jobs as “temporary modified duty” positions, and officials said they have had a positive experience with bringing previously injured employees back to work. If an employee has been out for 10 days, AEFA contacts the health care provider and suggests various light-duty jobs the employee might be able to do. Once the employee has returned to work, the ergonomics specialist conducts a workstation evaluation to ensure that work conditions support whatever restriction the employee may have. 
AEFA allows employees a 12-week transition period to ease back into the job requirements, during which time the ergonomics specialist conducts monthly follow-up. If it is determined that the employee cannot perform the job tasks anymore, AEFA works with the employee to find another job, within AEFA if possible. AEFA officials said they are pleased with the results of the program, which they believe has helped reduce workers' compensation costs for MSDs and improve employee productivity and morale. However, they raised several issues that complicated their ability to tie the results directly to program efforts and that therefore should be considered when reviewing these results. As shown in figure III.1, AEFA reduced its costs for MSD workers' compensation claims by about 80 percent (from about $484,000 to about $98,000) between 1992 and 1996. Because the program has to date focused on employees who use computers in an office environment, AEFA tracks MSDs by looking at "computer and mouse injuries" and other "repetitive motion injuries not related to computer use." Additionally, the officials said the reduction in the average cost incurred for MSD claims (from about $9,100 in 1992 to about $1,700 in 1996, as shown in fig. 3) is an indication of AEFA's emphasis on early reporting and treatment of injuries before they become serious. (The data shown in figure III.1, which plots total dollars for MSD claims in thousands, include headquarters and field staff, since data are not available for headquarters employees only.) AEFA officials said several factors have affected AEFA's ability to reduce costs further and account for some of the yearly fluctuations. For example, the spikes in workers' compensation costs for MSDs in 1994 and 1996 (that is, policy years 1993 and 1995) may be the result of the emphasis on closing open cases. Additionally, there is often a lag between the time an injury occurs and when the costs appear. Costs also are significantly affected by any large claim, as is evident in 1996 (policy year 1995), when several major cases required surgery. Additionally, AEFA officials said the increase in claims in the first year after the program was fully implemented may be at least partly attributable to increased employee awareness. AEFA has also experienced a significant increase in staffing levels since 1988 as well as increased workloads. Officials said that the reductions AEFA has achieved should be considered in light of these factors. AEFA officials also said there is some question about what types of injuries should be considered MSDs. As long as there is no agreed-upon definition, it is sometimes difficult to know what to track and how to distinguish MSDs from other injuries. Although ergonomics staff rely on their workers' compensation database rather than on the OSHA 200 log data, they said the database in the past has not allowed them to break out data by geographic location or department or to track lost workdays. Working with its insurer, AEFA enhanced the database so that, starting in 1997, it now provides this information. As a financial institution, AEFA is not required to maintain the OSHA 200 log. However, AEFA's safety department does keep the OSHA 200 log voluntarily because AEFA is among the universe of employers included in BLS' Survey of Occupational Injuries and Illnesses, which collects data (from the OSHA 200 log) about workplace injuries and illnesses. Even so, the ergonomics staff at AEFA did not use the OSHA 200 log to track program progress for several reasons.
First, because the ergonomics staff were not responsible for monitoring the log, they were uncertain of how the data were input onto the log. Second, ergonomics staff believed it was more efficient to use the workers’ compensation database, since it allowed ergonomics staff to track injuries, claims, and costs. Finally, the safety officials who maintained the log said there is confusion about how to categorize ergonomically related injuries; for example, back injuries are not typically coded under the repetitive trauma category. Facility management officials said the ergonomics program has contributed to increased productivity and quality of work as well as employee morale. AEFA’s annual discomfort surveys have shown significant declines in the number of employees reporting discomfort in numerous body parts, including head, neck, back, shoulders, elbows, and wrists, between 1993 and 1996. Furthermore, according to results from numerous feedback surveys filled out by employees who have attended training since 1994, between 80 and 90 percent of employees believed that learning about ergonomics was an effective use of their time, and most indicated they planned to change some work habits on the basis of information received from the training. Because AEFA has not, to date, tracked the direct effects of the program on productivity and quality, officials said it would be very difficult to pinpoint any changes that resulted directly from the ergonomics program. However, in an effort to establish whether discomfort affects employee productivity, AEFA has revised its discomfort survey to ask employees the extent to which they believe their discomfort affects their productivity. The ergonomics staff hopes to use these results in future assessments of the ergonomics program’s effect on productivity. AMP Incorporated, which began operation in 1941, is a manufacturer of electrical and electronic connection devices. AMP supplies connectors to a wide variety of industries, including automotive, computer and office equipment, and consumer and home electronics industries. AMP employs 40,800 employees in 212 facilities, with subsidiaries in 40 countries. The Tower City facility, which began operation in 1972, stamps metals with mechanical presses to form electronic terminals and connectors. The majority of employees are die machinists and mechanics. The dies are metal blocks, shaped through a grinding process, that fit into the mechanical presses for use in stamping connectors into any one of a wide variety of forms, depending upon the particular application of that connector. Current employment at the Tower City facility is approximately 300. None of the workforce is unionized. AMP’s corporate culture allows for a decentralized approach that provides business groups and local facilities flexibility to organize safety and health activities in order to achieve production goals. As a result, a lot of variation in operations exists among facilities, and this is reflected in the ergonomics efforts. This variation in ergonomic programs across facilities is also attributed by AMP management to business conditions, which affect the level of investments for ergonomics, as for any other initiative, and to local cultural and regulatory conditions. For example, facilities located in states where some types of MSDs are not compensable may have less incentive to reduce these injuries. The ergonomics program at Tower City was fully implemented as of 1993, when the facility formed an ergonomics team. 
The team was formed in response to the global safety department’s promotion of ergonomics efforts across the company out of its concern regarding rising workers’ compensation costs for MSDs. The strategy of the global safety department was to promote and train local ergonomic task teams in each of AMP’s facilities. AMP’s ergonomics efforts, including those at Tower City, appear to have been evolving since the late 1980s, when the global safety department began offering ergonomics training courses. Corporate productivity initiatives were also being launched, and business groups across AMP were forming teams of employees to get them more involved in production activities and to identify production problems. The heart of the ergonomics program at Tower City is the value-added manufacturing (VAM) team for ergonomics. This team is composed of employees from a wide variety of departments—including tool and die making, maintenance, and packaging—and is led by an industrial engineer. The team is responsible for identifying problem jobs and developing controls. The global safety department serves in a consulting capacity to the different teams and facilities across AMP for all safety and health issues, including ergonomics. The global safety department has a total of nine staff, six of whom are professional staff. In addition, the department provides training and administers the corporationwide safety audits of all facilities, of which an assessment of ergonomic activities is a small part. In addition to global safety staff, there are environmental safety and health coordinators across AMP who report to individual facilities and business groups as well as overseas operations. Management commitment to the ergonomics program at Tower City is demonstrated in a number of ways. Primary among them is the assignment of staff—to the ergonomics team—specifically to address ergonomic hazards. Corporationwide accountability mechanisms are in place in the form of a safety audit, the recent integration of an overall safety goal into AMP’s pay-for-performance system, and recommended criteria to help develop performance measures. An AMP-wide safety audit, the Safety Assessment of Facility Excellence (SAFE), helps ensure accountability for the ergonomics program, among other safety efforts, and can be used by facilities to conduct self-assessments of their safety programs. For example, SAFE includes questions on whether an ergonomics team has been established, routine workplace inspections for ergonomic opportunities are being conducted, and specific worksites where MSD risks or symptoms have been identified are being evaluated. Additionally, the 1997 overall safety goal of one accident involving lost or restricted days per 100 employees has been integrated into AMP’s pay-for-performance system. This goal was based on the experiences of other employers in this industry who are members of the National Safety Council. Finally, suggested criteria or activities, some of which are ergonomic-specific, were recommended by the global safety department to the local facilities to help them develop pay-for-performance measures that are meaningful at the local level and that contribute toward this overall safety goal. An ergonomic criterion, for example, is whether or not ergonomic teams have been recruited and trained at each local facility to evaluate job tasks. Ergonomic principles are also integrated into the purchasing of tools, equipment, and furniture and the design of new facilities. 
Tower City works closely with its suppliers to test and evaluate a variety of ergonomic tools and equipment before purchasing these items. For this purpose, Tower City has set up Ergonomic Prototype Work Centers in virtually every work area to test new products and controls, and to obtain employee acceptance of new controls. AMP’s corporate facilities services center has developed a catalog of furniture that is modular and adjustable, and global safety has recommended that individual facilities order items from this catalog. In designing a new, larger facility in nearby Lickdale, Pennsylvania, where operations at Tower City and another facility will be combined, focus groups were formed to provide input so that ergonomic principles, among other design considerations, would be addressed. Resources are also made available for the ergonomics program. The team leader said that most of the team’s suggestions for controlling problem jobs are approved at the facility level and that a written justification and approval from a higher level of management are needed only when a capital investment of $2,000 or more is involved (which is the case for all investments). When developing the cost justification, the ergonomics team routinely includes an estimate of the cost of MSDs should controls not be implemented. AMP has a written program in the form of a section in its safety manual, although this document is not key to program operations at the facility level because facilities are given considerable flexibility to implement ergonomics programs as they see fit. This section in AMP’s “124 Specification” identifies specific areas of responsibility to be assumed by local facilities and various departments to address ergonomic hazards. For example, local facilities are encouraged to perform routine, periodic workplace inspections for ergonomic hazards as part of the facilities’ ongoing loss prevention efforts, and the facility services department is responsible for the selection of adjustable office furniture. In addition, the global safety department is in the process of developing guidelines that include ergonomic activities to help local facilities develop or improve their safety programs. The ergonomics VAM team drives the effort at the Tower City facility. About 12 employees (referred to as “associates”) serve as team members and are responsible for identifying and prioritizing problem jobs as well as developing controls for these jobs. Both the team leader, who is an industrial engineer, and the secretary of the VAM team are elected. One member of the ergonomics team is assigned to each project that the team, after prioritizing, agrees to take on. In this way, projects are “championed” by individual team members. The team meets biweekly during work hours because weekly meetings were found to be too time consuming. Employees are involved in an ad hoc fashion as well. Any employee can choose to participate on the ergonomics team on a project-by-project basis if, for example, the team is trying to develop controls for that employee’s job. Many employees on problem jobs are interviewed by members of the team who are investigating the problem jobs, and these employees are the source of ideas for many of the controls developed. Procedures have been established so employees can directly access ergonomic services, although these procedures are very informal at this facility. 
Employees can request that the ergonomics team look at their job by raising their concerns with a member of the team, their representative on the local safety committee, their supervisor, or their human resources representative. This is done by word of mouth. Although an analysis of the job is not automatically triggered, the job or task is added to a list of problem jobs, which the team then prioritizes. (A discussion of prioritization appears below.) In addition, the ergonomics team leader “walks the floor,” so he is accessible to employees should they be experiencing discomfort. As evidence of employee interest, the team leader said many associates voice their ideas informally for how jobs might be controlled or changed to reduce exposure to ergonomic hazards. The facility also has a suggestion system that rewards employees for suggestions regarding any aspect of the facility’s operations, including ergonomic improvements. There are several ways in which the ergonomics team learns that a job might be a problem. The following methods for identifying problem jobs are incidence-based; that is, they are based on employee reports of injury or discomfort or employee requests for assistance:

- Information from incident reports, which are completed whenever an accident or “near miss” incident has occurred or whenever an employee reports symptoms to a supervisor or the facility nurse (who is a member of the ergonomics team), is provided to the ergonomics team if ergonomic hazards appear to be involved.
- Periodic walk-through audits by AMP’s third-party insurance administrator alert the facility to opportunities to address ergonomic hazards. In some cases, insurance representatives may look specifically at those areas where workers’ compensation costs are high.
- Employees can bring up any discomfort they are experiencing with members of the ergonomics team, their representative on the local safety committee, their supervisor, or their human resources representative; ergonomics team members themselves identify problem jobs on the basis of symptoms they are experiencing or complaints they have heard from fellow employees.
- The suggestion system also may provide information on potential problem jobs.

Requests to the ergonomics team to address a problem job can also come from management of the facility or business group, the departments, the local safety committee, or one of the other 17 VAM teams at Tower City. The ergonomics team prioritizes the problem jobs: once alerted that a job may be a problem, the team decides which jobs it will analyze first. Each team member is asked to identify the two or three jobs he or she feels are most important to address. The problem jobs are then ranked on the basis of how many team members have identified them as important. Jobs in which MSDs have already occurred are typically given the highest priority. Because the team identifies its own priorities, this process also serves the purpose of keeping the team focused and interested. As indicated previously, individual team members are assigned to “champion” each selected project. Facility officials described their process for analyzing problem jobs and developing controls as “intentionally flexible” and “informal.” Analysis of a problem job might involve simply analyzing a particular job element or task that is thought to be the source of the problem. However, if a problem job is more complex or labor intensive, Tower City will undertake a more detailed job analysis.
Members of the team and management at the facility and corporate levels all emphasized that developing controls is not “rocket science” and that the answers typically come from employees on the production floor. The process of developing controls was described as “iterative” and involving “continuous improvement.” The ergonomics team leader said that the team’s work is never done, because new problem jobs or tasks are always being identified and controls initially introduced for problem jobs are not always adequate. A mix of controls is employed, but many were described by facility officials as “low-tech” engineering controls. To analyze a problem job, the team administers a one-page “Ergonomic Evaluation Form” to the employee on the problem job. The form is tailored to that specific job and asks “yes/no” questions about the employee’s ease and comfort when performing certain job tasks. After reviewing this form, a member of the ergonomics team interviews the employee and observes the employee performing the job. This Ergonomic Evaluation Form was initially longer and more complex but was subsequently simplified to encourage employees to fill it out. As an incentive, those who fill out this form are given the opportunity to test any new equipment or tools and are involved in the final decision about which equipment or tools to purchase. For jobs involving keyboarding, a one-page “yes/no” workstation checklist is used to record observations such as whether the chair and keyboard are adjusted properly or whether there is adequate variety in tasks performed throughout the day. If a problem job is more complex or labor intensive, Tower City will undertake a more detailed job analysis, which may involve videotaping the job and collecting more documentation. According to the ergonomics team leader, problem jobs are videotaped whenever possible because the team finds this helpful for identifying the ergonomic hazards of a job and possible controls. For example, the team has videotaped jobs in the re-reeling department, where connectors and terminals manufactured at this facility are wound onto reels for packaging and distribution; the packaging department, where boxes are stretch-wrapped for shipping; and the machine shop, where the grinding and milling of dies takes place. Additional documentation to develop controls for these problem jobs is collected using the “Job/Task Evaluation” form. This form is several pages long and provides space to record more detailed observations about the adequacy of the workspace, environmental conditions, and hand tools, as well as space for comments regarding possible controls. A physical assessment survey may also be administered to capture the frequency of discomfort in various body parts. This was done in the re-reeling department because that department historically had higher numbers of MSDs. Tower City also sometimes uses “process mapping,” which involves breaking down the steps of a job process and then, on the basis of that information, developing a new method of performing the same job that eliminates unnecessary steps. Although the focus of this type of job analysis is usually improving productivity, the ergonomics team recognizes this analytical tool as helping the facility make important ergonomic improvements. The controls themselves are developed informally, through “brainstorming” by the ergonomics team members, using the information collected from the job analysis, interviews with employees, and suggestions from employees on the production floor.
Although the ergonomics team takes the lead in developing controls, it has access to in-house engineering support. For example, the team developed a prototype cutoff device to reduce stress on employees from ripping paper placed between layers of connectors as they are wound onto reels. Because this device was found to be inadequate, the ergonomics team has requested assistance from the engineering group to develop a fully automated paper cutter. Although Tower City officials said many controls were developed internally, there were instances in which outside resources were integral. For example, the Tower City facility arranges with vendors or suppliers to provide tools and equipment at no cost so that the facility can test the products before making a purchase. These tools and equipment are then evaluated in the facility’s Ergonomic Prototype Work Centers, which are set up within each work area. By creating an Ergonomic Prototype Work Center in the tool and die work area, the ergonomics team enabled employees to experiment with different tools and different ways of arranging tools to eliminate awkward reaching. The facility now suspends the tools by magnetic strips within easy arm’s reach above the workstation. Also, tools are organized by specific jobs to make it easier for the employee to locate the appropriate tool. In addition, the ergonomics team uses electronic media, including the Internet, to obtain information on ergonomics and available tools. The ergonomics team leader then distributes this information throughout the facility, both for education and awareness purposes and as a source of ideas for controls. In select instances, the facility may also use the services of its third-party administrator’s loss control engineers to help identify controls, as it did in the re-reeling department (see app. II). The ergonomics team tries to address in some way every job that has been identified as a problem job. According to AMP officials, small and focused efforts to develop and implement controls were important in achieving early successes and convincing employees and management alike that the ergonomics program was worthwhile. Some of the initial projects of this team involved little or no capital investment, were relatively easy to develop and implement, and were inherently good candidates for success. The process of developing and implementing controls was described by facility officials as “iterative” and involving “continuous improvement.” Controls initially introduced for problem jobs might not be adequate or may introduce new problems, such as slowing operations down, which underscores the importance of going back to monitor the job once the controls have been introduced to see whether they are working and whether employees have accepted them. So, while controls already implemented have helped to reduce reports of MSDs in the re-reeling department, the ergonomics team continues to work to improve this job. For example, the introduction of vacuum lifts to lift boxes from the conveyor to a skid for packaging slowed the operator down while he or she manipulated the boxes so they were properly oriented before being placed on the skid. As a result, the ergonomics team is researching other, perhaps more efficient, possibilities for safe handling. The team also continues to identify other solutions to problem jobs and tasks, such as redesigning racks where reels are stored so that employees are not lifting the heavy reels as high.
This facility has instituted a formal follow-up process to determine whether or not controls introduced on problem jobs are working. The ergonomics team administers a postevaluation form, the same one-page form administered before controls were introduced, to document whether or not the ease and comfort of employees performing that job or job task have improved. Formal follow-up also occurs through performance agreements, which are drawn up for each major project undertaken by the ergonomics team and posted in a public area. These performance agreements require the team to document its desired and actual results for comparison, as well as its standards of performance or accountability. For example, one desired result was to establish a procedure for employees to obtain ergonomic chairs, with a performance standard of securing at least one chair per quarter. The ergonomics team documented the success of this project by developing criteria for individual employees to qualify for ergonomic seating, selecting a line of products, and establishing a system by which the team identifies seating requirements and counsels individual employees regarding appropriate ergonomic chairs. Sometimes the ergonomics team will also circulate a written comment sheet to employees to elicit feedback on the controls that have been introduced, as the team did for the re-reeling job. In addition, informal follow-up occurs through ongoing review of medical reports and walk-throughs conducted by members of the ergonomics team to determine whether or not employees continue to experience problems in jobs where controls have been introduced. A mix of controls is employed, but many were described by facility officials as “low-tech” engineering controls. For example, this facility uses mechanical arms to maintain tension of electronic connectors as they are reeled and has modified the tool and die workstations so that tools are suspended within easy reach. Sometimes administrative controls are used when engineering controls are difficult to implement or do not completely eliminate all ergonomic hazards. For example, in the re-reeling job, employees are rotated every 2 hours so they are not reeling the same product over long periods of time. General awareness training is provided only to members of AMP’s local ergonomics task teams (including Tower City’s ergonomics team). This training consists of a half-day course offering a basic overview of ergonomic principles. Global safety conducts this course and also follows up to see how well the teams are implementing their programs. Training provided to all employees is informal—through distribution of literature and promotion of the activities of the ergonomics team. Also, Tower City integrates ergonomics into ongoing worker training on all equipment. This is done by the facility’s equipment trainer, who serves as a member of the ergonomics team and is responsible for teaching all employees proper work practices and how to avoid ergonomic hazards. In addition, training is provided to each employee on a particular job when that job has been changed to reduce exposure to ergonomic hazards. Tower City emphasizes focused, specialized training for employees based on their respective roles in addressing these hazards. Training for engineers, supervisors, and members of the ergonomics team is offered through AMP’s Engineering Education Program and conducted by global safety staff. 
The courses include an “Introduction to Ergonomics,” which covers basic ergonomic design principles for machines, tooling, and workstations and the benefits of ergonomic design in relation to corporate strategic goals. An “Advanced Human Factors Workshop” offers in-depth discussion of human factors principles in design and task analysis. This course includes workshops in analyzing facility loss trends, conducting job analysis, implementing controls, and computing return on investment for management reports. Global safety has recently started to offer training in behavior-based safety management at several facilities. This training is intended to help staff identify the root cause of behaviors that lead to accidents or contribute to MSDs. This training will also cover how to document savings from changing behaviors. Because it has had a good business year, Tower City has been able to meet its targets for training this year. However, global safety staff have found training participation is affected by business conditions. In addition, sometimes it is difficult to justify training, including ergonomics training, during work hours. The result is that courses are often offered in the evenings, which can also limit participation. Strong linkages between Tower City’s ergonomics program and medical management staff have been established to ensure early reporting and prompt evaluation. An occupational nurse serves the Tower City facility and two other facilities. This nurse, along with other AMP nurses, reports to AMP’s department responsible for all health services. The nurse and supervisors try to document whether the source or nature of injuries is ergonomic-related. The nurse completes a medical report for every accident for which medical treatment is required, and space is provided for descriptive information to capture whether the problem may be related to an ergonomic hazard. Incident reports are also completed by the direct supervisor and reviewed by several managers before being sent to global safety for analysis. Poor workstation design and incorrect use of equipment or tools are among the hazardous condition categories that can be indicated. These reports are regularly reviewed by the local safety committee and the ergonomics team, and the nurse, as a participant in both groups, calls attention to problems related to ergonomic hazards. Although most of the care provided for MSDs is through referral to local health care providers, a list of several area physicians, known by AMP’s insurance administrator to be knowledgeable about MSDs and familiar with AMP’s operations, is provided to injured employees. The nurse works closely with these physicians when an employee is diagnosed with an MSD to develop appropriate treatment and to identify restricted- or light-duty jobs. Nurses and occupational therapists employed by the insurance administrator are also available to assist the facility nurse. These nurses will, on occasion, observe the employee doing the job in question to help the physician determine the exact nature of exposure. In addition, the facility nurse told us she conducts informal walk-throughs to increase her familiarity with the jobs and associated risks. Facility tours are also provided to physicians in the community. Tower City has a return-to-work policy to reduce workers’ compensation costs. Finding restricted- or transitional-duty jobs has not been difficult at this facility because there have never been many employees on this type of duty, according to facility officials. 
Only three staff are currently on restriction. In addition, Tower City can bring employees in on half shifts or restricted hours, and there are many opportunities for temporary assignments because of the variety of jobs within each department. In fact, this facility has always been able to place an injured worker in a restricted job within his or her same department. AMP officials said they were generally satisfied with the results of Tower City’s ergonomics program, which has sought to improve worker safety and health through reduced injury rates and lower workers’ compensation costs. However, officials raised a number of issues associated with Tower City’s ability to assess program performance. Global safety officials said that the identification of “metrics” by which to measure progress in safety and health has been a challenge for the company. This difficulty prompted the department to work to introduce safety goals into AMP’s corporationwide pay-for-performance system and to solicit local facilities to help develop meaningful measures. Workers’ compensation data provide some evidence that the ergonomics efforts at Tower City are helping to reduce costs associated with MSDs. To capture MSDs, Tower City tracks sprains and strains in which the cause of the injury is lifting, repetitive motion, pushing, or pulling. As shown in figure IV.1, Tower City has achieved a reduction in workers’ compensation costs for MSDs from about $73,000 in 1993 to about $28,000 in 1996. Additionally, during this same time period, the average cost for each MSD claim declined from $6,601 in 1993 to $2,512 in 1996 (see fig. 3). While AMP officials believe these data suggest improvements at the facility, officials emphasized it would be difficult to attribute all improvements to the operation of the VAM team, given other contributing factors. First, only a limited number of years of workers’ compensation data are available, and officials said it may take several years before real changes occur. Second, officials said there is often a lag in workers’ compensation data, and the injury may have occurred years before the costs show up in the data. This sometimes makes it difficult to interpret changes in workers’ compensation costs. Trends in overall injuries and illnesses from the OSHA 200 log are important because MSDs account for a significant portion of all injuries and illnesses at our case study facilities and because these data are what OSHA looks at when inspecting a facility. From 1993 through 1996, the facility’s rate of injuries and illnesses for every 100 employees, known as the incidence rate, declined from 12.8 to 7.1 (see fig. 4). The facility’s 1995 incidence rate of 5.4 is lower than the 1995 industry average of 7.1 for manufacturers of electronic connectors, according to the most recent available data. Additionally, Tower City reduced the number of lost days by 78 for every 100 employees from 1993 through 1996. In contrast, during the same period the number of restricted days increased by 21 for every 100 employees, which, in fact, may be the consequence of bringing more injured workers back to work (see fig. 2). However, the team generally does not use the OSHA 200 data to assess its progress, preferring instead to rely on the facility nurse to do so because she is knowledgeable about recording and interpreting the data. Tower City has also established a linkage between ergonomic investments and productivity or quality improvements.
By examining production bottlenecks, this facility has identified ergonomic hazards that contribute to the production problem. The facility used an analytical tool called “process mapping,” which involves describing each step of a job process and then, on the basis of that information, developing a new method of performing that same job process that eliminates unnecessary steps. Process mapping enables the facility to demonstrate how comparatively fewer steps (less time and shorter distances) are required to perform the same activity. For example, employees used to have to manually search through bins filled with numerous channels, or attachments, to locate, align, and fix a particular channel on a die to guide a newly manufactured terminal as it was re-reeled. Through process mapping, a new way of attaching the matching channel to the die earlier in the process was identified. In another application of process mapping, employees no longer have to crawl under the press to feed a vacuum hose to remove scrap material after connectors are stamped. A new extraction system has been installed underneath the press that automatically removes remnant or scrap metals. This improvement has also reduced the facility’s scrap rate and improved the quality of recovered metals. Worker morale has also improved, as reflected by employee interest and involvement in the activities of the ergonomics team. In general, the ergonomics program has been a vehicle to get employees more involved in how their jobs are performed, according to the team leader, as evidenced by employees’ significant use of the “suggestion system.”

Navistar International Transportation Corp. manufactures heavy- and medium-duty trucks, school buses, diesel engines, and service parts. Navistar has 10 facilities in the United States, Canada, and Mexico, with about 15,000 employees worldwide. The Springfield Assembly Facility assembles the heavy- and medium-duty trucks. Originally designed to produce pick-up trucks, the facility was built in 1967 and currently employs about 4,000 employees, most of whom work on the production floor assembling truck parts. About 80 percent of Navistar’s workforce is unionized and under contract with the United Auto Workers (UAW). Some office employees and security personnel are also unionized at the local level. The culture at Navistar has influenced the implementation of the ergonomics program. For example, the UAW bargaining agreement requires each facility to have an ergonomics program that includes employee involvement in the identification of hazards and selection of control methods; job analysis to identify ergonomic risk factors and target ergonomic interventions; training for employees; and active involvement of the medical department in the identification of problems, medical evaluation, treatment, rehabilitation, record keeping, and job placement of restricted workers, among other requirements. Navistar’s facilities have flexibility in how they carry out their ergonomics programs and achieve bargaining agreement requirements, safety and health standards, and injury reporting requirements. Thus, the programs differ somewhat from one facility to another. For example, only three of Navistar’s facilities have full-time ergonomists to lead the ergonomics programs. Additionally, because of experiences during program evolution, the membership of the ergonomics committees may differ from one facility to the next. Local facility conditions also affect program implementation.
A key feature of Navistar’s products is that they can be customized; this means that production lines and processes at the Springfield facility can change frequently. Additionally, because there is cyclical demand for any particular product, production line speeds can vary significantly. Both of these factors mean that jobs or job tasks may change every few months. This poses challenges for Springfield to identify particular problem jobs and ensure that controls are effective over the long term. Additionally, Springfield has hired relatively few new employees over the past 2 decades, and over the past several years its staffing level has remained fairly stable. As a result, the facility’s workforce is composed largely of men whose average age is 50. While the collective experience of this workforce helps to prevent injuries, it also may be problematic, because as employees age they may be more susceptible to injury. In 1994, Springfield did hire about 500 new employees, a large number of whom were women, but they were subsequently laid off throughout 1995. Because these employees were new and perhaps not used to these physical requirements, Springfield suffered increased numbers of injuries while they were on board. The current ergonomics program at Springfield was fully implemented in 1994 with the hiring of the current ergonomist. However, Springfield’s program has evolved over a decade of experimenting with a number of different ways to reduce ergonomic hazards and MSDs. Springfield began to implement an ergonomics program as early as 1984, when the UAW required Navistar, in its collective bargaining agreement, to establish a pilot ergonomics program. Navistar corporate officials said there were other influences that contributed to their decision to implement an ergonomics program, including witnessing other employers in the auto industry being cited by OSHA for MSDs, and being encouraged by a consultant who demonstrated ergonomics’ relationship to improved productivity and quality. The pilot ergonomics program was based on local ergonomics committees. Composed of line employees, these committees were tasked with looking for problem jobs and developing controls. However, the employees on these committees often lacked knowledge of ergonomics, lacked the engineering resources necessary to implement suggested controls, and found it difficult to meet because of workload demands. Additionally, Springfield also found there were too many employees on its committee to make it effective. As a result, Navistar and the UAW decided to restructure committee membership so that the only required members would be the local union safety representative and a management safety representative, with other employees brought in as appropriate. In 1991, Springfield decided to hire its first ergonomist to coordinate the ergonomics program. According to the facility manager, most of Springfield’s injuries with lost workdays are caused by ergonomic hazards. However, because the ergonomist reported to the engineering department, competing priorities often meant that ergonomics was not given the same priority as other engineering activities. Springfield subsequently decided to place the ergonomist in the safety department. According to Springfield officials, this organizational change was instrumental in ensuring the ergonomics program received the attention it deserves. Springfield’s ergonomics program is led by a full-time ergonomist and a local UAW representative (who works on ergonomics about 3 days a week). 
The ergonomist reports to the environmental safety and health manager, who reports directly to the facility manager. Other departments are involved with the program, such as the workers’ compensation branch (which tracks workers’ compensation costs), the medical department (which treats injured employees), and the in-house engineering staff (which helps design and implement controls). Management commitment to the ergonomics program at Springfield is demonstrated in a number of ways. Springfield has a written document that lays out the various elements of its program, but this is not key to the daily operations of the program. Instead, officials said other, more tangible signs are better indications of management commitment. Springfield has assigned staff—referring to the full-time ergonomist and UAW representative—to manage the program. Specifically, this ergonomics staff is responsible for identifying and analyzing problem jobs, leading efforts to develop controls for those jobs, and overseeing implementation of controls. Additionally, the ergonomist provides training to Springfield employees and develops ergonomic guidelines for them to follow. Navistar has also integrated ergonomic principles into corporate accountability mechanisms. For instance, Springfield is given a cumulative percentage reduction goal for injuries and illnesses. The percentage reduction is based on the number of incidents, the frequency of those incidents, the number of incidents with lost time, and costs for workers’ compensation. Springfield also uses 5-year strategic business plans that lay out goals and timeframes for completion of those goals. Achieving these goals contributes to compensation decisions affecting managers. For the last 2 years, these strategic plans have included goals for the ergonomics program that have been developed by the ergonomist and the UAW representative. The most recent plan calls for redesigning processes ergonomically to reduce injuries and costs associated with MSDs, training technical support staff on ergonomics, and reducing lost time days and dollars by bringing employees on workers’ compensation or medical layoff back to work. Springfield officials said including ergonomic requirements in the strategic business plan has brought ergonomics to the forefront and represents a tangible sign of management commitment. Ergonomic principles have also been incorporated into Navistar’s yearly safety audits. For the first time, in 1996, Navistar conducted a safety audit at each of its facilities that scored each facility on various safety matters, including ergonomics. Although the audit was predominantly compliance based (relating to, for example, record-keeping and maintenance issues), it also looked for evidence that an effective ergonomics program was in place—for example, that there was evidence of employee awareness about ergonomics, that processes were in place to evaluate repetitive trauma injuries, and that medical staff were involved in the program. The 1996 score will be used as a baseline to evaluate future performance, and Springfield’s progress relative to this baseline score will be included in future years’ injury and illness reduction goals. Springfield takes the results of this audit seriously; as a result of last year’s audit, Springfield created a management-level ergonomics committee to spread awareness of the ergonomics program. This committee also helps to ensure management support for the program. 
The committee meets bimonthly and includes representatives from each of the departments of the facility (primarily department heads or their designees). The committee reviews the status, feasibility, and appropriateness of various controls that have been suggested or implemented. The ergonomics staff also said that suggestions for ergonomic controls generally have been implemented, although recent budget restrictions have made it more difficult to justify all types of capital investments. However, if Springfield does not have the funds to obtain safety-related items, it can request that corporate Navistar pay for them. Cost justifications are typically required for ergonomic controls, as they are for all capital investments. To justify the purchase of the control, the ergonomist typically cites the costs of injuries or the potential costs of injuries if the control is not implemented. For example, in a cost justification for additional automatic lift tables (tables that keep supplies at an appropriate distance and level for employees by rising as the loads on them decrease), the ergonomist reported that these tables help to reduce shoulder and back injuries, which have cost the facility well over $200,000 a year in workers’ compensation costs. Navistar relies on committees to accomplish the employee involvement required by the collective bargaining agreement. Springfield’s primary ergonomics committee for identifying problem jobs and developing controls is purposely fluid, based on Navistar’s previous experience with large standing committees during program implementation. The only required members of this committee are the ergonomist and the UAW representative. Other employees (such as the employee doing the job, a line supervisor, an engineer, and the medical director) are pulled in on an ad hoc basis depending upon the particular job being studied and the expertise needed to develop a control. Officials said this type of committee works well because it is relatively small and focused on a particular job, so the analysis and control development can be done fairly quickly. Additionally, corporate officials said this approach allows Springfield to involve a large number of employees in identifying problem jobs and developing controls in a more efficient way than using a standing committee would allow. In some cases, Springfield has formed special committees to address particularly difficult jobs. For example, the “pin job” is considered the most onerous job in the facility. On this job, the frame of the truck is lowered onto the axle. Employees have to “manhandle” the frame so it aligns with the axle, while simultaneously manually hammering in pins that attach the frame to the axle. This job requires significant force, vibration, and awkward postures. Because previously suggested long-term controls for this job would require significant changes in the production process or in the design of the product, Springfield officials said they have recently created a new committee and given it 6 months and an “unlimited” budget to assess the job and develop alternative types of controls. Springfield has also established procedures that allow employees direct access to services. For example, employees can trigger a job analysis simply by submitting a “Request for Ergonomic Study” form to the ergonomist or UAW representative if they feel discomfort or just want to have an analysis performed. 
This one-page form elicits basic information about the employee involved (name, time of injury, or type of discomfort reported); the “ergonomic concern” being reported (that is, the action that has caused the injury, discomfort, or both); the area of the body affected; and any suggestions the employee may have to alleviate the ergonomic concern. In 1996, the form was revised to also request information on ergonomic risk factors present on the job (repetition, force, awkward postures, vibration, and lifting). Once the ergonomist or UAW representative receives this form, the appropriate employees are convened to conduct a job analysis. Springfield identifies problem jobs primarily on an incidence basis. In other words, Springfield’s efforts most often are the result of job-related reports of injuries or discomfort to the medical department but can also result from employee requests for job analysis. Springfield has implemented a simple system by which jobs are identified for analysis. Facility officials emphasized that this process must be simple in order to encourage employees to report their injuries or discomfort early. When an employee reports an injury or discomfort to the medical department (Springfield has an on-site occupational health clinic), the medical director evaluates whether the injury or discomfort was caused by an ergonomic hazard, and, if so, completes a Request for Ergonomic Study and gives it to the ergonomist or UAW representative. As noted above, employees or supervisors can also complete this form if they or their employees are feeling discomfort that has not yet resulted in a visit to the medical department or if other conditions exist that lead them to believe there are potential problems with the job. Employees can also informally tell the ergonomist or UAW representative about a problem job during their frequent walk-throughs of the facility without using the form to generate a job analysis. Springfield does not use a discomfort survey to identify potential problem jobs because the results are difficult to interpret, and a survey carried out by an intern several years ago identified those jobs that the ergonomics staff already knew were problematic. Officials said it is difficult to know whether the discomfort being experienced by employees on particular jobs is attributable to the employee’s aging, or whether it is in fact due to a particular job. Even if it could be determined that the job was causing the discomfort, because the nature of jobs changes frequently, it would be difficult to tell whether the discomfort was the result of the job itself or of the interaction between the employee and the job. Although Springfield has spent most of its time on incidence-based identification, the facility has recently started to identify problem jobs on a more proactive basis. The ergonomist asked all supervisors to identify problem jobs on the basis of those staffed mostly by employees with low seniority and those with high turnover. In a unionized environment, as employees gain seniority, they can “bid off” of certain less desirable jobs and onto more desirable ones. This means that those jobs done by employees with the lowest level of seniority are probably jobs that most employees do not want to do—and the probable reason for this is that there are ergonomic hazards on these jobs. Officials said using these indicators may be more appropriate than using risk factors. 
Virtually any job in a manufacturing environment involves risk factors, they said, so it would be prohibitively time consuming and expensive to use risk factors as a basis to identify problem jobs. Although the ergonomist and the UAW representative complete an analysis on every job for which they receive a Request for Ergonomic Study, they currently give the highest priority to those jobs on which injuries have already occurred or discomfort has already been reported to the medical department. The next highest priority is given to those jobs for which a large number of requests for job analysis have been submitted. At this time, the lowest priority is given to those jobs identified by supervisors on the basis of high turnover and low seniority. Aiding in this prioritization is a database developed by the ergonomist called the “Ergonomic Log Line Breakdown,” which tracks all requests for job analysis and provides information such as the employee who was involved, the time the injury occurred or discomfort was reported, the job the employee was working on, and the body part affected. Springfield’s process for analyzing jobs and developing controls was described as simple, informal, and purposely not paper intensive. The ergonomist pointed out that a company is less likely to analyze a large number of jobs if there is a lot of paperwork to do for each job analyzed. She said Springfield analyzes about 250 jobs a year, which would not be possible if a lot of paperwork was required. Officials said this process relies heavily on the in-house resources at the facility, such as the employees doing the job and facilities engineering staff. In some cases, a detailed analysis is done if the job is particularly complex. The ergonomics staff stressed that the process must be continuous, as it is not always feasible to correct all hazards on every job, especially the first time out. While some effort is always made to alleviate at least some of the hazards on the job, the process must ensure that the problem job is revisited as long as the problem continues to exist. Officials also said that most of the controls that have been implemented have been administrative or “low-tech” engineering controls. For a description of controls developed to eliminate ergonomic hazards associated with windshield installation, see appendix II. To analyze a job, the ergonomist or the UAW representative assembles a committee of individuals and watches an employee perform the job in question to get a good understanding of the job requirements and what may be causing the problem. In some cases, the analysis is based on the information already provided on the Request for Ergonomic Study form. Typically, the analysis does not involve breaking the job down into component parts, although the committee often studies problem areas, which are generally the “ergonomic concern” stated on the Request for Ergonomic Study form, such as lifting or reaching. If necessary, a more detailed analysis is conducted. Jobs are not videotaped, because that would violate provisions of the bargaining agreement, but if the job is particularly complex, the analysis process is lengthy, or a large number of people are involved, Springfield may use an additional form called the “Ergonomic Assessment Form.” This two-page form elicits additional information, such as the type of work being done (for example, hand-intensive and manual materials handling), the risk factors present, and the tools and parts used. 
This form is used by a sister facility for all of its job analyses; however, according to the Springfield ergonomist, it is not reasonable for Springfield to use this form because of the number of jobs analyzed each year. Once the committee has finished analyzing the job, it follows an informal process to develop controls. The officials told us no specific tools are used to develop controls. Instead, the process is fluid and varies depending upon the problem itself. In some cases, the employee, supervisor, or whoever submitted the Request for Ergonomic Study has already suggested a control based on his knowledge of the job. In other cases, the committee identifies other operations in the facility to determine whether their controls may be appropriate for this job. The officials said it is imperative that they “walk the floor” to understand what the jobs are and what types of controls may be effective. For example, for the cab part of the truck to be adequately attached to the frame, the cab must be positioned at a particular angle. To accomplish this, employees previously had to “jack up” the cab with a car-type jack numerous times a day and were experiencing back, shoulder, and other problems as a result. The UAW representative knew that employees on other production lines were using a hydraulic pump to lift up the cab and suggested to the employees working on this process that they look into whether this type of control would work. These employees are now using a hydraulic pump, and discomfort has been reduced. For more complex situations, the committee presents the problem to the in-house engineers and asks them to develop controls. For example, on the radiator line, employees had to attach a metal casing (called a “horse collar”) to the radiator, which was suspended from an overhead line. Because the holes on the casing and the radiator were not lining up properly, employees had to manually pry the components with a screwdriver to adjust the holes before inserting the bolts. A number of employees were complaining of fatigue and pain from this job, and there were quality problems because the bolts were sometimes inserted incorrectly. In this case, the in-house engineers designed a U-shaped “spreader bar” that precisely aligns the holes in the radiator with those in the casing. The spreader bar has eliminated the physical strain of the employees and also improved the quality of the work. Springfield officials said they used no specific threshold to determine whether and when a control should be put in place. In most cases, these are judgment calls based on several factors, such as the severity of the problem or hazard, the extent to which the problem can be fixed, and the time or resources needed to develop and implement controls. Because of the limited number of in-house engineers to design or implement controls, Springfield tries to prioritize controls on the basis of likely injuries and other costs if the job is not fixed. Facility officials acknowledged that the program is never completed and the ergonomics staff is always on the lookout for improving existing controls. However, follow-up is typically informal, as there are insufficient time and resources to formally follow up on all jobs where controls have been implemented. However, the Ergonomic Log Line Breakdown can help the ergonomist determine whether jobs that have been analyzed continue to be the subject of requests for ergonomic study. If they are, the ergonomics staff will continue to revisit those jobs. 
The iterative nature of the program is especially important because not every hazard on every job can be totally eliminated. Facility officials said a small number of the jobs they have analyzed could not be fixed, primarily because doing so would have been prohibitively expensive, requiring a change in the product or in the production process. However, even in these cases, as with the pin job, Springfield has made repeated efforts to reduce exposure to hazards through other means. The establishment of the committee to develop controls for the pin job is the most recent example of this iterative process. In some cases, it is difficult to implement controls immediately because of the complexity of the product, the customization of the product, or the facility layout. In these cases, changes must often be implemented when a production or schedule change takes place. This was the case with the change in how windshields are installed (see app. II). On the other hand, constantly making changes can make it difficult to know whether controls are working. Additionally, it is not always feasible or appropriate to take a control implemented on one job or workstation and implement it on all similar jobs or workstations. For example, Springfield currently has about 30,000 guns at the facility that are used to drill in bolts. Many of these guns are “impact” guns, which are very powerful but produce excessive vibration. As the impact guns wear out, Springfield is replacing them with “nutrunner” guns, which are less powerful but cause less vibration. Facility officials said it is not reasonable or feasible to expect Springfield to replace every impact gun immediately; moreover, in some cases, nutrunner guns are not an acceptable replacement for impact guns. Springfield has implemented a mix of controls, focusing on the most cost-effective controls in its efforts to at least partially address identified hazards on every job analyzed. The ergonomist estimates that only about 10 percent of the controls implemented have been engineering controls, and most of these have been considered “low tech,” because they have not been extremely costly and have not significantly changed the job. For example, Springfield has installed hoists to lift 120-pound fuel tanks and mechanical articulating arms to transport carburetors down an assembly line. These controls have eliminated the manual lifting and strain associated with handling these heavy objects. The facility has also installed automatic lift tables, which rise as the load lessens, to reduce reaching and bending by employees, and has improved the hand tools used to do the jobs. Springfield’s program also covers employees who work in an office environment. There, Springfield has provided ergonomic chairs, filters for computer screens, and articulating keyboard trays. Most of the controls Springfield has implemented are administrative controls or personal protective equipment. Administrative controls have included training for office employees and a guideline for engineers to use when designing products. Padded gloves, elbow supports, and other protective equipment are commonly used throughout the facility, especially in those cases, such as the pin job, where it has been difficult to address hazards through engineering controls. To date, Springfield has not provided basic awareness training to employees but has instead provided general information about ergonomics informally through posters, word of mouth, and pamphlets.
While Springfield would like to provide awareness training to all new employees and employees working on the production floor, there has been some difficulty taking employees off the floor during work hours for training. Springfield has focused on providing targeted training to office employees and production supervisors. For example, the ergonomist provided training to office employees to help them understand how to arrange their workstations to be more comfortable. In 1997, the ergonomist began to teach a technical training class for supervisors and engineers. This class provides 4 hours of basic information on MSDs, as well as up to 4 hours of additional information for material handling analysts, supervisors, and all engineers. Springfield’s program has established strong linkages with its medical management staff to ensure early reporting and prompt evaluation. Springfield has a fully equipped on-site occupational health clinic that is able to treat most of the injuries experienced by Springfield employees, with rare referrals to local health care providers. The medical director told us that having a clinic on site means that employees are less likely to leave work for medical attention and that she is more involved with and aware of what the employees are doing, how the injury or discomfort occurred, and how similar problems can be avoided in the future. Other officials said having an in-house doctor and medical staff helps Navistar, which is self-insured, keep medical costs down. The medical director is closely linked with the ergonomics program in several ways. Primarily, she can request a job analysis (through the Request for Ergonomic Study form) when an employee reports to the medical department discomfort or an injury that she believes was due to an ergonomic hazard. In fact, the recent change to this form to identify risk factors was initiated at the request of the medical director. Also, in many cases, the medical director participates on the ad hoc ergonomics committee, as well as on the management-level ergonomics committee, and helps analyze and develop controls for problem jobs. Additionally, when there are questions about the premise of a workers’ compensation claim, the medical director calls together the ergonomist and a representative from the workers’ compensation branch to discuss the validity of the claim. This workers’ compensation causation committee also helps to identify causes of injuries. Springfield also uses restricted- and transitional-duty assignments in an effort to return injured employees to work. The medical director said this is key to a successful, cost-effective program. However, Springfield faces several challenges in this regard. For example, if an injured employee has been given a particular work restriction, the available job that accommodates that restriction may not be available to the employee because he or she does not have enough seniority to work on that job. In other cases, some of the jobs available to injured employees, such as sweeping, are not seen as being productive, so employees are reluctant to take these jobs. Navistar officials said they are generally satisfied with Springfield’s ergonomics program’s contribution to improved worker safety and health, reduced injury rates, and lower workers’ compensation costs. Officials said they use a number of measures to look for results of the ergonomics program, since it is inappropriate to consider just one measure and exclude others. 
However, officials raised a number of issues that need to be considered when reviewing these results and that often complicate their ability to tie results directly to their efforts. As shown in figure V.1, Springfield reduced its costs for workers' compensation claims associated with MSDs from almost $1.4 million in 1993 to $544,000 in 1996—a decline of over 60 percent. Additionally, during this same period, the average cost for each claim declined almost by half, from $9,500 in 1993 to $4,900 in 1996 (see fig. 3), which provides some evidence that the facility has been encouraging early reporting and providing early treatment. According to data provided by the ergonomist, Springfield also avoided about $250,000 in workers' compensation costs between 1994 and 1996 as a result of reductions in carpal tunnel syndrome, repetitive trauma, and back injuries. During this same period, total costs for workers' compensation declined by about 15 percent. But the facility did not achieve its overall safety percentage reduction goal in 1996 because of several large claims and the difficulty it experienced in returning injured employees to work. Navistar officials said several factors need to be considered when looking at their experience with workers' compensation costs. First, there is uncertainty about what injuries should be considered MSDs. The ergonomist preferred to track injury categories directly tied to identifiable ergonomic hazards, such as lifting or repetition. On the other hand, corporate officials preferred to track all injuries to which ergonomic hazards may contribute. Officials also said that hiring 500 new employees in 1994 and laying them off shortly thereafter contributed to increases in injuries, claims, and associated costs. New, inexperienced employees are more likely to become injured, and claims also tend to increase before a layoff because, if an employee can qualify for a medical restriction, he or she will be able to receive workers' compensation during a layoff. When the layoff ends, claims generally decrease. In 1995, Navistar did experience an increase in total workers' compensation claims, although this spike did not appear in costs associated with MSD claims. Navistar also uses the OSHA 200 log to assess its performance in reducing injuries and illnesses on a facilitywide basis. Additionally, these data are used by OSHA in its inspection activities. According to these data for 1993 through 1996, Springfield reduced the number of injuries and illnesses for every 100 employees (referred to as the incidence rate) from 20.3 in 1993 to 14.2 in 1996 (see fig. 4). Additionally, in 1995, the most recent year for which comparison data were available, Springfield's incidence rate of 16.1 was significantly lower than the industry average of 22.5 for other assemblers of truck and bus bodies. Springfield also reduced the number of lost and restricted days for every 100 employees by 122 days and 35 days, respectively (see fig. 2). However, the ergonomics staff at Springfield said these data are not helpful for identifying or tracking reductions in MSDs. They said the OSHA log does not provide enough information to enable them to fully understand the circumstances surrounding an injury or how it should be recorded. Officials also said that back injuries are recorded as acute injuries rather than as repetitive trauma, even though, in a manufacturing environment, most back injuries are the result of repeated lifting.
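The figures above can be checked with simple arithmetic. The sketch below assumes the standard OSHA formula for incidence rates, in which recordable cases are multiplied by 200,000 (roughly the hours worked by 100 full-time employees in a year) and divided by total hours worked; the function names and the hours figure in the last line are illustrative assumptions, not data from the facility.

```python
def osha_incidence_rate(recordable_cases, hours_worked):
    """OSHA incidence rate: recordable injuries and illnesses per 100 full-time
    employees per year (200,000 = 100 employees x 40 hours x 50 weeks)."""
    return recordable_cases * 200_000 / hours_worked

def percent_decline(earlier, later):
    """Percentage decline from an earlier value to a later one."""
    return (earlier - later) / earlier * 100

# Springfield claim-cost figures cited in the text.
print(round(percent_decline(1_400_000, 544_000)))  # 61 -> "a decline of over 60 percent"
print(round(percent_decline(9_500, 4_900)))        # 48 -> average claim cost down almost by half

# Hypothetical illustration of the incidence-rate formula (hours worked is assumed).
print(round(osha_incidence_rate(recordable_cases=284, hours_worked=2_000_000), 1))  # 28.4
```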
Officials believed that, in many cases, ergonomic improvements had contributed to gains in productivity, quality, and morale. While the facility is not formally tracking productivity or quality improvements resulting from the program, the facility manager said the relationship between ergonomics and improving quality and performance cannot be denied. Additionally, the ergonomist reported that those departments with the most quality problems also tend to have the lowest seniority and most ergonomic problems. Officials cited examples, such as the redesign of the windshield installation process as discussed in appendix II, in which Navistar has been able to achieve quality as well as ergonomic improvements. However, corporate officials said it is difficult to distinguish the benefits gained by "ergonomic" investments from those resulting from efforts to increase productivity or reduce rework. Concerns were also raised that, in some cases, ergonomic controls may actually decrease productivity—for instance, when additional employees are assigned to do the same amount of work that one employee had been doing. The Sisters of Charity Health System is a not-for-profit health care provider located in Lewiston, Maine. It includes a not-for-profit 233-bed acute/behavioral medical care facility (St. Mary's Regional Medical Center) and a not-for-profit 280-bed long-term-care nursing facility (St. Marguerite d'Youville Pavilion). These two entities employ about half of SOCHS' workforce of 1,400 nonunion employees—522 employees work at the medical center and 253 work at the nursing home. A number of local conditions set the stage for the implementation of the ergonomics program at the medical center and nursing home. In 1993, to prepare for managed care, SOCHS began to streamline management structures, improve client relations, and gain a better handle on costs by becoming self-insured. As a result, when OSHA invited the medical center and nursing home to participate in the Maine 200 program, SOCHS agreed. SOCHS realized the ultimate goal of the program—to reduce injuries and illnesses through establishing a safety and health program—supported SOCHS' efforts to reduce costs and increase efficiency. OSHA's offer to provide assistance and the good relationship SOCHS had with OSHA were also factors in the decision. SOCHS had been aware of its high workers' compensation costs because, when it became self-insured, it was required by the Bureau of Insurance to set aside considerable funds to develop a trust to cover future workers' compensation claims (the amount was based on historical claim experience). Additionally, SOCHS knew that a leading cause of lost time was back injuries of CNAs, who did most of the patient handling at the nursing home. Also, employees working in the laboratory, medical records, registration, and other heavily computer- and phone-intensive operations at the medical center were suffering various hand and wrist injuries. The offer from OSHA provided additional incentive for SOCHS to address these injuries. Officials told us the program was fully implemented in 1994 after they had undertaken a number of efforts in response to OSHA's September 1993 invitation to participate in the Maine 200 program. These efforts were generated by the requirements to participate in the program.
To participate, the medical center and nursing home had to conduct a baseline hazard survey to identify existing hazards, set up an action plan that outlined the steps the facility would take to address identified hazards, and establish a comprehensive safety and health program that would seek to reduce injuries and the contributing hazards. The facilities were also required to report quarterly to OSHA on their progress and allow OSHA inspectors to conduct on-site monitoring visits. Along with its invitation, OSHA also provided SOCHS its Safety and Health Program Management Guidelines, which were to be the framework for SOCHS’ safety and health program. The first thing SOCHS did was contact a consultant who said that staff should be assigned to manage the program. Soon after, SOCHS hired a safety coordinator to establish a safety and health program. The consultant also suggested setting up a system to track injuries and workers’ compensation costs. Because existing systems were inadequate, SOCHS hired a risk management coordinator to develop a database to track the number and type of employee injuries, the number of lost and restricted workdays, and related information. A second system was developed in conjunction with the third-party administrator to track costs of claims. The safety coordinator conducted the required baseline hazard survey. On the basis of the survey results, SOCHS developed action plans that laid out how the medical center and nursing home would address the identified hazards and injuries. SOCHS also began to establish procedures to implement the elements of an effective safety and health program. SOCHS’ ergonomics program is led by several officials located in the human resources department—the director of risk management and safety, the safety coordinator, and the risk management coordinator. A doctor and an ergonomist/nurse with the on-site occupational health clinic (called WorkMed) dedicate most of their time to conducting workstation evaluations, helping to develop controls, and treating injured employees. Other in-house resources, such as engineering staff, also work with these staff to develop controls. Officials said that when MSDs constitute the majority of injuries and illnesses, they are a priority under SOCHS’ safety and health program. When other injuries (such as slips and falls on icy parking lots or injuries from combative patients) constitute a majority of the injuries, then they are a priority. Management commitment to the ergonomics program at SOCHS is demonstrated in a number of ways. SOCHS does not have a formal ergonomics document for either the medical center or the nursing home, but officials told us the quarterly reports to OSHA that chart the facilities’ progress in meeting goals and information provided in meetings and training for senior management and supervisors are the best indicators of the daily operations of the program. SOCHS officials said there must be a point person responsible for making sure things get done and that person must have the resources to deal with problems. Because of this view, SOCHS has assigned staff to be responsible for the program. Key are the director of risk management and safety, the safety coordinator, and the risk management coordinator. These employees are responsible for addressing hazards, providing training, and tracking injuries and costs. Additionally, SOCHS has integrated ergonomic principles into the purchase and design of equipment. 
For example, WorkMed must certify that all new office construction incorporates ergonomic furniture and design. WorkMed has helped design new office space in the medical records department and the emergency registration area at the medical center, as well as in other areas. Additionally, the nursing home recently bought new medical carts to eliminate identified ergonomic hazards. Medical carts are used to store residents' medications and are wheeled around the nursing home when medications are dispensed. Several shorter employees had suffered wrist injuries resulting from having to reach in awkward positions to get the medications. Because the ergonomics staff notified the nursing home administration about this hazard, the nursing home looked for and purchased shorter carts that had side drawers that could hold medications and accommodate these shorter employees. SOCHS has also made financial resources available to the program. For example, early on, SOCHS spent $60,000 on 14 automatic lifts for the nursing home and has since purchased another as a "spare." Officials said making such a significant investment early in the program required a "leap of faith" that it would pay off, because there were no real data to support such an investment. Ergonomics staff noted, however, that this investment needs to be considered in light of the cost of just one back injury, which can exceed $60,000. Additionally, officials said suggestions for ergonomic controls are typically implemented; in fact, in 1997, the director of risk management and safety was given additional funding for ergonomic controls that were not accounted for in departmental budgets. SOCHS has also ensured management support for the program in several ways. For example, if managers do not address identified hazards and employee complaints promptly, the safety coordinator has the authority to take action against these managers. SOCHS relies on a number of committees to identify hazards, including ergonomic hazards. These committees do not identify problem jobs or develop controls; instead, according to SOCHS management, these committees work to provide a heightened awareness of safety and ergonomic principles throughout SOCHS by keeping an eye on overall workplace conditions and notifying the ergonomics staff when they see items that need to be addressed. The committees meet once a month during work hours and draw membership from hourly as well as managerial employees and, in some cases, doctors. Management reviews the minutes from these committee meetings. Recently, an ergonomics task force was formed. The task force has about nine volunteer employees, and the safety coordinator, the director of risk management and safety, doctors, and officials from purchasing and engineering provide guidance to the task force. The ultimate goals of the task force are to help develop priorities for hazards that need to be addressed and to help employees address those hazards that may not be serious enough to merit a workstation evaluation by WorkMed. SOCHS has also established procedures that provide employees direct access to services. For example, if employees want a workstation evaluation, they can simply call WorkMed to request one. Officials also emphasized the value of employee input during these evaluations and said many of the controls come from employees. SOCHS identifies problem jobs primarily on an incidence basis.
In other words, most of SOCHS' efforts result from a report of injury or discomfort or from an employee's request for assistance for other reasons. SOCHS has established a simple system by which problem jobs are identified. If an "incident" occurs (at SOCHS this means an injury or feeling of discomfort), the employee and supervisor are required to complete separate "Report of Employee Incident" forms within 24 hours. The employee's form elicits information about the employee involved (such as age, sex, and position); the incident (location, time, date, witnesses, explanation of what the employee was doing at the time of the incident, and the body part affected); and steps taken after the incident occurred (whether first aid was provided or referral to WorkMed was made). The supervisor's form elicits information about the length of time the employee has been doing this task or job, what may have contributed to the incident, corrective actions the supervisor has taken for the affected employee (which must be taken within 72 hours), and actions the supervisor is taking to prevent a similar incident in the future. This form is then forwarded to WorkMed, which performs a physical examination of the employee. After the examination, WorkMed determines whether the injury or reported discomfort is due to ergonomic hazards (such as experiencing shoulder pain from prolonged use of microscopes) and, if so, WorkMed performs a workstation evaluation. Workstation evaluations can also be triggered simply by a phone call to WorkMed if the employee does not need a physical examination. Although SOCHS devotes most of its time to workstation evaluations resulting from complaints of discomfort or employee requests for assistance, SOCHS also identifies problem jobs on the basis of potential risks. For example, when an employee relocates or changes jobs, WorkMed is required to conduct a workstation evaluation to ensure that the employee's new workstation is set up correctly and that the employee is aware of potential hazards on his or her new job. Additionally, when entire departments are relocating or when new construction is taking place, WorkMed provides guidance on appropriate workstation and equipment design and must certify that the design is ergonomic before final approval. SOCHS officials said the process the system uses to analyze problem jobs is simple. In fact, they stressed that, in most cases, SOCHS conducts workstation evaluations—making physical changes to an individual's workstation to make the job more efficient and the employee more comfortable—rather than job analyses—evaluating whether tasks of a job or operation should be changed. Although there have been times when SOCHS has done job analyses, officials said it is not always practical or necessary to conduct a detailed job analysis in order to reduce hazards. The safety coordinator said that if a job was causing problems for more than one employee, he might undertake a job analysis to break down the job into tasks and make recommendations to change some of those tasks. However, he has not done this recently, because he can often make changes without having to do such detailed analysis. SOCHS officials described their process for developing controls for problem jobs as informal. They emphasized the importance of using in-house resources to develop controls because employees know the job process and often can provide the best information on how the workstation can be improved. The officials also noted that the process is a continuous one.
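The reporting flow just described can be sketched in a few lines. The example below is a hypothetical model of that flow; the form names come from the text, but the data structure, function, and return messages are illustrative assumptions rather than a description of any system SOCHS actually uses.

```python
from dataclasses import dataclass

@dataclass
class Incident:
    """Hypothetical record of the paired 'Report of Employee Incident' forms."""
    employee_form_within_24_hours: bool
    supervisor_form_within_24_hours: bool
    needs_physical_exam: bool
    ergonomic_hazard_suspected: bool  # WorkMed's judgment after the examination

def next_step(incident: Incident) -> str:
    """Return the next step in this illustrative model of the flow."""
    if not (incident.employee_form_within_24_hours and incident.supervisor_form_within_24_hours):
        return "follow up: both incident forms are due within 24 hours"
    if not incident.needs_physical_exam:
        # Employees who do not need an examination can request an evaluation by phone.
        return "workstation evaluation available on request by calling WorkMed"
    if incident.ergonomic_hazard_suspected:
        return "WorkMed performs a workstation evaluation"
    return "treat the employee; no workstation evaluation indicated"

print(next_step(Incident(True, True, True, True)))
```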
There is no specific threshold for when and whether a control should be implemented, and something can always be done to reduce a hazard or respond to the cause of the injury. Officials said a large number of the controls that have been implemented have concerned better work practices, while others have been “low-tech” engineering controls that have not drastically changed the job or operation. When WorkMed officials conduct evaluations, they spend about an hour watching the employees perform the job and taking physical measurements of the current workstation design (desk height, monitor placement, and chair height) and the employee as he or she relates to the workstation (appropriate elbow height when seated, for example). WorkMed may also assess the general workplace conditions, such as light and noise levels, but it does not follow a particular format for these evaluations. Because WorkMed is not technically a component of SOCHS, it charges SOCHS for these evaluations. Since 1995, SOCHS has spent about $10,000 for evaluations at the nursing home and the medical center. Although SOCHS does not typically videotape jobs, it may perform detailed analyses of jobs. For example, in the surgical area at the medical center, one job requires a secretary to input a significant number of medical charges into a computer. This is an extremely stressful job, because if items are omitted or input incorrectly, the medical center loses revenue. The secretary is required to perform several other tasks simultaneously, which contributes to the overall difficulty of the job. In doing its analysis of this job, SOCHS evaluated not only the physical characteristics of the workstation (work surface and chair height), but also the environment (noise and other distracting influences) and the numerous additional required tasks to determine whether any of these tasks could be eliminated or altered to reduce the stress of the position and increase the efficiency of the data input process. Once the WorkMed staff have completed the workstation evaluation, they work with the employee who performs the job, in-house engineering staff, or others to “brainstorm” possible suggestions for eliminating the identified hazard. Officials said that often the employees themselves have suggestions for what controls to make. WorkMed officials said that when developing controls, they try to do those things that are easy to accomplish or fairly inexpensive. Additionally, for the duration of its participation in the Maine 200, SOCHS obtained ideas for controls from the compliance officer who had been assigned to it. Because of her familiarity with SOCHS and because she also had been assigned to similar employers in the health care industry, she was able to suggest ideas for controls that had worked for other employers. WorkMed incorporates these suggestions into its evaluation summary—a two- to three-page memorandum that is provided to the director of risk management and the employee’s supervisor. The director of risk management evaluates the suggestions; determines how much implementing them will cost; and forwards them, along with their costs, to the cognizant department head for review and approval. For example, WorkMed recently suggested controls to alleviate employee discomfort in the shoulders and neck from excessive phone use, and back and arm discomfort from inappropriate computer workstation design in the medical center’s reception area. 
WorkMed suggested buying headsets for the employees; putting monitors on articulating risers so they could be placed at appropriate heights for numerous users; and buying ergonomic chairs, among other suggested controls. These controls will cost about $4,000. In many cases, controls have been developed by in-house engineering staff. For example, an in-house engineer created an adjustable, slanted wooden surface that can be used as a mouse pad. A patent is currently pending on this item. In another instance, in-house engineers designed a wood computer monitor riser that elevates monitors to the appropriate height. Facility officials agreed that analyzing problem jobs and developing controls must be a long-term effort, and the key is to look for continuous improvement. Accordingly, WorkMed or the ergonomics staff follows up after a workstation evaluation is performed if problems persist. Officials also mentioned that not all problems can be fixed immediately, since the ability to implement controls is often dependent upon available resources. For example, the ideal way to adequately address the hazards on the surgical secretary job mentioned above would be to implement a computer system that would allow employees to input the medical charges as they are accrued, thereby reducing the amount of keying required by the secretary. However, this type of computer system could cost over $200,000. Until the facility is able to afford this control or comes up with another alternative, SOCHS is trying other methods, such as rotating workers through the position on a part-time basis, in order to relieve the pressure of this job. SOCHS has implemented a mix of controls equally distributed between engineering controls (such as buying equipment), which alleviate or reduce hazards, and administrative controls, which encourage proper work techniques. Officials said that most of both types of controls have been inexpensive. Perhaps the single greatest identifiable investment made by SOCHS on engineering controls has been for automatic lifts for the nursing home, which cost about $60,000 (see the detailed discussion about these lifts in app. II). SOCHS has instituted a variety of other types of engineering controls in the laboratory area at the medical center. Employees who work in this area use computers, phones, and microscopes extensively. Because of the former configuration of lab counters and chairs, employees often had to use awkward postures to input data or use the microscopes. As a result, employees were experiencing shoulder, neck, and hand discomfort, as well as some injuries. SOCHS lowered the countertops, bought adjustable ergonomic chairs, placed the monitors on articulating monitor risers to accommodate multiple users, raised the microscopes, and put glare screens on the computers. In the laundry room area, SOCHS has also placed false bottoms in laundry bins that rise as the load becomes lighter so employee bending and reaching are minimized. SOCHS has also used administrative controls. For example, smaller laundry bags that hold only a limited amount of laundry are now used so employees’ lifting requirements are lessened. SOCHS has also purchased antifatigue mats for its employees who stand while working. SOCHS has also offered body mechanics training and increased staffing to better manage high workloads in some work areas. WorkMed officials emphasized that quite often controls involve telling employees how to use better work practices. 
For example, recently a laboratory employee was experiencing a great deal of wrist pain resulting from the practice of dropping liquid from an eyedropper into a test tube. After watching the employee perform the job, it was found that she was flicking her wrist back after she dropped the liquid in the test tube. In this case, the control was a recommendation that she not flick her wrist. In the medical center’s medical records area, employees were also experiencing wrist and hand pain from shoving copies of patient records onto shelves. In response, SOCHS instituted work policies that employees are supposed to follow for handling these records: They are supposed to leave space between each of the records to avoid using a pinch grip to pull out or push in the records. SOCHS has provided general ergonomics training as a part of mandatory safety training. The class is offered twice a month for 4-1/2 hours at a time, about 3 hours of which focus on body mechanics (for example, correct positioning for various activities, such as lifting) and proper use of video display terminals. If employees do not attend this training, they will not receive their performance ratings. SOCHS officials said this training is required by several OSHA standards, Maine’s accreditation committee for health care organizations, and a state law that requires training for employees who work in front of video display terminals for at least 4 hours a day. Other general awareness education for ergonomics has been provided through an employee newsletter and advice from a “safety mascot.” The officials said that it is not feasible to require employees to attend training for more than 4 hours at a time or more than once a year. In the past, they said, they were unable to get people to stay in training when it was longer. Additionally, so much training is already required for health care organizations that any additional training must be reasonable and directly related to employees’ tasks. Given these concerns, SOCHS provides specialized ergonomics training for employees on the basis of the risks they are exposed to and their job requirements. For example, newly hired CNAs and other staff are given training on how to use the automatic lifts. The ergonomics committee leaders have also received training on how to identify and prioritize hazards. SOCHS also provides back training to all new employees working in areas where a significant amount of lifting takes place. For the last 4 years, supervisors have also received training on the procedures they must follow to investigate accidents and ensure injured workers are provided treatment, as well as how to identify hazards. The ergonomics program has strong links with medical management staff to ensure early reporting and prompt evaluation. The officials emphasized that having WorkMed, the on-site occupational health clinic, has helped SOCHS encourage employees and managers to report all incidents early. This is done through the Report of Employee Incident form as well as by employees’ directly contacting WorkMed for an evaluation. WorkMed is generally able to treat all injured employees. Because WorkMed conducts workstation evaluations, it is also able to suggest controls to reduce hazards and injuries and work with the engineering and facilities staff to apply ergonomic principles to equipment purchase and design. SOCHS has also used restricted- and transitional-duty assignments in an effort to return injured employees to work. 
Officials said this was a major emphasis for them, since the large number of workers' compensation claims with lost workdays was a basis for their inclusion in Maine 200. In fact, when SOCHS began this program, a number of employees were out on disability, and SOCHS immediately tried to get them back to work on restricted duty. To control the number of days employees are out, officials maintain contact with injured employees, and the risk management coordinator sends calendars to cognizant supervisors to help them track the number of days their employees are out or on restricted duty. WorkMed follows up with these employees once they are back at work. After each physical examination it performs, WorkMed determines whether an employee needs any type of restriction. If so, WorkMed completes a "Patient Instruction Form," which documents the recommended treatment for the injury or reported discomfort and highlights the activities the employee can do and for how long. Through the workstation evaluations, WorkMed ensures that the employee's workstation supports these restrictions. Officials said that because SOCHS is so large, finding these types of jobs for injured employees is not difficult. The medical center has developed several light-duty positions, such as answering the telephone for lifeline calls or doing research on the library computer. The nursing home has established an area in its laundry room where employees can be assigned during recovery time. The officials said the individual departments carry the charges for these jobs, so they have an incentive to return employees to full performance as soon as possible. Despite this, officials did say that some employees in the system were so badly restricted that ensuring that they are productive has been difficult. SOCHS officials said they were generally satisfied with the results of their program because of (1) the reductions in injuries and their associated workers' compensation costs and (2) an improved safety and health record, as evidenced by both facilities' "graduation" from Maine 200 in 1996. Eligibility for graduation from the Maine 200 program was determined by OSHA on the basis of the extent to which it believed the facilities had implemented the goals of the Safety and Health Program Management Guidelines, not on whether the facilities met specific targeted reductions in injuries, claims, or costs. After working with SOCHS for this 2-year period, reviewing SOCHS' quarterly progress reports, and conducting several on-site monitoring visits, OSHA determined that SOCHS had made sufficient progress in implementing its safety and health program. Despite this success, officials said a number of factors needed to be considered when reviewing these results that often complicated their ability to tie results directly to their efforts. As figure VI.1 shows, the medical center and nursing home together reduced workers' compensation costs for MSDs by about 35 percent between 1994 and 1996 (from $100,000 to about $70,000). To capture MSDs, SOCHS tracks "cumulative trauma disorders" (for example, "carpal tunnel syndrome" and "overuse syndrome"); "tendinitis"; "epicondylitis"; and "back injuries." However, the average cost for MSD workers' compensation claims for both facilities combined increased slightly, from about $2,500 in 1994 to over $3,000 in 1996 (see fig. 3).
SOCHS officials said other evidence of success has been the reduction in the amount needed to fund SOCHS' workers' compensation trust. After the first year of being self-insured, SOCHS has been allowed to set aside decreasing amounts of funds and can now set aside funds as it believes necessary. If the trust becomes larger than SOCHS believes is required, it can withdraw any excess funds. In 1996, SOCHS withdrew $800,000. Nonetheless, the officials said a number of issues need to be considered when evaluating these data. First, when SOCHS implemented its program, officials found the existing systems were inadequate to track injury and claim experience, so SOCHS developed two databases—one based on the Report of Employee Incident form and the other based on workers' compensation claim experience. These databases help SOCHS officials monitor injuries and claims, but officials said they do not typically isolate injuries that would be categorized as MSDs because SOCHS has sought to reduce all types of injuries and their associated costs. Officials said it could be difficult to isolate MSDs from other injuries, since doing so would require that all Report of Employee Incident forms be reviewed to fully understand the circumstances of the incidents and, thereby, determine whether the injuries resulted from ergonomic hazards. Officials also said costs can be significantly affected by one or two large claims. For example, in 1996, the medical center had a total of 179 lost workdays, 157 of which resulted from one claim. Thus, this one claim was in large part responsible for the increase in average MSD cost discussed above. Officials also said the number of incidents is likely to increase because early reporting is being encouraged. Moreover, officials said it was difficult to know how much of a reduction in injuries, illnesses, and associated costs is appropriate. They agreed that it was appropriate for OSHA not to impose specific performance goals, such as a certain percentage reduction in workers' compensation costs, given the newness of the program. The officials said program results must be viewed over the long term, because they believed the key was to look for a process that improves from year to year. The OSHA 200 log data are instructive because they illustrate a facility's general experience with injuries and illnesses, and these data are used by OSHA in its inspection efforts. According to data for the medical center and nursing home combined for 1993 through 1996, the number of injuries and illnesses for every 100 employees (the incidence rate) declined from 14.7 to 12.3 (see fig. 4). The experience between the two was uneven, however, with the nursing home experiencing an increase in injuries and illnesses over this period. But the significant reductions at the medical center enabled SOCHS, as a whole, to realize a reduction in the incidence rate. And, for 1995, the last year for which industry comparison data are available, the nursing home's incidence rate of 17.3 was lower than the industry average of 18.2 for nursing and personal care facilities, and the medical center rate of 8.6 was below the industry average of 10.1 for hospitals. Additionally, while the facilities together were able to reduce the number of lost workdays for every 100 employees by 35, the number of restricted days for every 100 employees for both facilities combined actually increased by 45 (see fig. 2).
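The point about one or two large claims is easy to see with a small worked example. The claim amounts below are hypothetical; only the idea that a single costly claim can drive the average comes from the discussion above.

```python
# Hypothetical claim amounts in dollars; not actual SOCHS data.
routine_claims = [1_200, 1_800, 2_400, 2_600, 3_000]
one_large_claim = 45_000  # e.g., a claim with many lost workdays

avg_without = sum(routine_claims) / len(routine_claims)
avg_with = (sum(routine_claims) + one_large_claim) / (len(routine_claims) + 1)

print(round(avg_without))  # 2200: the average looks modest
print(round(avg_with))     # 9333: one claim more than quadruples the average
```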
The officials said reduction of lost workdays was important for them because the medical center and the nursing home were selected for inclusion in the Maine 200 program because of their large number of claims with lost days. As a result, officials said the increase in the number of restricted days reflects their efforts to keep injured employees at work on restricted work assignments or to return employees to work as soon as possible. Also, as evidence of its return-to-work policy, last year, SOCHS did not have to pay any workers’ compensation for nursing home employees’ salary or benefits while they were out of work. Officials said they do not primarily use the OSHA 200 log to track program progress. In fact, they said they had to develop other systems when they first began the program because the OSHA 200 log data were piecemeal and, in some cases, inaccurate. Moreover, officials said OSHA 200 did not allow for sufficient information to be entered about the cause of the injury or illness. SOCHS officials believed that their emphasis on ergonomics, and safety and health in general, had contributed to an improved work environment, but evidence of this was largely anecdotal. Officials believed that the program had contributed to reduced turnover and absenteeism, and the better work environment has meant that SOCHS can attract the best employees away from competitors. In some cases, ergonomic improvements have also contributed to increased efficiency and effectiveness; for example, some of the equipment redesigns have eliminated duplication in the processes SOCHS uses to enter data. Officials also said that employee morale has improved, as evidenced by employees’ appreciation and use of the automatic lifts. In response to employees’ demands, SOCHS is now buying additional automatic lifts for use in other areas. This is significant, given that there was some resistance when the lifts were first instituted. Texas Instruments, which began operation in 1951, is a manufacturer of semiconductor devices; electronic sensors; and radar, navigation, and missile guidance systems. TI has about 55,000 employees worldwide in about 150 locations. The Lewisville, Texas, facility of TI, which began operation in 1978, serves as the headquarters of the Defense Systems and Electronics Group (Systems Group) for TI. The Systems Group, which includes Lewisville and four other nearby facilities, produces the “smarts,” or electronics, for weaponry. About 2,800 employees are employed at Lewisville, with engineers composing about two-thirds of the staff. Other occupations at Lewisville include electrical assemblers, machinists, manufacturing aides, and equipment technicians. None of the workforce is unionized. TI’s corporate culture, which reflects quality management principles, affects TI’s ergonomics efforts. Beginning in the early 1990s, TI adopted a team-based organizational structure. Many different teams have been formed at the facility level, the Systems Group level, and the corporate level to address a wide range of production and other issues, including safety and health. TI drives its activities by setting corporationwide goals and providing considerable flexibility at the various levels of the organization to achieve these goals. The overall goals and targets are set through a negotiation process between corporate management and these teams. As consistent with quality management principles, TI has encouraged the diffusion of best practices across sites. 
The Systems Group Ergonomics Council was formed in 1993 to facilitate sharing of information across the Systems Group. Also, a Global Ergonomic Leadership Team was formed at the corporate level to build a corporate communication strategy. TI also participates in an informal consortium of Texas companies called the North Texas Ergonomics Consortium. The industry type and product line also affect TI’s ergonomics efforts. The Lewisville facility was described as a “lean and agile” operation that undergoes rapid changes in production activity. For example, as production in some work areas is “ramping up,” in others, it is “ramping down.” A recent consolidation resulted in some staff and operations from other facilities being transferred to Lewisville. These constant changes can be challenging to teams trying to reduce ergonomic hazards. In addition, the federal government is a major customer for the products at Lewisville, which places some constraints on the flexibility the facility has to modify its production practices. Also, because of Lewisville’s dependence on federal contract dollars, the facility underwent some downsizing between 1992, when it had about 3,700 employees, and 1996, when approximately 2,800 employees were employed at this facility. The ergonomics program at Lewisville was fully implemented in 1992, the year after workers’ compensation costs for MSDs exceeded $2 million, causing considerable alarm among facility management. TI’s ergonomics efforts, including those at Lewisville, appear to have evolved, however, with some activities dating back to the 1980s. An extensive ergonomics awareness training effort was initiated by the site safety engineer at Lewisville in the 1980s. The next site safety engineer, who still holds this position, specializes in ergonomics. In 1989, an ergonomics thrust was proposed by the Lewisville Site Safety Council. Special corrective action teams (CAT) were formed to address specific ergonomic problems, such as replacing worn hand tools and redesigning totes for material handling that would cause less strain. Although the individual CATs attacked some special problems, each was dissolved once a solution was proposed. In 1991, a standing ergonomics team, Lewisville’s ergonomics team, was formed, and a second wave of ergonomics training was initiated throughout the manufacturing work areas. “ERGO Days”—special days on which participatory, educational displays were set up throughout the facility to foster awareness of ergonomic issues and during which employees’ personal workstation measurements were taken—were begun in 1992. The ergonomics team also conducted incident evaluations when injuries occurred and started an effort to adjust administrative workstations. However, because the team was staffed by Lewisville employees who volunteered to do this in addition to their other duties, it was limited in what it could accomplish. In some cases, considerable delays occurred between when an injury was reported and when team members could find time to conduct an evaluation. When a full-time ergonomics specialist position was created in 1995, the ergonomics team began to address the MSD problem more aggressively, according to the current team leader. 
A facility team of program managers—referred to as the Site Safety Quality Improvement Team (QIT)—had agreed to create this position because the ergonomics team had successfully argued that its inability to follow through on reports of injuries was a barrier to the facility’s reaching its safety and health goals. In 1996, the ergonomics team was reorganized to include a cross section of facility employees. The heart of the ergonomics program at Lewisville is its ergonomics team, to which the full-time ergonomics specialist and the site safety engineer provide support. Other teams formed for broader objectives within the Lewisville facility, across the entire Systems Group, and throughout the corporation provide guidance and direction to the ergonomics team. These teams, including the Site Safety QIT, which is composed of program managers, communicate focus and strategy to the Lewisville Site Safety Council, of which the ergonomics team is a subteam. The Systems Group Ergonomics Council communicates focus and overall direction on ergonomic activities across the Systems Group. It reports upward to two teams that support numerous ergonomic activities and also operate across the Systems Group: the Systems Group Environmental, Safety, and Health Leadership Team and the Systems Group Human Resources Leadership Team. These teams in turn feed into the Systems Group Leadership Team. At the corporate level, there are the Corporate Environmental Safety and Health Leadership Team and its subteam specific to ergonomics, the Global Ergonomics Leadership Team, which was formed just a year ago. The activities of the Global Ergonomics Leadership Team include building a better communication strategy that is truly global (since TI has facilities worldwide). Also at the corporate level is the staff office for Corporate Environmental Safety and Health. Management commitment to the ergonomics program at Lewisville is demonstrated in a number of ways. Primary among them is the assignment of staff, including the ergonomics team and a full-time ergonomics specialist hired in 1995 to help the team achieve its objectives. The site safety engineer said that the facility probably waited “too long” to hire the ergonomics specialist, which delayed implementation of the ergonomics program since neither the members of the ergonomics team nor the site safety engineer could respond quickly enough to problems. Corporationwide accountability mechanisms are reflected in the corporate strategic goal, which all facilities are expected to contribute toward achieving. This overall goal is to eliminate all preventable occupational and nonoccupational injuries and illnesses by the year 2005. To do so, since 1996, facilities have strived for a 20-percent reduction from the previous year in the injury and illness incident rate and the lost or restricted day rate. In addition, a corporationwide audit is conducted by the Corporate Environmental Safety and Health office at each facility once every 3 years. Through these audits, TI tries to ensure that each facility is following practices consistent with the company’s Ergonomic Process Management Standard, which lays out minimum requirements for the core elements of an ergonomics program that each facility must meet. Each facility also conducts a self-audit every year using these same guidelines. Ergonomic principles are also integrated into purchasing and design. 
For example, a future project of the ergonomics team, the Integrated Product Development Process, will involve working with facilities staff, product designers, and assemblers to see how ergonomics can be better integrated into product development. In addition, the ergonomics team, working with other teams across the Systems Group, has undertaken various projects for the design or purchase of ergonomic tools. For example, another facility within the Systems Group has developed an Ergonomic Hand Tool Catalog from which employees from any Systems Group facility can select tools that meet preset standards and that have been widely tested within the facility itself. Resources are also made available for the ergonomics program. Suggestions for controlling problem jobs that are submitted by the ergonomics team are typically accepted by facility management. Because the cost center managers are also members of the Site Safety QIT (which can approve most expenditures directly), formal cost justifications are rarely required for capital investments to control ergonomic hazards. A written cost justification is required only if a control costs more than $1,500. In fact, any of these larger capital investments must also be approved by the site safety engineer to ensure that no safety or health (including ergonomic) concerns are associated with it. The facility has also established mechanisms for ensuring that middle management support is sustained. The Site Safety QIT is composed of program managers who provide overall focus and strategy to the ergonomics team and also approve most capital investments to improve ergonomic conditions. Also, in recognition of the importance of middle management buy-in, two “Ergonomic Management Seminars” were sponsored in 1996. Some of the managers had been skeptical of the need for the ergonomics program, perhaps since they had never experienced an MSD—and they may be less likely to, since their job responsibilities tend not to pose the same risks. Yet the ergonomics team considered buy-in from these middle managers critical, since they often controlled the cost centers toward which any ergonomic investments would be charged. These management seminars demonstrated how ergonomic losses affect the bottom line by discussing the cost of injuries and the impact of MSDs on productivity. TI’s Ergonomic Site Policies and Procedures lays out specific responsibilities of various teams and facility staff for implementing the core elements of the ergonomics program. For example, this document requires the Site Safety QIT to continue to demonstrate visible support for the ergonomics program. Similarly, production engineering department staff are required to document ergonomic analysis for all future workstations and serve as ergonomic incident investigators for work areas they support. But this document is not viewed by corporate or facility staff as key to program operations, and team members said they rarely refer to it. Employee involvement is illustrated by the central role the ergonomics team plays in all ergonomic activities at the facility. This team is composed of a cross section of staff from the engineering, warehouse, space planning, and medical departments as well as from TI’s fitness club. There are more engineers on Lewisville’s ergonomics team than there are on some other TI ergonomics teams, which, according to the team leader, reflects Lewisville’s emphasis on developing controls specifically tailored to the needs of individual production units. 
In addition, the team leader is also a manager in the production engineering department. The team oversees the ergonomic program and the activities of the ergonomics specialist, and can make capital requests. Participation on the team is voluntary and involves a 2-hour meeting every 2 weeks and perhaps 1 hour of "homework" every week. However, it is the ergonomics specialist who is responsible for the day-to-day activities of identifying problem jobs and developing controls. Employees are involved in an ad hoc fashion as well. They are encouraged to go directly to the ergonomics specialist or production engineering department to identify potential controls for their own jobs when they believe ergonomic hazards exist. Solutions or controls proposed by the ergonomics specialist or the ergonomics team are also critiqued by assembly and other employees who work on the problem job. Procedures have been established so that employees can directly access ergonomic services. An employee can request an administrative or manufacturing workstation evaluation either in person, by phone, or via electronic message. The employee is then automatically visited by the ergonomics specialist, who administers a one-page "Ergonomics Evaluation Report" (one version for administrative workstations and another version for manufacturing workstations). Once measurements are taken by the ergonomics specialist, they are entered into a database so that any workstation the employee moves to within this or another TI facility is properly adjusted to meet that employee's personal requirements. Lewisville also conducts a number of awareness campaigns, including its "wing-by-wing" measurement campaign, in which employees are measured and their workstations adjusted. This is particularly helpful for employees who may be experiencing problems but have not yet requested services. As part of this campaign, ergonomic accessories are suggested to individual employees and ordered, and the ergonomics team works with cost center managers to purchase equipment or anything else that the employee needs. In addition, Lewisville offers a wide range of training and awareness activities, which are catalysts for effective participatory ergonomics, according to the facility's ergonomics training coordinator. (These training and awareness activities are described below.) There are several ways in which the ergonomics team and the ergonomics specialist learn that a job might be a problem. Incidence-based methods for identifying problem jobs, that is, methods that rely on employee reports of injury or discomfort or employee requests for assistance, follow:

- When an accident occurs or an employee reports an injury or illness to the health center, the supervisor or "safety starpoint" must investigate the incident and complete an "Injury/Illness Investigation Report." This report, which is submitted to the Accident Review Board of the safety department, is intended to identify the root cause in order to prevent another employee from being injured in the same way. The employee is evaluated and treated at the health center. If the injury involves "body stress" or "repetitive motion," the ergonomics specialist is notified and is required to conduct a job or worksite analysis within 3 working days.

- Any employee who is experiencing discomfort can request either an administrative or manufacturing workstation evaluation simply by sending an electronic message to the ergonomics specialist.
- Jobs in all "at-risk" job classifications—that is, jobs with a high number of recordable injuries or illnesses—are identified through a review of the injury and illness data in the facility's workers' compensation database. Among the at-risk jobs identified were production helper, optical fabricator, parts finisher, and electrical assembler.

The following methods for identifying problem jobs on a proactive basis—to avoid injuries on jobs at which there was evidence that hazards existed—were used:

- A "wing-by-wing" measurement campaign was instituted to measure employees and adjust their workstations as a way of identifying employees who might be experiencing problems. This campaign offers one-on-one educational opportunities to employees who otherwise may not have sought out help, according to a member of the Site Safety QIT.

- An administrative workstation adjustment campaign was implemented in recognition of the facility's need to shift its focus from hazards at the manufacturing workstation—many of which the company had already addressed—to potential hazards at administrative workstations. Many employees at Lewisville use both types of workstations.

Prioritizing problem jobs is done by the ergonomics team on the basis of jobs, or job classifications, where injuries have already occurred. In other words, the ergonomics team has focused first on jobs in which an employee, who has reported to the health center, is found to have an MSD or related symptoms. A second priority has been addressing at-risk job classifications with the help of a consultant. Facility officials described analyzing problem jobs and developing controls as generally an "informal" process. The ergonomics specialist referred to many of his activities as workstation evaluations as opposed to job analyses because these activities focused on increasing the employee's comfort in relation to his or her workstation but did not involve major changes to the job or operations. Sometimes, however, more detailed analysis is conducted, particularly for at-risk jobs, and this facility has used the services of a consultant to help develop controls. The ergonomics specialist said that developing controls is an "iterative" process, but that typically something can be done to reduce ergonomic hazards, even if it is just talking to the employee to identify work practices that may be contributing to the problem. Many of the controls implemented could be described as "low-tech" engineering controls, such as purchasing adjustable-height workstations and "ergoscopes" (ergonomic microscopes) to improve employees' comfort while they manually touch up or rework circuit boards. So even though some jobs required more detailed job analyses, the controls implemented were still relatively simple. To analyze a problem job, the ergonomics specialist administers the one-page Ergonomics Evaluation Report whenever an employee requests that his or her workstation be evaluated. The employee can make the request to the ergonomics specialist by electronic message or face to face, since the ergonomics specialist often walks the floor of the facility so that he is accessible to all staff. Both the administrative and manufacturing workstation versions of the form ask for personal measurements and workstation descriptions and provide space for short- and long-term recommendations; the manufacturing workstation form also asks for risk factors.
Once the employee measurements are taken, they are entered into a database so that any workstation the employee moves to within this or another TI facility can be properly adjusted to his or her personal requirements. If an injury is reported to the health center, more information is collected by the health center staff and the ergonomics specialist. The “Ergonomic Evaluation Questionnaire” is several pages long and captures information on the frequency of tool or equipment use, the types of tasks performed, characteristics of the workstation if a computer is used, the types of physical activities the worker performs, the type of pain experienced, and activities outside of work that may be contributing to the problem. All of this information is provided by either the employee or the ergonomics specialist. Health center staff complete the part of the questionnaire that asks for the employee’s basic medical history, results of various ergonomic-related medical tests, and nursing interventions or treatment. For the more extreme at-risk jobs, this facility provides a more detailed job analysis, which involves videotaping the job and collecting additional documentation. For example, the ergonomics specialist worked with a consultant to analyze and develop controls for the manual electronic assembly job, the job classification in which workers have experienced the highest injury rates. This job was videotaped in order to identify the source of the problem. However, the controls ultimately developed for such jobs are not necessarily complex even if they required more detailed analysis (see app. II). In addition, the consultant made a number of recommendations regarding Lewisville’s manufacturing and warehousing operations. Because recommendations for these controls came from the consultant, the ergonomics team found it was easier to get management buy-in for necessary job changes. Controls are typically developed informally by the ergonomics specialist, who “brainstorms” with other staff. First, the ergonomics specialist discusses the problem with the employee and the employee team assigned to the job. The ergonomics specialist also consults with the line supervisor (who is also the cost center manager for that particular work area) to get additional ideas for controls as well as buy-in for any changes to a problem job. The cost center manager can typically approve any capital expenditures within that work area. Lewisville makes significant use of its in-house resources in developing controls. The ergonomics team comprises mostly engineers, which, according to the team leader, reflects an emphasis on developing controls specifically tailored to the needs of individual production units. Staff from the production engineering department are brought in to consult on more complex or technical jobs. Although the ergonomics team is not responsible for actually developing controls for specific problem jobs, the team does contribute to the selection of equipment, including personal protective equipment, and makes suggestions about workstation design and job rotation. Individual team members might be called in to advise on how to control a specific problem job. The ergonomics team is now trying to capture information on best practices and make this accessible to all employees and facilities through an Internet home page created for ergonomics issues. Once problem jobs are identified, no specific threshold is used to determine whether or not a control must be put in place. 
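A brief sketch can illustrate how the kind of measurement database described above might be used: store an employee's measurements once, then look them up whenever that employee moves to a new workstation. The field names, units, and lookup logic are assumptions for illustration; the report does not describe the design of TI's actual database.

```python
# Hypothetical sketch of a workstation-measurements store; not TI's actual system.
measurements_db = {}  # keyed by employee ID

def record_evaluation(employee_id, chair_height_in, monitor_height_in, keyboard_height_in):
    """Save measurements taken during an Ergonomics Evaluation Report."""
    measurements_db[employee_id] = {
        "chair_height_in": chair_height_in,
        "monitor_height_in": monitor_height_in,
        "keyboard_height_in": keyboard_height_in,
    }

def setup_for_relocation(employee_id):
    """Reuse stored measurements when the employee moves to another workstation."""
    specs = measurements_db.get(employee_id)
    if specs is None:
        return "no measurements on file; schedule an evaluation"
    return (f"set chair to {specs['chair_height_in']} in., monitor to "
            f"{specs['monitor_height_in']} in., keyboard to {specs['keyboard_height_in']} in.")

record_evaluation("E1234", chair_height_in=18.0, monitor_height_in=46.0, keyboard_height_in=27.5)
print(setup_for_relocation("E1234"))
```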
The ergonomics specialist explained that some action is typically taken for each and every job where there is a problem. In fact, the ergonomics specialist said there is value even in just talking to the employee on the problem job because the ergonomics specialist can sometimes identify bad work practices that are contributing toward his or her discomfort. To ensure that controls are effective over the long term, the facility also has developed a database that contains the results of administrative workstation evaluations. This information is used when an administrative employee relocates (which happens frequently) to ensure that the employee’s new workstation is set up right the first time. The process is really “never finished” and involves continuous monitoring, according to the team leader and the ergonomics specialist. Regular walk-throughs of the facility are conducted by the ergonomics specialist to enhance awareness and increase accessibility of ergonomic assistance to employees. Both the health center staff and the ergonomics specialist follow up on employees who have reported injuries or symptoms to the health center. Employees on the job, and other assembly and engineering staff, also provide feedback on how well controls are working. Illustrating the iterative nature of developing controls, when an adjustable-height workstation design was tested on the production floor, employee feedback revealed that this design was unstable and allowed products to fall off. Using this feedback and working with a vendor, the ergonomics team and specialist developed a new design. The result was an adjustable table, referred to as “Big Joe,” which was essentially a fork lift with its wheels removed. This design proved to be much more stable. In some cases, the ergonomic hazard cannot be totally eliminated. One job that has been difficult for Lewisville to control involves the need for employees to fit wire harnesses into small openings of a potting mold in order to protect connectors from vibration inside the missile. This job requires considerable force, since the hand must be used as a clamp to fit the wiring into place. While the ergonomics specialist has experimented by having employees use pliers and different connectors and has asked tooling engineers to look at the job, no satisfactory engineering control has yet been developed. Lewisville has discovered that sometimes minor changes in product design can have a major impact on reducing ergonomic hazards. An example of this involved the task of painting the inside of a particular type of missile. Employees were getting injured and experiencing discomfort from twisting and turning their wrists to paint in this confined space. After discussing the problem with the government contracting officers, Lewisville officials learned that the customer did not really need this product to be painted—that this had been required by military specifications that were now outdated. As a result of these discussions, this task was eliminated, significantly reducing the ergonomic hazards associated with the job. Investments in technological advances in the electronics industry that have improved productivity or product quality have also led to ergonomic improvements—even though this was not necessarily the objective of these investments. By automating many of the steps in circuit board assembly over the last decade, Lewisville has eliminated much of the manual assembly work and, thereby, the associated ergonomic risks. 
For example, a laser-etched stainless steel stencil is now placed on the board, an automated squeegee applies the paste to the board, and the boards are then fed into a machine that loads components via feeder reels and chip shooters. In these highly automated work areas, there are few ergonomic hazards. A mix of controls is employed. However, priority is given to engineering controls over administrative controls, which are viewed as an "interim solution." Many of the engineering solutions, however, are relatively simple or "low-tech," involving, for example, modifications to workstations so they are more comfortable for the user. These low-tech engineering solutions include installing adjustable-height workstations, replacing older microscopes with more comfortable "ergoscopes," placing padding along the edges of the workstation, and raising the circuit boards with foam for hand-intensive work. Hoists are used to load multiple circuit boards (which can weigh up to 60 pounds) into a vapor system machine to be primed and coated. Many of the "low-tech" controls are also low cost. Average cost estimates developed by the ergonomics team for the Site Safety QIT are $15 to $20 for changes to administrative workstations and $50 to $1,000 for changes to manufacturing workstations. Only if a special tool is required (which is not often, according to the ergonomics team leader) to address a problem at a manufacturing workstation are costs significantly greater. Virtually every workstation improvement can be made without going through the facility's capital approval cycle, which is required for investments over $1,500. "High-tech" engineering controls, however, are sometimes necessary. For example, the production engineering department developed a laser welder to eliminate some of the hand soldering required in the production of microwave circuit boards. Removing the coating around components to fix a faulty circuit board has also been automated with the use of a "microblaster." Before the microblaster, workers had to pick off the coating using tweezers. Administrative controls are also used, particularly when it is not economical or feasible to implement engineering controls. For example, Lewisville is currently "ramping down" its production of one type of missile. Therefore, job rotation is being used on problem jobs related to the production of this missile to minimize employees' exposure to hazards. Another type of administrative control used at Lewisville is its "stretch program." Currently, employees in most of the work areas take 10- to 15-minute stretch breaks twice a day. The purpose of the stretch breaks is to reduce both the physical and psychosocial stress of repetitive work and exposure to other ergonomic hazards. In addition, stretch breaks have sometimes led employees to ask an ergonomics team member to look at a work process or workstation and help them find a more comfortable solution. According to the ergonomics training coordinator, some managers at first felt that the stretch program was "a waste of time." However, since implementing this program, participants have reported that they feel better and are less fatigued, and some of the managers who were previously skeptical have been pleased by these results.
One at-risk work area—where the majority of all injuries and illnesses at the facility had previously been recorded—saw MSDs drop dramatically after stretch breaks were instituted, which has contributed to an improved injury and illness incidence rate for the facility as a whole. All employees at the Lewisville facility are required to take a general ergonomics awareness course. Each employee must take at least 1 hour of this training every 3 years. Although training staff had initially proposed that this course be longer and offered annually, facility management was concerned that this was too much of a time commitment. As a result, the awareness training requirement was reduced. Lewisville also offers a wide range of both general awareness activities and targeted ergonomics training. "ERGO Days," for example, is an annual 3-day event sponsored by the ergonomics team. Team members develop participatory, educational displays set up throughout the facility featuring best ergonomic practices for work and the home, computer accessories, tool demonstrations, and ergonomic workstation adjustments. Similarly, the "wing-by-wing" measurement campaign and the administrative workstation adjustment campaign spread awareness and include a one-on-one educational component. The ergonomics team also sponsors hand tool demonstrations for engineers, technicians, assemblers, and purchasers. These demonstrations are educational in nature in that they discuss, for example, the importance of replacing worn tools. In addition, Lewisville staff can access an Internet ergonomics home page. Finally, the Lewisville facility publishes an environmental, safety, and health newsletter that often features articles about ergonomics. Training opportunities provided to employees are (1) site specific, so instruction is relevant to the employee (for example, photos and videotapes of work areas are taken to facilitate class discussion, and training is conducted within a team's work area); (2) interactive and often team based, with emphasis on problem solving and practical solutions (the courses focus on problems employees are experiencing on their jobs, sometimes without disruption to the production cycle); and (3) results oriented, in that training staff and management plan courses together, so specific goals and expectations are agreed upon. Courses offered at Lewisville include "Ergonomics for Computer Users" for all employees (including assembly workers if they also use computers) and "Ergonomic Audit for Computer Users" for all employees who spend more than 4 hours per day using a computer. The course "Factory Ergonomics Awareness" is designed to teach individuals how to identify and correct ergonomically unsound workplace conditions and activities. This course encourages the actual development and implementation of controls, with examples taken from participants' own work areas. At least 95 percent of staff have taken this class. "Advanced Ergonomics for Electronic Assemblers" is specifically tailored to employees who work in this at-risk job classification, and team-based instruction is used. Assembly teams are taught how to identify risks and to be self-directed in addressing problems. "Advanced Ergonomics for Teams that Handle Materials" is another team-based course for an at-risk job classification, which includes on-the-job training as well as classroom training. In this course, the ergonomics specialist helps the team identify a problem and develop and implement controls.
A "Back Injury Prevention" course is offered to all personnel who lift as part of their jobs. Strong links between Lewisville's ergonomics program and medical management staff have been established to ensure early reporting and prompt evaluation. Lewisville (like every other facility within the Systems Group) has a health center staffed by two contract nurses. A senior nurse serves all four facilities within the Systems Group. Additional medical management staff include the disability coordinator (who is also a nurse) and the lost-time intervention manager. Medical management staff participate on all facility teams for safety, ergonomics, and lost-time intervention. These links were established because medical management staff recognized that, to have an impact on reducing injury and illness rates and their associated economic costs, they needed to participate on various teams to provide input into the facility's ergonomic activities. The medical management process was described as follows. First, the employee reports to the health center, where a physical assessment is made and a medical history is taken. If symptoms or a diagnosis of an MSD is involved, the employee is asked to fill out a portion of the Ergonomic Evaluation Questionnaire, which is then sent to the ergonomics specialist. In addition, an Injury/Illness Investigation Report is prepared for the Accident Review Board of the safety department. The ergonomics specialist is supposed to respond within 3 workdays by conducting a job analysis. Follow-up on the employee is done by medical management staff every week, and if there is no improvement, the health center recommends the employee see a doctor. The disability coordinator is responsible for developing relationships with local health care providers and for compiling a list of doctors who are conservative in their treatment approach, are familiar with the work at Lewisville, and understand the facility's return-to-work program. Because state law precludes the health center from recommending a specific doctor, a list of doctors is provided to employees only if they request it. TI also has a list of preferred providers for hand surgeries if such treatment is called for. Identifying doctors and developing relationships with them have been challenging tasks at Lewisville, given the multitude of doctors in the surrounding Dallas metropolitan area. If the employee is out for 6 days or more, a special evaluation of the job is performed to help the doctor determine how the injured employee should be accommodated. If a determination is made that this MSD is a workers' compensation case, regular follow-up is conducted by health center staff and the ergonomics specialist. Lewisville also uses its lost-time intervention program to return employees to transitional or restricted-duty work. This is key to cost savings, according to the manager of this program, because the company is insured through a third-party administrator, and TI pays out of pocket if an employee stays at home. In addition to cost savings, Lewisville's return-to-work program also offers other benefits, according to medical management staff: communication between the employee and the facility is maintained, and the employee feels more valued, which can accelerate the healing process. Under Lewisville's return-to-work program, the lost-time intervention manager and other medical management staff begin to track employees who are absent from work because of an injury or illness, whether or not it was related to work.
These employees are encouraged to return to work. The lost-time intervention manager assists the medical management staff to communicate with the doctor, the workers’ compensation office, and the insurance office, as necessary. In 1995 alone, Lewisville’s return-to-work activities resulted in 81 employees coming back to work. A corporate safety official said that before implementing this program, employees could easily become “lost in the system.” Once they are back at work, employees’ conditions are monitored. Typically, injured employees can be accommodated within their home work area on a restricted basis. Several things have been done to facilitate these placements, including developing a database of available jobs for workers on restriction and creating a special account that covers the payroll costs of employees on light duty (so the costs are not charged to that home work area’s budget). If the limitations are permanent and prohibit the employee from performing essential job functions with reasonable accommodation, the employee is referred to the TI placement center for job search and other placement assistance. Since 1995, a total of only four employees from the several facilities composing the Systems Group have been transferred to TI’s placement center because they could not be accommodated. Corporate safety and health officials at TI strongly believed in the success of Lewisville’s ergonomics program, citing the reductions in injuries, illnesses, and associated costs. In fact, because the program has already achieved major reductions in injuries and illnesses, officials have now set their sights on improving productivity and other performance-related goals. Officials said Lewisville has also begun to measure its progress in implementing particular initiatives and awards bonuses to members of the ergonomics team—which can total $300 to $500 a person—on the basis of progress achieved. For example, the facility uses a “productivity matrix,” which assigns points on the basis of the accomplishment of particular tasks for individual ergonomic projects, to assess its progress on its administrative workstation adjustment campaign. Lewisville also tracks the progress toward other targets, such as implementing at least 10 special projects (“ERGO Days” was one of these), developing an action plan to respond to the corporationwide safety audit within 5 days, and providing 1 hour of awareness training to 90 percent of the employees at the facility. Using the productivity matrix, Lewisville compares its performance with that of other facilities across TI and other companies participating in the North Texas Ergonomics Consortium. Corporate safety officials said that TI is probably in a better position than most companies to measure its progress in reducing MSDs because it is a “data-rich” company. Nonetheless, officials mentioned several factors that affected their ability to measure program performance. Workers’ compensation data provide evidence that the ergonomic efforts at Lewisville are helping to reduce costs associated with MSDs. To capture MSDs, Lewisville tracks “repetitive motion” and “body stress.” “Body stress” includes all strains and sprains and actually represents two categories from the workers’ compensation database: “strains and sprains associated with manual material handling” and “all other strains and sprains.” As figure VII.1 shows, Lewisville achieved a 91-percent reduction in workers’ compensation costs for MSDs—from $2.6 million in 1991 to $224,000 in 1996. 
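As a rough arithmetic check of that figure (an illustrative calculation based on the dollar amounts cited above, not a computation presented in the report):

\[
\frac{\$2{,}600{,}000 - \$224{,}000}{\$2{,}600{,}000} = \frac{\$2{,}376{,}000}{\$2{,}600{,}000} \approx 0.914,
\]

or roughly a 91-percent reduction, which is consistent with the reduction Lewisville reported.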
Additionally, the average cost for each MSD claim declined from $21,946 in 1991 to $5,322 in 1996 (see fig. 3). Corporate officials said that increased awareness of ergonomics can lead to higher reporting of MSDs and, consequently, higher workers' compensation claims and costs. The officials said the high cost of MSDs in 1991 can be attributed to the efforts the facility made to increase awareness in the late 1980s; similarly, the spike in 1994 can be attributed to heavy awareness training in the early 1990s, as well as a notification sent to all employees in 1993 of a possible program shutdown due to cutbacks in federal contracts (the shutdown was ultimately averted). Officials said employees are more likely to report injuries before a shutdown in order that they might collect workers' compensation benefits should they be laid off. Officials also said they could not estimate total program costs or determine whether the reductions in MSD costs and other outcomes exceeded program expenditures. A facility official said it would be difficult to distinguish between those investments made for ergonomic reasons and those made for other purposes, such as to enhance productivity. Trends in overall injuries and illnesses reported in the OSHA 200 log are important because MSDs account for a significant portion of all injuries and illnesses at our case study facilities and because these data are what OSHA looks at when inspecting a facility. Furthermore, OSHA 200 data are key to how TI measures safety and health performance. In fact, using OSHA 200 data, Lewisville was able to demonstrate that it had achieved in 1996 its yearly target of a 20-percent reduction in the overall incidence rate and the lost or restricted workday rate. Meeting this goal contributed to the corporationwide goal of eliminating all preventable occupational and nonoccupational injuries and illnesses by the year 2005. The facility's incidence rate—the number of injuries and illnesses per 100 employees—for all injuries and illnesses recorded in its OSHA 200 log declined from 5.5 in 1991 to 1.5 in 1996 (see fig. 4). The 1995 incidence rate of about 2.1 was below the industry average of 3.8 for other manufacturers of semiconductors and related devices in 1995, the most recent year for which these data are available. Additionally, between 1991 and 1996, Lewisville reduced the number of lost and restricted days for every 100 employees by 66 days and 15 days, respectively (see fig. 2). While TI relies on OSHA 200 log data to track corporate performance in safety and health, facility officials said it is important that the right OSHA data be tracked. For example, officials said it is more meaningful to track whether or not an injury or illness involved any lost or restricted days in the first place than to track the actual number of lost and restricted days. Corporate and facility officials told us that, since Lewisville has already achieved major reductions in injury and illness rates, the facility is looking for new ways to measure progress made in productivity. However, they also said they are just beginning to consider how productivity gains through ergonomic improvements might be documented. These officials believe that productivity gains will be more difficult to demonstrate than injury and illness reduction, because most of the "low-hanging fruit" (that is, problem jobs that are easier to identify and control) has already been addressed at Lewisville.
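For reference, the incidence rates compared above are expressed as cases per 100 employees. Under the standard OSHA convention for such rates (an assumption here; the report defines the rate only as the number of injuries and illnesses per 100 employees and does not state the exact formula the facility used), the rate is computed as

\[
\text{incidence rate} = \frac{\text{number of recordable injuries and illnesses} \times 200{,}000}{\text{total hours worked by all employees}},
\]

where 200,000 represents the hours worked in a year by 100 full-time employees (100 employees working 40 hours per week, 50 weeks per year).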
Currently, Lewisville is piloting productivity studies. For example, the ergonomics team will be examining production bottlenecks to which ergonomic hazards might be contributing. The team refers to these efforts as its Continuous Flow Manufacturing Program. Recent efforts to improve hand tools are part of this initiative. In addition, the Systems Group Ergonomics Council recommended that Lewisville and other Systems Group facilities and their respective ergonomics teams begin to compare the productivity of operations at workstations that have adjustable-height equipment with the productivity of operations at workstations that do not have this equipment. Productivity changes will be measured in terms of cycle time, output, and ergonomic gains. In addition, to document any productivity changes, the ergonomics specialist plans to videotape these jobs before and after the introduction of the adjustable-height workstations. Evidence regarding morale improvement was largely anecdotal. However, corporate and facility staff emphasized that the ergonomics efforts at TI were consistent with quality management principles and that employee participation and empowerment are key to employee satisfaction. Medical management staff said that medical management and return-to-work efforts have benefited morale because they help demonstrate to employees that they are valued. Other significant contributors to this report included Robert Crystal, Senior Attorney, who reviewed the legal implications of our findings; Benjamin Ross, Evaluator, who obtained information from state-operated programs on efforts to encourage employers to reduce MSDs; George Erhart, Senior Evaluator, who helped conduct and analyze the results of the case studies; Ann McDermott, who developed the graphics used in this report; Nancy Crothers, who edited and processed this report; and Bill Tacy, Special Assistant to the Director, Office of Security and Safety, and Joe Kile, Supervisory Economist, who contributed valuable comments and feedback during the planning and implementation of this review. | Pursuant to a congressional request, GAO provided information on ergonomics programs to reduce work-related musculoskeletal disorders (MSDs), focusing on: (1) the core elements of effective ergonomics programs and how these elements are operationalized at the facility level; (2) whether these programs have proven beneficial to the employers and employees that have implemented them; and (3) the implications of these employers' experiences for other employers and the Occupational Safety and Health Administration (OSHA).
GAO noted that: (1) experts, research literature, and officials at GAO's case study facilities generally agreed that effective ergonomics programs must have the following core set of elements to ensure that ergonomic hazards are identified and controlled to protect workers: (a) management commitment; (b) employee involvement; (c) identification of problem jobs; (d) development of solutions (that is, controls) for problem jobs; (e) training and education for employees; and (f) appropriate medical management; (2) although the ergonomics programs at all of the case study facilities displayed each of these elements, there was often significant variety in how they were implemented; (3) this variety typically resulted from factors such as differences in the facilities' industries and product line, corporate culture, and experiences during the programs' evolution; (4) the processes used by the case study facilities to identify and control problem jobs were typically informal and simple and generally involved a lower level of effort than was reflected in the literature; (5) controls did not typically require significant investment or resources and did not drastically change the job or operation; (6) officials at all the facilities GAO visited believed their ergonomics programs yielded benefits, including reductions in workers' compensation costs associated with MSDs; (7) these facilities could also show reductions in overall injuries and illnesses as well as in the number of days injured employees were out of work; in some cases, however, the number of restricted workdays increased as a result of an increased emphasis on bringing employees back to work; (8) facility officials also reported improved worker morale, productivity, and product quality, although evidence of this was often anecdotal; (9) demonstrating overall program performance was complicated by uncertainties associated with determining what types of injuries should be considered MSDs and analyzing the program's effect on injuries in light of other complicating factors, such as limited information collected by employers on the costs to implement the programs; (10) GAO's work revealed that positive results can be achieved through an approach incorporating certain core elements that are implemented in a simple, informal, site-specific manner; and (11) federal and state-operated OSHA programs have undertaken a number of initiatives that can provide employers flexibility, consistent with these case study experiences; however, questions remain as to whether these efforts alone are sufficient to protect employees from ergonomic hazards. |
China became the 143rd member of the WTO on December 11, 2001, after almost 15 years of negotiations. These negotiations resulted in China's commitments to open and liberalize its economy and offer a more predictable environment for trade and foreign investment in accordance with WTO rules. The United States and other WTO members have stated that China's membership in the WTO provides increased opportunities for foreign companies seeking access to China's market. The United States is one of the largest sources of foreign investment in China, and total merchandise trade between China and the United States exceeded $145 billion in 2002, according to U.S. trade data. However, the United States still maintains a trade deficit with China: Imports from China totaled $124.8 billion, while exports totaled $20.6 billion in 2002. Through the first half of 2003, exports to and imports from China grew about 25 percent compared with the same period in the previous year. The U.S. government's efforts to ensure China's compliance with its WTO commitments are part of an overall U.S. structure to monitor and enforce foreign governments' compliance with existing trade agreements. At least 17 federal agencies, led by the Office of the U.S. Trade Representative (USTR), are involved in these overall monitoring and enforcement activities. USTR and the departments of Agriculture (USDA), Commerce, and State have relatively broad roles and primary responsibilities with respect to trade agreement monitoring and enforcement. Other agencies, such as the departments of the Treasury and Labor, play more specialized roles. Federal monitoring and enforcement efforts are coordinated through an interagency mechanism comprising several management- and staff-level committees and subcommittees. The congressional structure for funding and overseeing federal monitoring and enforcement activities is similarly complex, because it involves multiple committees of jurisdiction. Congressional agencies, including GAO, and congressional commissions also support Congress's oversight of China-WTO trade issues. In addition to the executive branch and congressional structures, multiple private sector advisory committees exist to provide federal agencies with policy and technical advice on trade matters, including trade agreement monitoring and enforcement. China's accession agreement is the most comprehensive of any WTO member's to date, and, as such, verifying China's WTO compliance is a challenging undertaking for two main reasons. The first reason is the scope of the agreement: The more than 800-page document spans eight broad areas and sets forth hundreds of individual commitments on how China's trade regime will adhere to the organization's agreements, principles, and rules and allow greater market access for foreign goods and services. The second reason is the complexity of the agreement: Interrelated parts of the agreement will be phased in at different times, and some commitments are so general in nature that, in some cases, it will not be immediately clear whether China has fully complied with its obligations. The comprehensive scope of China's WTO accession agreement represents a challenge for the U.S. government's compliance efforts. The commitments cover eight broad areas of China's trade regime, including import regulations, agriculture, services, and intellectual property rights. Within these eight broad areas, we identified nearly 700 individual commitments that China must implement to comply with its WTO obligations.
China has also committed to lower a variety of market access barriers to foreign goods. These obligations include commitments to reduce or eliminate tariffs on more than 7,000 products and eliminate nontariff barriers on about 600 of these products. Additionally, China made commitments to allow greater market access in 9 of 12 general services sectors, including banking, insurance, and telecommunications. The scope of compliance problems raised in the first year of China’s membership reflects the scope of the agreement itself. Although the executive branch’s first-year assessment of China’s implementation of its WTO commitments acknowledged China’s effort and progress in some areas, the assessment also noted compliance problems in all eight broad areas of China’s trade regime. In particular, the executive branch emphasized problems in agriculture, services, and intellectual property rights, as well as a crosscutting concern about transparency. Some preliminary assessments of China’s second-year implementation from the private sector suggest that many of those problems persist and that concern about the number and scope of compliance issues continues to increase. While many of China’s commitments were due to be phased in upon China’s accession to the WTO in 2001, a number of interrelated commitments are scheduled to be implemented over extended time frames. For example, commitments on trading rights and distribution are not scheduled to be fully phased in until the end of 2004 and 2006, respectively. As a result, foreign businesses will be unable to fully integrate import, export, and distribution systems until that time. Additionally, although market access for most goods and services will be phased in by 2007, some tariffs will not be fully liberalized until 2010. (See fig. 1.) The varying nature of China’s commitments also complicates U.S. government compliance efforts. On the one hand, some of China’s WTO obligations require specific actions from China, such as reporting particular information to the WTO, or lowering a tariff on a product. Assessing compliance with these specific types of commitments is relatively easy. On the other hand, a significant number of commitments are more general in nature and relate to systemic changes in China’s trade regime. For example, some commitments of this type require China to adhere to general WTO principles of nondiscrimination and transparency. Determining compliance with these more general types of commitments is more difficult and can complicate the dialogue over achieving compliance. It is useful to note that many private sector representatives told us that implementing these general types of commitments, such as those that relate to the rule of law, was relatively more important than carrying out specific commitments to increase market access and liberalize foreign investment in China. Specifically, China’s commitments in the areas of transparency of laws, regulations, and practices; intellectual property rights; and consistent application of laws, regulations, and practices emerged as the most important areas of China’s accession agreement in our September 2002 survey of and interviews with U.S. companies operating in China. However, private sector representatives also indicated that they thought these rule-of-law-related commitments would be the most difficult for China to implement. 
Because China is such an important trading partner, ensuring China's compliance with its commitments is essential and requires a sustained effort on the part of the executive branch, Congress, the private sector, and the WTO and its other members. (See fig. 2.) For example, the executive branch has extensive involvement in monitoring and enforcing China's commitments, and additional resources and new structures have been applied to these tasks. However, the United States' first-year experience showed that it takes time to organize these structures to effectively carry out their functions and that progress on the issues can be slow. In addition to the executive branch's efforts, Congress has enacted legislation, provided resources, and established new entities to increase oversight of China's compliance. The private sector also has undertaken a wide range of efforts that provide on-the-ground information on the status of China's compliance efforts and input to the executive branch and to Congress on priorities for compliance efforts. Finally, the WTO has existing mechanisms as well as a new, China-specific mechanism created as a means for WTO members to annually review China's implementation of its commitments. Nonetheless, despite the involvement of all of these players in the first year, the United States will need a sustained—and cohesive—approach to successfully carry out this endeavor. China's accession to the WTO has led to increased monitoring and enforcement responsibilities and challenges for the U.S. government. In response to these increased responsibilities, USTR and the departments of Commerce, Agriculture, and State have undertaken various efforts to enhance their ability to monitor China's compliance with its WTO commitments. Agencies have reorganized or established intra-agency teams to improve coordination of their monitoring and enforcement efforts. Additionally, the agencies have added staff in Washington, D.C., and overseas in China to carry out these efforts. For example, estimated full-time equivalent staff in key units that are involved in China monitoring and enforcement activities across the four agencies increased from about 28 to 53 from fiscal years 2000 to 2002, with the largest increases at the Department of Commerce. On a broader level, USTR has established an interagency group to coordinate U.S. government compliance activities. The interagency group, which utilizes the private sector to support its efforts, was very active in monitoring and responding to issues during the first year of China's membership. Nevertheless, it took some time for agencies to work out their respective roles and responsibilities in the interagency group. Monitoring and enforcing compliance with WTO requirements is a complex and challenging task, as shown by our 2002 assessment of the U.S. government's efforts to ensure China's compliance with commitments regarding administration of tariff-rate quotas (TRQ) for certain bulk agricultural commodities. TRQ implementation problems in 2002 included concerns about Chinese authorities missing deadlines for issuing TRQs on certain bulk agricultural commodities; disagreement over whether China's interpretation of its commitments met WTO requirements; and questions about whether China's administrative practices were in keeping with its obligations. The United States has undertaken both bilateral and multilateral efforts to settle these complex issues. The large number of U.S.
government activities on these issues alone, which still are not fully resolved, included at least monthly engagements with China, illustrating the extensive effort agencies must undertake to identify problems, gather and analyze information, and respond to some issues. Congress has had an active role in overseeing trade relations between the United States and China and in setting expectations for vigilant monitoring and enforcement of China's WTO commitments. In the U.S.-China Relations Act of 2000, Congress found that for the trade benefits with China to be fully realized, the U.S. government must effectively monitor and enforce its rights under China's WTO agreements. To accomplish this, Congress authorized additional resources at USTR and key executive branch departments; called for an annual review of China's compliance in the WTO; established the Congressional-Executive Commission on the People's Republic of China to monitor China's compliance with human rights and the development of the rule of law in China; established a Task Force on the Prohibition of Importation of Products of Forced Prison Labor from China; authorized a program to conduct rule of law training and technical assistance in China; and enacted legislation implementing China's WTO commitment allowing WTO members to apply a product-specific safeguard when increases in Chinese imports threaten or cause injury to domestic industry. Congress also required that the executive branch issue several China trade-related reports to assist its continuing oversight. These requirements included USTR's annual report on China's compliance, which is based in part on input from the general public. In addition, this Committee, together with the Senate Finance Committee (on a bipartisan basis), requested that we continue our work on China-WTO issues and report on China's compliance, executive branch efforts, and U.S. business views over 4 years. Finally, congressional committees and commissions have held at least 35 China-focused hearings since 2001—a further indication of congressional involvement in U.S.-China issues. U.S. businesses operating in China provide valuable assistance in monitoring the status of China's implementation of its WTO commitments, and, as such, effective coordination between the U.S. government and the private sector is essential. For example, industry-specific expertise and input from within the private sector are indispensable components for determining whether the scores of highly technical laws and regulations that the Chinese government issues are WTO compliant and being implemented. Further, private sector industry and business associations are active in conducting their own analyses and issuing reports on China's WTO compliance, providing input to congressional committees and commissions, engaging the Chinese on specific WTO issues, and representing their members' interests to the U.S. government in order to inform U.S. compliance priorities. The WTO's framework of more than 20 multilateral agreements covers various aspects of international trade and sets forth the rules by which China and other members must abide. Notably, the WTO's dispute settlement mechanism is intended to give all WTO members access to a formal means of pursuing and resolving WTO-related compliance issues with other members, including China. Thus far, no WTO member has initiated a dispute settlement case against China, although some Members of Congress and private sector groups have urged the U.S.
government to initiate a case related to China's administration of TRQs. Another WTO mechanism relates specifically to China. China's accession commitments created a Transitional Review Mechanism (TRM) as a means for WTO members to annually review China's implementation of its commitments for 8 years, with a final review in the 10th year following China's accession. Just as establishing the TRM was one of the more challenging issues to negotiate with China, implementing the TRM process during the first year (2002) also proved challenging. Disagreement among WTO members, including China, over the form, timing, and thoroughness of the TRM led to a limited initial review of China's trade practices. The review did not meet U.S. expectations and illustrated the challenges of gaining consensus with China and other members within this multilateral forum over implementation issues. Although U.S. officials cited benefits from participating in the initial review, they expressed disappointment over the first-year results. U.S. officials are hopeful that future reviews will be more comprehensive. The second-year TRM is under way, but it is still too early to determine if the current review will meet U.S. and other WTO members' expectations. In assessing China's first-year implementation efforts, the executive branch, other WTO member government officials, and many private sector representatives observed that, despite several first-year compliance problems, China had demonstrated a willingness to implement its WTO commitments. For example, the executive branch noted China's progress in revising the framework of laws and regulations governing various aspects of China's trade regime. In the second year of China's membership, however, concerns about the number of compliance problems have grown, as has the number of events that have potentially interfered with China's implementation of its commitments. Specifically, some observers have noted events such as changes in China's central government leadership, reconfigurations of key ministries, a growing concern about unemployment and labor unrest, and the SARS outbreak as possibly temporarily interrupting progress on implementation. In closing, Mr. Chairman, the theme of my testimony is that a cohesive and sustained approach is necessary to monitor and enforce China's commitments to the WTO. I believe that this hearing, which focuses on the key elements of the U.S.-China economic relationship and brings together three of the key players, is exactly the kind of oversight that is necessary to ensure that a cohesive and sustained approach is actually carried out. Mr. Chairman and Members of the Committee, this concludes my prepared statement. I would be happy to answer any questions on my testimony that you may have. For further information regarding this testimony, please contact Adam Cowles at (202) 512-9637. Matthew Helm, Rona Mendelsohn, Richard Seldin, and Kim Siegal also made key contributions to this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | China's accession to the World Trade Organization (WTO) in December 2001 created substantial opportunities for U.S. companies seeking to expand into China's market.
In joining the WTO, China agreed to liberalize its trade regime and open its markets to foreign goods and services. However, the U.S. government has become concerned about ensuring that China honors its commitments to offer a more predictable environment for trade. GAO was asked to describe (1) the compliance monitoring challenges associated with the scope and complexity of China's WTO commitments and (2) the efforts to date of the key players involved in ensuring China's compliance: the executive branch, Congress, the private sector, and the WTO and its other members. GAO's observations are based on its prior analysis of China's WTO commitments, its previous survey of and interviews with private sector representatives, and its examination of first-year efforts to ensure China's WTO compliance. The scope and complexity of China's WTO commitments present two main challenges to verifying China's compliance with its WTO accession agreement. First, the agreement is very broad: It encompasses more than 800 pages, spans eight broad areas, and sets forth hundreds of individual commitments on how China's trade regime will adhere to the WTO's agreements, principles, and rules and allow greater market access. Second, the agreement is complicated: Interrelated parts will be phased in at different times, and some commitments are so general in nature that it may not be immediately clear whether China has fully complied with its obligations. Each of the key players involved in ensuring China's compliance—the executive branch, Congress, the private sector, and the WTO and its members—has made efforts to do so. However, the first-year experience in this regard has demonstrated that these efforts will need to be sustained over a long period. The executive branch has applied additional resources and new intra-agency teams to these efforts, but it takes time to organize these activities. Congress has enacted legislation and established new entities to increase oversight of China's compliance. The private sector also has provided information to the executive branch and to Congress on the status of China's compliance efforts. Within the WTO, a China-specific mechanism was established as a means for WTO members to annually review China's implementation of its commitments. Nonetheless, GAO's analysis indicates that a sustained approach is needed to ensure China's compliance.
The Government Performance and Results Act of 1993 is a key component of a statutory framework that the Congress put in place during the 1990s to promote a new focus on results. Finding that waste and inefficiency in federal programs were undermining confidence in government, the Congress sought to hold federal agencies accountable for the results of federal spending through regular and systematic performance planning, measurement, and reporting. Among its several purposes, the act is designed to improve congressional decision-making by providing more objective information on the relative effectiveness and efficiency of federal programs and spending. That is, with regard to spending decisions, the act aims for a closer and clearer link between the process of allocating resources and the expected results to be achieved with those resources. The concept of linking performance information with the budget is commonly known as performance budgeting. Within the past 50 years, initiatives at the local, state, and federal levels of government as well as in other nations have sought to link performance expectations with specific budget amounts. In essence, the concept of performance budgeting assumes that a systematic presentation of performance information alongside budget amounts will improve budget decision-making by focusing funding choices on program results. Specifically, performance budgeting seeks to shift the focus of attention from detailed items of expense—such as salaries and travel—to the allocation of resources based on program goals and measured results. In this sense, the Results Act is the most recent of a series of federal initiatives embodying concepts of performance budgeting. At the federal level and elsewhere, performance budgeting initiatives have encountered many challenges. Key challenges include a lack of credible and useful performance information, difficulties in achieving consensus on goals and measures, dissimilarities in program and fund reporting structures, and limitations of information and accounting systems. For example, some prior initiatives used new and unfamiliar formats that were layered onto existing budget and appropriations processes, compromising the goal of integrating performance information into the budget process. Specifically in the federal government, past performance budgeting initiatives resulted in unique and often voluminous presentations unconnected to the structures and processes used in congressional budget decision-making. When viewed collectively, these past initiatives suggest three common themes. First, any effort to link plans and budgets must explicitly involve the executive and legislative branches of our government. Past initiatives often faltered because the executive branch developed plans and performance measures in isolation from congressional oversight and resource allocation processes. Second, the concept of performance budgeting will likely continue to evolve. Past initiatives demonstrated that there is no single definition of performance budgeting that encompasses the range of needs and interests of federal decisionmakers. Third and perhaps most importantly, past initiatives showed that performance budgeting cannot be viewed in simplistic terms—that is, resource allocation cannot be mechanically linked to results. The process of budgeting is inherently an exercise in political choice—allocating scarce resources among competing needs—in which performance information can be one, but not the only, factor underlying decisions. 
Ultimately, the promise of any performance budgeting initiative, including the Results Act, lies in its potential to more explicitly infuse performance information into budgetary deliberations, thereby changing the terms of debate from simple inputs to expected and actual results. The Results Act differs from earlier performance budgeting initiatives in several key respects but can be viewed as a continued evolution of the concept. At its most basic level, the act requires agencies’ annual performance plans to directly link performance goals and the program activities of their budget requests. Testifying on the Results Act before its passage, the Director of OMB characterized this linkage as a “limited—but very useful—form of performance budgeting . . . .” The act also requires that another form of performance budgeting be tested during performance budgeting pilots. The Results Act requires an agency’s annual performance plan to cover each program activity in the President’s budget request for that agency. Subject to clearance by OMB and generally resulting from negotiations between agencies and appropriations subcommittees, program activity structures are intended to provide a meaningful representation of the operations financed by a specific budget account. Typically, the President’s annual budget submission encompasses over 1,000 accounts and over 3,000 program activities. As the committee report accompanying the act noted, however, the program activity structure is not consistent across the federal government but rather is tailored to individual accounts. The committee report further cautioned that agencies’ annual plans should not be voluminous presentations that overwhelm rather than inform the reader. Accordingly, the act gives agencies the flexibility to consolidate, aggregate, or disaggregate program activities, as illustrated in figure 1, so long as no major function or operation of the agency is omitted or minimized. In addition to this flexibility, agencies also have the option to propose changing their budget structures, subject to concurrence from OMB and the Congress. OMB’s guidance regarding this provision of the act set forth an additional criterion: plans should display, generally by program activity, the funding level being applied to achieve performance goals. That is, OMB expected performance plans to show how amounts from the agency’s budget request would be allocated to the performance goals displayed in the plan. In addition to mandating a linkage between budget requests and performance plans, the act required that pilot projects be used to test another approach to performance budgeting. OMB, in consultation with the head of each agency, was required to designate for fiscal years 1998 and 1999 at least five agencies to prepare budgets that “present, for one or more of the major functions and operations of the agency, the varying levels of performance, including outcome-related performance, that would result from different budgeted amounts.” While the act required agencies to define goals consistent with the level of funding requested in the President’s budget, the act’s pilots would also show how performance would change if the agency received more or less than requested. 
OMB was to include these pilot performance budgets as an alternative presentation in the President’s budget for fiscal year 1999 and to transmit a report to the President and to the Congress no later than March 31, 2001, on the feasibility and advisability of including a performance budget as part of the President’s budget. This report is also to recommend whether legislation requiring performance budgets should be proposed. The performance budgeting pilots were scheduled to start in fiscal year 1998 “so that they would begin only after agencies had sufficient experience in preparing strategic and performance plans, and several years of collecting performance data.” In this context, and recognizing the importance of concentrating on governmentwide implementation in 1998, OMB announced on May 20, 1997, that the pilots would be delayed for at least a year. OMB stated that the performance budgeting pilots would require the ability to calculate the effects on performance of marginal changes in cost and funding. According to OMB, very few agencies had this capability, and the delay would give time for its development. In September 1998, OMB solicited agencies’ comments on these pilots, but no agencies have been designated as pilots. At present, OMB has no definite plans for proceeding with the performance budgeting pilots. The complexities that are the hallmark of today’s federal budget account structure and the diverse planning structures associated with broad federal missions were reflected in the approaches used to link budget and planning structures. The fiscal year 1999 performance plans we reviewed frequently depicted complex and imprecise relationships between these structures. Using the Results Act’s flexibility to aggregate, consolidate, and disaggregate existing program activities, most agencies generally linked their program activities to some level of a frequently complex planning hierarchy. These linkages were often presented in a performance plan that was separable from the agency’s budget justification. As a result, plans frequently depicted a relationship between program activities and performance goals that, while consistent with the act’s charge to cover program activities, proved difficult to translate into budgetary terms. However, 14, or 40 percent of the agencies we reviewed, built on these relationships to show how the funding from program activities would be allocated to achieve discrete sets of performance goals. These agencies, in effect, took the first step toward defining the performance consequences of budget decisions. As we have previously reported, the federal government’s budget account structure was not created as a single, integrated framework but rather generally developed over time to respond to specific needs. As a result, budget accounts and program activities within the accounts vary from agency to agency. This complexity was evident for agencies included in this review. For example, the number of budget accounts associated with a given agency’s performance plan ranged from a low of 1 to a high of 118 for the agencies we reviewed, while the number of program activities to be covered by the plan ranged from 6 to 465. The median number of accounts for agencies that we reviewed was 9, and the median number of program activities was 32. Typically, program activity structures were unique not only to each of the agencies in our review, but also to each of the budget accounts within an agency. 
In only 2 agencies—EPA and the Department of Defense—were the same program activity titles repeated across all or groups of an agency’s budget accounts. Agencies’ planning structures were similarly complex, comprised of widely varying numbers and layers of goals. The Results Act and OMB guidance give agencies flexibility in structuring strategic and annual performance goals. Annual performance goals are expected to measure progress in achieving longer term strategic goals. There is no required format or structure and no limitation on the number of goals that can be included in a performance plan. Our sample of performance plans presented a variety of planning structures and many different terms to describe those structures. Twenty-one of the 35 agencies we reviewed used a cascading hierarchy of goals to put their fiscal year 1999 performance in context—that is, these plans placed one or more layers of goals between their strategic and annual performance goals. For example, the Bureau of Land Management (BLM) plan contained a complex hierarchy of goals: 5 “General Goals,” supported by 17 “Strategic Goals,” that were supported by 43 “Performance Goals,” 47 “Long-Term Goals,” and, finally, 64 fiscal year 1999 “Annual Goals.” Figure 2 shows the numbers and layers of goals associated with one of BLM’s five general goals. The remaining 14 agencies, however, had fewer layers of goals. For example, the U.S. Agency for International Development’s (USAID) six strategic goals were supported directly by performance goals with fiscal year 1999 targets. Figure 3 presents the performance goals associated with one of USAID’s strategic goals. To analyze these disparate planning structures, we developed and used a common framework that defined two layers of goals above the annual performance goals stated in the performance plans: (1) strategic goals, to reflect the first goal layer under an agency’s mission statement and (2) strategic objectives, to reflect the next subordinate level of goals under strategic goals. We found that, quantifying just these two layers, most agency plans involved relatively complex presentations. The number of strategic goals in plans we reviewed ranged from 1 to 47, with a median of 5. Just over half of the agencies in our review placed an intervening layer of strategic objectives between strategic and performance goals. In plans where strategic objectives were used, the number of strategic objectives ranged from 5 to 122. Across our sample of agency plans, the median number of strategic objectives used was nine. The variety and complexity of agencies’ planning and budget structures necessarily resulted in a range of approaches to present and link this information. However, across this range, two approaches predominated: 29 performance plans were presented separately from congressional budget requests, and 22 plans established complex linkages of multiple program activities related to multiple performance goals. Although performance plans and budget requests—commonly referred to as justifications of estimates—are both transmitted to the Congress following the President’s budget submission in February, most agencies kept their plans physically separated from their budget submissions, either as entirely separate documents or as separate components appended to the justifications. Only 6 of the 35 plans we reviewed had been fully integrated into the agency’s budget justification (see appendix II). 
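The relationships between program activities and performance goals described above, and the allocation of program activity funding to goals that some plans built on them, can be pictured with a small sketch. The following Python fragment is illustrative only: the program activities, performance goals, dollar amounts, and allocation shares are invented, and no agency in our review necessarily used this mechanism to build its linkage.

    # Sketch of allocating program activity funding to performance goals.
    # Each activity carries its requested funding and a set of goal shares;
    # summing across activities yields a proposed funding level per goal.
    program_activities = {
        "Grants administration": (40.0, {"Goal 1: Timely grant awards": 1.0}),
        "Field operations": (60.0, {"Goal 1: Timely grant awards": 0.3,
                                    "Goal 2: Improved compliance": 0.7}),
    }

    allocation = {}
    for activity, (funding, shares) in program_activities.items():
        for goal, share in shares.items():
            allocation[goal] = allocation.get(goal, 0.0) + funding * share

    for goal, amount in sorted(allocation.items()):
        print(f"{goal}: ${amount:.1f} million")

A one-to-one or one-to-many relationship makes such an allocation nearly automatic; a many-to-many relationship requires judgments, such as the shares above, about how each activity's funding contributes to each goal.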
In addition to separating plans and budgets, almost all agencies retained the budget structure they had used in previous budget submissions. Only three agencies in our review—EPA, the Customs Service, and the Nuclear Regulatory Commission (NRC)—substantially changed their program activity structures, and all cited the Results Act as a factor in this change. Some others—such as the Bureau of Indian Affairs (BIA) and the Department of Veterans Affairs (VA)—noted that such changes were under consideration but proposed no changes in fiscal year 1999. For example, BIA stated that it would work with OMB and congressional committees to “simplify the budget format to mirror the strategic plan.” However, agencies very frequently took advantage of the act’s flexibility to modify—that is, to aggregate, consolidate, or disaggregate—program activities in order to show linkages with performance goals. The extent to which this flexibility was used varied greatly. For example, figure 4 illustrates how the Administration for Children and Families (ACF) consolidated program activities before relating them to performance goals. Figure 5 illustrates how the Health Resources and Services Administration (HRSA) linked some performance goals to disaggregated program activities. In addition to modifying or proposing changes to program activities, agencies in our review used three principal strategies to meet the Results Act’s expectation that annual plans would “establish performance goals to define the level of performance to be achieved by a program activity.” About half (13) of the agencies that made linkages between performance goals and program activities in their performance plans established this connection at the relatively high level of strategic goals or objectives. For example, as shown in figure 4, ACF linked consolidated program activities to strategic objectives, which, in turn, were subsequently associated with performance goals. About the same number of agencies (14) defined direct linkages to performance goals. Figure 5 illustrates how HRSA linked disaggregated program activities directly to its performance goals. Three agencies—the National Aeronautics and Space Administration, the National Science Foundation, and VA—linked program activities to something other than a statement of strategic goals, strategic objectives, or performance goals. For example, the VA plan contained 17 strategic goals and 25 strategic objectives. However, instead of linking program activities directly to these goals and objectives, VA linked both its program activities and performance goals to its 10 business lines. The business lines generally represented different agency functions such as medical care and education. As the above discussion indicates, regardless of the strategy used, most agencies ended with the same basic relationship—many program activities were related to many performance goals (see appendix II). These imprecise, “many-to-many” relationships frequently resulted from agencies’ linking aggregated or consolidated program activities with strategic goals or other groupings of performance goals. For example, in figure 4, ACF consolidated program activities and linked them to the group of performance goals associated with a strategic objective, creating a many- to-many relationship between program activities and performance goals. In contrast, eight plans used a more direct and simple approach—typically linking a single program activity with multiple performance goals. 
Figure 5 shows that HRSA linked a single disaggregated program activity to multiple performance goals. While 30 of 35 plans defined some relationship between program activities and performance goals, 16 of these did not build on that relationship by showing how the funding from program activities would be allocated to discrete sets of performance goals. Ten of these 16 plans did not present funding levels for any set of performance goals in their performance plans. Six of these 16 plans presented funding levels for some set of performance goals without explaining how those funding levels had been derived from program activities in their budget requests. However, 14 agency plans, or 40 percent of those included in our review, both identified the funding for program activities and explained how that funding would be allocated to a discrete set of performance goals. In effect, these 14 plans, which will be discussed more fully in the next section, took the first step in defining the performance consequences of budget decisions. Our review of selected fiscal year 1999 performance plans indicated that agencies with budget and planning structures of widely varying complexity were able to develop approaches toward achieving a fundamental purpose of the Results Act—clarifying the relationship between resources and results. Figure 6 lists the 14 agencies in our review that allocated program activity funding to performance goals in their performance plans. We found that some of the approaches that these agencies used, alone or in combination, were more frequently associated with plans that linked program activity funding to performance goals. For example, agencies that (1) established simple relationships between program activities and performance goals or (2) fully integrated budget justifications and performance plans were significantly more likely to allocate program activity funding to performance goals. In addition, each of the three agencies that changed their budget structures to align them with their planning structures made these allocations. Conversely, plans characterized by imprecise, many-to-many relationships between performance goals and program activities and plans presented separately from budget justifications generally did not present such allocations. Our review generally found few differences in the budget structures of agencies that allocated program activity funding to performance goals and those that did not. For example, the number of an agency’s accounts or program activities was not significantly related to whether a plan presented funding allocations for performance goals. As table 1 shows, for example, the median number of accounts and program activities was nearly the same for agencies that did allocate program activity funding to performance goals and for those that did not. We also considered whether plans that allocated program activity funding to performance goals were more frequently associated with agencies having spending concentrated in one account. To determine if concentration of spending was a significant factor, we calculated the number of agencies for which 75 percent or more of requested spending was associated with a single account. Again, no significant relationship was observed (see figure 7). Four, or 36 percent, of the agencies with spending concentrated in one account allocated program activity funding to performance goals. 
Ten, or 42 percent, of the agencies with spending concentrated in multiple accounts allocated program activity funding to performance goals. Similarly, the complexity of an agency’s program activity structure was not a significant factor in allocating funding to performance goals. We considered whether a simpler program activity structure—that is, one in which program activity titles were repeated across budget accounts—was more frequently associated with allocating funding to performance goals. We found no significant difference in the number of agencies allocating program activity funding to performance goals. Thirteen, or 39 percent of the 33 agencies without common program activity structures, allocated program activity funding to performance goals (see figure 8). Although only two agencies—EPA and the Department of Defense—exhibited common program activity structures, EPA presented such an allocation and DOD did not. Although a particular account and program activity structure was generally not associated with allocating funding to performance goals, there was one significant but perhaps unsurprising exception. All three of the agencies that proposed changing their program activities to be consistent with their planning structures—EPA, NRC, and the Customs Service—allocated funding to performance goals (see figure 9). For example, EPA proposed a uniform program activity structure across all of its accounts in which each program activity represented one of its strategic goals. Figure 10 illustrates how EPA used consolidation to allocate program activity funding to strategic objectives and their supporting performance goals. Fourteen agencies in our review that showed how program activities were allocated to performance goals did not appear to have any common structural elements in their performance plans. Differences in plan complexity—defined in terms of number and layers of goals—were not significant between agencies that allocated program activity funding to performance goals and those that did not. As indicated in table 2, the median number of strategic goals and objectives was the same or similar between these two groups of plans. Although the complexity of an agency’s budget and planning structures generally was not significantly related to whether it linked program activity funding to performance goals, two approaches to linking these structures were. These approaches were (1) establishing simpler relationships between program activities and performance goals and (2) fully integrating budget justifications and performance plans. Whether used alone or in combination, these approaches were more frequently associated with agencies that were able to show the performance consequences of budget decisions. A significant difference in allocating program activity funding to performance goals existed between agencies in which the performance plan was integrated with the agency’s budget justification and agencies in which the performance plan was not integrated. Agencies whose plans were fully integrated with the budget justification nearly always (five of six such plans) allocated program activity funding to performance goals (see figure 11). Conversely, about two-thirds, or 20 of the 29 agencies whose plans could be physically separated from the budget justification, did not allocate program activity funding to performance goals. 
For example, figure 12 shows how information traditionally contained in a budget justification, such as descriptions of accounts and their funding, was combined with performance information in the Internal Revenue Service’s (IRS) integrated budget justification and performance plan. We also found that plans showing a simpler relationship between program activities and performance goals were significantly more likely to show how funding was allocated to performance goals (see figure 13). That is, in all agencies in which the relationship between program activities and performance goals could be characterized as one-to-many or many-to-one, program activity funding was allocated to performance goals. For example, figure 14 shows that the allocation of funding to performance goals in the NRC plan was essentially automatic because each of the agency’s program activities generally align with a strategic goal and its supporting performance goals. However, where relationships were less precise—that is, when multiple program activities were related to multiple performance goals—allocations of program activity funding to performance goals were less common. As indicated by the agencies profiled in the figures above, approaches that were individually significant in allocating funding were often used in combination. In fact, 6 of the 14 plans that allocated program activity funding to performance goals used two or more of the approaches identified above. For example, of the three agencies that changed their program activity structures, two either presented simple relationships between program activities and performance goals or fully integrated their performance plans and budget justifications. The 21 plans in our review that did not allocate funding to discrete sets of performance goals were also generally characterized by common features. These plans (1) did not reflect any significant change in the agency’s account or program activity structures, (2) generally were separable from the justification of estimates, and (3) presented either no explicit relationship or a many-to-many relationship between performance goals and program activities. These features were a hallmark of plans that did not inform users of the performance consequences of budget decisions. Our review of selected fiscal year 1999 performance plans presents a mixed picture. Certainly, some agencies were able to develop informative approaches to connect budgetary resources to results. These approaches are addressing some of the challenges that have plagued performance budgeting efforts prior to the Results Act. They can also be seen as the first step toward achieving a key objective of the act—a clearer understanding of what is being achieved for what is being spent. Paralleling these changes is growth in the Congress’ interest in performance information in its resource allocation and other oversight processes. Nevertheless, our review, as well as the delay of the performance budgeting pilots required by the act, indicates continuing challenges for achieving a clearer relationship between budgetary resources and results. The fiscal year 1999 budget process marked an important beginning in more clearly showing the performance consequences of budget decisions. As indicated in the previous section, executive branch agencies developed a variety of approaches to link their performance plans and budget requests. 
But equally important, the Congress also showed an awareness of Results Act implementation efforts and a clear interest in obtaining credible performance information during its appropriations and oversight processes.

Executive agencies have demonstrated that linking complex budget and planning structures demands adaptive approaches. The scope of the federal government's missions, the variety of its organizational models, and the breadth of its processes—all subject to a multifaceted congressional oversight environment—suggest that many approaches will be developed to more clearly allocate requested funding levels to performance goals. Our review found that agencies reflecting the heterogeneity of the federal government—from direct service agencies (e.g., IRS) to agencies principally involved in grant or loan making (e.g., USAID) to regulatory agencies (e.g., NRC)—began to link budgetary resources and results. In the fiscal year 1999 performance plans, the agencies we reviewed developed several approaches to overcome a common problem of previous performance budgeting initiatives—planning structures and presentations that were unconnected to budget structures and presentations. These approaches include the following:

- Changing budget structures to more closely align with performance plans. Three agencies proposed new program activities within existing budget accounts to generally reflect the strategic goals of their performance plans. These proposals sometimes facilitated a relatively simple relationship between program activities and performance goals that helped make connections clear.

- Integrating performance information with budget justifications. In some cases, this took the form of fully integrating the performance plan with the agency's budget justification, as in the IRS, the Customs Service, and the Federal Bureau of Investigation. Where plans were not fully integrated with budget justifications, some agencies used the justification to provide more detailed information on goals contained in their performance plans. For example, USAID's fiscal year 1999 budget justification contained "strategic support objectives" and "special objectives." These objectives appear to further describe and support the performance goals expressed in USAID's separate performance plan.

- Crosswalking performance plans with budget structures. For example, ACF devised a crosswalk to identify the contribution of its over 60 program activities to its 10 strategic objectives (see figure 4 for an excerpt from this crosswalk). The crosswalk identifies funding for accounts or program activities and—using consolidation and aggregation—relates an account or program activity to a strategic objective and, consequently, to the objective's set of discrete performance goals. Account or program activity funding levels are summed to provide a proposed funding level for each strategic objective.

As executive agencies developed these approaches and presented their fiscal year 1999 budget submissions, the Congress also indicated an increasing interest in credible performance information to inform the resource allocation process. We reviewed fiscal year 1999 appropriations hearings and reports for the agencies in our review that allocated program activity funding to performance goals and observed that members of the Congress often made specific reference to the performance information contained in the agency's justification and/or performance plan. Some notable examples include the following.
- ACF officials were questioned as to whether a 4 percent increase in children exiting foster care through reunification justified the appropriation being sought for these activities.

- U.S. Customs Service officials were asked how the appropriations committee should evaluate performance and resource requirements for Customs' marine mission, given an apparent lack of measures for its marine enforcement program.

- NRC officials were asked what performance measures would be used to justify U.S. participation and funding in international nuclear safety programs and how requested budget increases were related to NRC's mission.

- Food and Nutrition Service officials presented data on the number of meals being served in the school lunch and breakfast programs and were asked how much additional budgetary resources would be needed to serve all eligible children.

- The Conference Report on HUD's Fiscal Year 1999 Appropriations directed the agency to revise its performance plan to incorporate measurable goals and outcomes for providing housing vouchers and certificates to assist families in transitioning from welfare to work.

- A House Subcommittee on Appropriations was unwilling to recommend funding for a request for community-based technology centers in part because specific performance measures for this new program were not presented.

- The Treasury and General Government Appropriations Act of 1999 stated that the Office of National Drug Control Policy could not obligate funds provided to continue its national media campaign until it submitted the evaluation and results of the campaign.

Deliberations on agencies' appropriations also indicate that making effective linkages between budget program activities and performance goals is one of many challenges that need to be addressed for performance information to be used in the budget process. Members of the Congress also questioned agencies about why goals were not more results-oriented and what steps were being taken to coordinate activities with other agencies. For example, Office of Personnel Management officials were asked if the agency's performance plan could identify measures to help determine the agency's progress toward a result of recruiting and retaining the federal workforce required for the 21st century. Members of the Congress were also concerned about agencies' use of program evaluation and other techniques to ensure the validity and reliability of performance data. ACF officials were asked about their approach for evaluating the academic success of Head Start preschool participants after those students leave the program. These concerns demonstrate that agencies also need to adopt a broader agenda for improving performance plans that includes focusing on results, defining clear strategies, and improving their capacity to gather and use performance data. Translating the use of agency resources into concrete and measurable results will be a continual challenge that will require both time and effort on the part of the agency. The uneven pace of progress across government is not surprising; agencies are in the early years of undertaking the changes that performance-based management entails. Although some agencies, as indicated in our review, began to show the performance consequences of budget decisions, improvements can be made in the clarity and completeness of linkages between program activities and performance goals.
Agencies must also balance the scope and precision of funding estimates for performance goals with the usefulness of such estimates for resource allocation decisions. In addition, we believe weaknesses in the performance measurement systems described in agencies’ fiscal year 1999 performance plans need to be addressed. Linkages between plans and budgets must be supported by results-oriented and credible performance data to be useful. Our assessment of agencies’ fiscal year 1999 performance plans found most goals were focused on outputs, not outcomes. This presents a dilemma for future performance budgeting efforts. While outcome information is clearly useful for measuring performance, it may be more difficult to allocate funding to outcomes that are far removed from the inputs that drive costs. Allocating funding to outcomes presumes that inputs, outputs, and outcomes can be clearly defined and definitively linked. For some agencies, these linkages are unclear or unknown. For example, agencies that work with state or local governments to achieve performance may have difficulty specifying how each of multiple agencies’ funding contributes to an outcome. In addition to understanding how actions affect outcomes, allocating funding to outcomes also requires an ability to understand how costs are related to outcomes. Agencies are noting the importance of cost accounting and other management systems for success in allocating funding to performance. For example, the Department of Housing and Urban Development’s (HUD) fiscal year 1999 performance plan acknowledged that the agency “has no mechanism for tracking resources as they are applied to performance measures.” The plan noted that HUD intends to develop a system that “will allow the Department to identify, justify, and match resource requirements for effective and efficient program administration and management.” Agencies are expected to develop such systems as they implement managerial cost accounting standards developed by the Federal Accounting Standards Advisory Board (FASAB). These standards require that agencies develop and implement cost accounting systems that can be used to relate the full costs of various programs and activities to performance outputs. Although these standards were originally to become effective for fiscal year 1997, the Chief Financial Officers (CFO) Council—an interagency council of the CFOs of major agencies— requested the effective date be delayed for 2 years due to shortfalls in cost accounting systems. As FASAB recommended, the effective date was extended by 1 year, to fiscal year 1998, with a clear expectation that there would be no further delays. However, developing the necessary approach to gather and analyze needed program and activity-level cost information will be a substantial undertaking. While there is a broad recognition of the importance of doing so, for the most part agencies have just begun this effort. As discussed earlier in this report, agencies’ difficulties in developing performance planning and measurement and cost accounting systems were cited by OMB in its 1997 decision to delay the performance budgeting pilots required by the Results Act. To further discussion of those pilots, OMB recently suggested possible formats and time frames for the pilots in a September 1998 paper sent to federal agencies. 
In that discussion paper, OMB noted that pilot projects would not be designated unless they could “fairly test the concept of performance budgeting,” which it described as “the application of multi-variate or optimization analysis to budgeting.” The paper described three analytical alternatives that could be tested involving performance tradeoffs (1) in the same program with changes in program funding, (2) in the same program with no change in total program funding, or (3) in several programs with shifts in intra-agency funding between these programs. At present, OMB has no definitive plans for proceeding with the performance budgeting pilots. OMB solicited agencies’ comments on the discussion paper and on their capability to produce the alternative budgets suggested in the committee report accompanying the Results Act. According to OMB, no agency has as yet volunteered to participate in the pilots. In its discussion paper, OMB stated that “the absence of designated pilots or having fewer designations than required would be an indication of agency readiness to do performance budgeting, and would be discussed in the OMB report to Congress.” These developments reflect some of the broader tensions involved in linking planning and budgeting structures. On one hand, performance plans need to be broad and wide-ranging if they are to articulate the missions and outcomes that agencies seek to influence. Often these plans will include goals that the agency can only influence indirectly because of responsibilities assigned to other actors, such as state and local governments. On the other hand, budget structures have evolved to help the Congress control and monitor agency activities and spending and, as such, are geared more to fostering accountability for inputs and outputs within the control of agencies. Performance budgeting poses the daunting task to agencies of discovering ways to address these competing values that are mutually reinforcing, not mutually exclusive. Strategies for bringing planning and budgeting structures together must balance both sets of values. For example, some agencies might let their planning structures be the starting point for making connections and seek to crosswalk broad overarching goals to their many program activities. This approach is consistent with a results orientation but can obscure the impact of specific funding decisions. Other agencies might decide to use the budget structure as their starting point and integrate performance information into formats familiar and useful for congressional oversight. Although this approach may be helpful in focusing on the performance consequences of budget decisions, such strategies risk confining performance information to structures that may be too limited to fully address the broad mission and outcomes of the agency. The fiscal year 1999 annual performance plans—the first called for under the Results Act—showed potential for providing valuable information that can be used to better program performance. However, the linkage between requested funding and performance goals is just one of many elements that need improvement for these plans to be useful for improving program performance. Top management within agencies must provide the consistent leadership necessary to direct the needed management changes and to ensure momentum is maintained. Ultimately, performance-based management should become an integral part of an agency’s culture. 
The transition process must include proven “change management” approaches to be successful and sustained. In addition, congressional use of results- oriented program performance and cost information in its decision-making about federal policies and programs will also spur agencies’ efforts to implement the statutory framework by sending the unmistakable message to the agencies that the Congress remains serious about performance- based management and accountability. As we have noted in a previous report, one of the Results Act’s most conspicuous and useful features is its reliance on experimentation. This is certainly true regarding performance budgeting. The act calls for one form of performance budgeting by requiring that performance goals from an agency’s annual performance plan cover the program activities of the agency’s budget request while giving agencies flexibility in how this linkage is made. This requirement, when coupled with OMB’s guidance that plans reflect the funding levels being applied to achieve performance goals, constitutes the first governmentwide expectation to directly associate budgetary resources with expected results. In addition, the act also requires pilot projects to test a specific form of performance budgeting that presents varying levels of funding for varying levels of performance. The committee report accompanying the act noted that “this pilot approach is best because, while performance budgeting promises to link program performance information with specific budget requests, it is unclear how best to present that information and what the results will be.” The fiscal year 1999 performance planning and budgeting cycle produced a useful experimentation in connecting planning and budgeting structures that accommodated unique federal missions and structures. Some, but not all of the agencies we reviewed, began to develop useful linkages. Moreover, challenges in performance planning and measurement and deficiencies in cost accounting systems continue to confront federal agencies. OMB has already cited these problems as the reasons why performance budgeting pilot projects were not being initiated. The progress that has been made and the challenges that persist underscore the importance of developing a specific agenda to ensure continued progress in better showing the performance consequences of budgetary decisions. The original goal for the act’s performance budgeting pilot projects was twofold: to allow OMB and agencies to develop experience and capabilities towards realizing the potential of performance budgeting, and to provide OMB with a basis for reporting to the Congress on next steps and needed changes. Although OMB stated in 1997 that agencies’ financial management systems were not capable of the specific form of performance budgeting called for in the act, our review demonstrates that some agencies were able to develop approaches to make perhaps a more basic, but still useful, connection between proposed spending and performance. These fiscal year 1999 efforts to link performance goals and program activity funding essentially constitute a first step toward achieving the intent of the performance budgeting pilots. They also provide a baseline from which OMB could assess future progress and determine what changes, if any, may be needed to the act and federal budget processes. 
In addition to its longstanding responsibilities regarding the formulation, review, and presentation of the President’s annual budget requests, OMB is the lead agency for overseeing a framework of recently enacted financial, information resources, and performance planning and measurement reforms designed to improve the effectiveness and responsiveness of federal agencies. As such, OMB is well-situated to assess (1) the practicality of performance budgeting pilots as currently defined in the law, (2) agency approaches and continuing challenges to linking budgetary resources and performance goals, and (3) options to encourage progress in subsequent planning and budgeting cycles. In light of the indefinite delay of the performance budgeting pilots required by the Results Act and the experiences of agencies during the fiscal year 1999 performance planning and budgeting cycle, we recommend that the Director of OMB assess the approaches agencies use to link performance goals and program activities in the fiscal year 2000 performance plans. OMB’s analysis, building on our review of fiscal year 1999 performance plans, should develop a better understanding of promising approaches and remaining challenges with respect to the concept of performance budgeting within the federal government. OMB’s analysis should address, for example: the extent of agencies’ progress in associating funding with specific or sets of performance goals, how linkages between budgetary resources and results can be made more useful to the Congress and to OMB, what types of pilot projects might be practical and beneficial, and when and how those pilot projects would take place. On the basis of this analysis, we recommend that OMB work with agencies and the Congress to develop a constructive and practical agenda to further clarify the relationship between budgetary resources and results, beginning with specific guidance for the preparation of agencies’ fiscal year 2001 plans. We further recommend that this analysis and the resulting agenda become the foundation for OMB’s report to the Congress in March 2001, as currently required by the Results Act, on the feasibility and advisability of including a performance budget as part of the President’s budget and on any other needed changes to the requirements of the act. On February 10, 1999, we met with the Deputy Director for Management and other OMB officials to discuss this report and our recommendations; on February 19, 1999, we provided a draft of this report to OMB for comment. At both our meeting and subsequently, OMB provided technical comments orally, which we have incorporated as appropriate. On March 26, 1999, OMB informed us they would have no written comments on this draft. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies to Senator Joseph Lieberman, the Ranking Minority Member of your committee; other appropriate congressional committees; and The Honorable Jacob Lew, Director, Office of Management and Budget. We will also make copies available to others on request. Major contributors to this report are listed in appendix III. Please contact me on (202) 512-9573 if you or your staff have any questions. Our September 1998 report on agencies’ first performance plans establishes an agenda for improving several elements of agency plans, including showing the performance consequences of budget decisions. 
Following the issuance of that report, the Chairman of the Senate Committee on Governmental Affairs asked us to examine in more detail how the plans linked expected performance with budget requests. To do this, our objectives were to (1) describe agencies' approaches to linking performance goals and budgetary resources, (2) examine characteristics that might be associated with different approaches to linkage, and (3) identify implications for future efforts to clarify the relationship between budgetary resources and results.

To address our objectives, we selected 35 fiscal year 1999 performance plans for review from departments and agencies covered by the CFO Act. We generally focused on bureau-level plans for each department but limited our review to the three largest bureaus with discretionary spending over $1 billion and/or the two largest bureaus. Table I.1 lists the agencies whose plans we reviewed.

To describe agencies' approaches to linking performance goals and budgetary resources, we identified 12 characteristics that could be used to describe agencies' planning and budgeting structures and the linkages between them. For each characteristic, we developed a classification framework for differentiating between plans based on that characteristic. These classification frameworks involved either straightforward counts of plan components (e.g., number of strategic goals) or judgments based on the content and structure of the plan. One staff member reviewed each plan and classified the plan on each characteristic. To ensure consistency in judgments, another staff member also independently reviewed the plans and the assessment on each characteristic. Differences in judgments were addressed by having staff members jointly reevaluate the coding of the characteristic to resolve the difference. We then compiled our assessments on plan characteristics into a database that was used to profile agencies' first-year approaches to linking budgetary resources with results. These characteristics generally fell into two groups: characteristics describing agencies' budget and planning structures (numbers 1 through 7 in table I.2), and characteristics describing agencies' approaches to linking these structures (numbers 8 through 12 in table I.2). Table I.2 presents additional detail on the characteristics used in this review.

A. Characteristics describing agencies' budget and planning structures

(1) Number of budget accounts: The number of budget accounts in the Appendix of the Budget of the United States Government, Fiscal Year 1999 from which agencies proposed to make obligations in fiscal year 1999.

(2) Spending concentrated in one account:
a) Single account—Agencies were classified as having a single account if they proposed to make 75 percent or more of their proposed fiscal year 1999 obligations from one budget account.
b) Multiple accounts—Agencies were classified as having multiple accounts if the amount of gross obligations proposed for each account was less than 75 percent of the agency's total proposed gross obligations.

(3) Number of program activities: The number of program activities shown in the Appendix of the Budget of the United States Government, Fiscal Year 1999 from which the agency proposed to make gross obligations in fiscal year 1999. When the Appendix does not list any program activities under a particular account, the account was deemed to have one program activity, reflecting the entire budget account.

(4) Common program activity structure: A common program activity structure means that most of the agency's accounts contain program activities with the same titles.
(5) Budget structure used in the performance plan:
a) Changed structure—Agencies in this category proposed to substantially change their account and/or program activity structures in the Appendix of the Budget of the United States Government, Fiscal Year 1999 from those used in the previous year's Appendix.
b) Used fiscal year 1998 structure—Agencies in this category generally used the same account and program activity structures as they did in the Appendix of the Budget of the United States Government, Fiscal Year 1998.

(6) Number of strategic goals: The number of goals that are related directly to the agency's mission without any intervening plan elements. According to the Results Act, agencies' strategic plans are to contain general goals and objectives that elaborate on the agency mission statement and provide a set of programmatic objectives for the major functions and operations of the agencies. We defined the first layer of goals under the agency's mission statement as "strategic goals" regardless of how they were labeled in the plan.

(7) Number of strategic objectives: The number of strategic objectives, that is, goals that the plan related directly to the agency's "strategic goals" as defined in (6) above. As in (6), we defined this layer of goals as "strategic objectives" regardless of how they were labeled in the plan.

B. Characteristics describing how agencies linked program activities with performance goals

(8) Integration of the performance plan with the budget justification:
a) Full Integration—An agency embedded its performance plans in its justification of estimates such that the justification could not logically or readily be detached from the performance plan.
b) Separable—The agency's plan was either (1) a separable component of the agency's justification of estimates (e.g., an appendix) such that justification readers could either turn to or skip over the performance plan, but the plan would appear in the justification's table of contents; or (2) an entirely separate document that may or may not have been transmitted at the same time as the justification. A user would need to read the plan, as opposed to the justification, in order to understand how the agency addressed Results Act requirements.

(9) Lowest planning structure to which program activities were linked: Agencies were placed in one of five categories depending on the lowest performance planning structure to which the plan linked program activities. (See figure I.1 for an illustration.)
a) Strategic goal—Plan related program activities to strategic goals as defined in (6) above.
b) Strategic objective—Plan related program activities to goals that the plan related directly to the agency's "strategic goals" as defined in (6) above. These goals may or may not have been labeled "strategic objectives" by the agency.
c) Performance goal—Plan related program activities to performance goals and/or measures. As defined in the Results Act, a performance goal means a target level of performance expressed as a tangible, measurable objective against which actual achievement can be compared, including a goal expressed as a quantitative standard, value, or rate. For plans in this category, a program activity was associated with each individual performance goal/measure.
d) Other—Plan related program activities to a structure other than its strategic goals, strategic objectives, or performance goals. In many cases, this structure was a business line or some other unit for which a single goal statement was not expressed.
e) None—Agencies were placed in this category if the performance plan did not relate program activities to any of the above structures (strategic goal, strategic objective, performance goal, or other) in their performance plans.
(10) Lowest planning structure for which dollar amounts were presented: Agencies were placed in one of six categories depending on the lowest performance planning structure for which the plan presented dollar amounts. (See figure I.2 for an illustration.)
a) Strategic goal—Plan presented dollar amounts for each strategic goal.
b) Strategic objective—Plan presented dollar amounts for each strategic objective.
c) Set of discrete performance goals—Plan presented dollar amounts for any set of performance goals presented in some combination other than as described in (a), (b), (d), or (e).
d) Performance goal and/or measure—Plan presented dollar amounts for each performance goal and/or measure.
e) Other—Plan presented dollar amounts for a unit of analysis other than strategic goals, strategic objectives, or performance goals.
f) None—Plan did not present dollar amounts for any performance planning or other type of structure.

(11) Relationship of program activities to performance goals:
a) One program activity to one goal—Agencies were placed in this category if a program activity was linked to one performance goal.
b) One program activity to many goals—Agencies were placed in this category if a program activity was linked to more than one performance goal.
c) Many program activities to one goal—Agencies were placed in this category if more than one program activity was linked to one performance goal.
d) Many program activities to many goals—Agencies were placed in this category if more than one program activity was linked to more than one performance goal.
e) Could not be determined—Agencies were placed in this category if the plan did not convey how program activities were related to performance goals.

(12) Funding allocated to a discrete set of goals or measures: Plans that allocated funding to a discrete set of goals (a) generally showed how program activities and their requested funding were allocated among performance goals/measures or sets of performance goals/measures (plans met this criterion even if only discretionary funding was allocated) and (b) used sets of goals/measures that were unique (i.e., a single performance goal/measure is related to only one strategic objective or strategic goal).

Notes to table I.2: While we quantified strategic objectives as defined, the plan may have contained other goal layers between strategic objectives and annual performance goals. We reviewed linkages between the program activities and performance goals presented in the plan. This characteristic did not assess whether all program activities were listed and covered in the performance plan. If accounts were linked to the planning structure, the underlying program activities were also presumed to be linked to this structure. When a plan contained performance goals distinct from performance measures, reviewers considered whether this assessment would change if the word "measure(s)" was substituted for the word "goal(s)." If so, measure(s) were used as the unit of analysis instead of goal(s). If agencies linked program activities to a structure other than performance goals (i.e., an intervening structure such as a strategic goal or objective), the plan reviewer determined this relationship by examining how program activities were related to the intervening structure and how the intervening structure was related to performance goals. The set of goals ranged in size and scope. For example, some of these plans presented allocations of funding to strategic goals or objectives, which represent discrete sets of performance goals.
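One way to picture the database described above is as one record per plan, with one field per characteristic in table I.2. The sketch below, in Python, is offered only as a hypothetical representation: the field names paraphrase the characteristics, and the sample values describe an invented plan rather than any agency we reviewed.

    # Hypothetical record structure for coding one plan on the 12 characteristics.
    from dataclasses import dataclass

    @dataclass
    class PlanCoding:
        accounts: int                     # characteristic 1
        spending_concentrated: bool       # characteristic 2 (75 percent or more in one account)
        program_activities: int           # characteristic 3
        common_activity_structure: bool   # characteristic 4
        changed_budget_structure: bool    # characteristic 5
        strategic_goals: int              # characteristic 6
        strategic_objectives: int         # characteristic 7
        fully_integrated: bool            # characteristic 8
        linkage_level: str                # characteristic 9 (e.g., "strategic objective")
        funding_level: str                # characteristic 10 (e.g., "performance goal")
        activity_goal_relationship: str   # characteristic 11 (e.g., "many-to-many")
        allocates_funding_to_goals: bool  # characteristic 12

    example = PlanCoding(9, False, 32, False, False, 5, 9, True,
                         "performance goal", "performance goal", "one-to-many", True)
    print(example)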
Figure I.1 illustrates characteristic 9 with examples 9(a) through 9(c); figure I.2 illustrates characteristic 10 with examples 10(a) through 10(d).

To determine which characteristics of agency planning and budgeting structures were associated with linkages that showed an allocation of budgetary resources to results, we prepared contingency tables depicting the relationship between characteristic 12 in table I.2, "Funding allocated to a discrete set of goals or measures," and each of the other characteristics. To assess whether the relationships in the tables were statistically significant, we used two statistical techniques. When one characteristic in the table contained numeric values (e.g., characteristic 6, which measured the number of strategic goals), we used logistic regression techniques. The logistic regression technique involved regressing the odds of funding being allocated to a discrete set of performance goals on each characteristic and determining—using likelihood ratio chi square tests—whether the characteristic was associated with significant differences in those odds. When a characteristic had nonnumeric values (e.g., characteristic 11, which had five discrete categories), we used standard contingency tables to analyze the data. The contingency table techniques involved calculating the percentages of agencies that allocated funding to a discrete set of goals across the categories of the other characteristic and computing likelihood-ratio chi square statistics to determine whether differences in those percentages were statistically significant. In addition to computing the likelihood ratio chi square statistic, we also computed Fisher's exact test to assess whether a characteristic was significantly related to characteristic 12. Fisher's exact test was used to confirm the likelihood ratio chi square results because of the small number of observations in many of our tables. In most of our analyses, the likelihood ratio chi square and Fisher's exact test yielded similar conclusions. Where they did not, the differences appeared negligible given our sample size. When a statistically significant association was identified in a table where one or both characteristics being analyzed had more than two categories, we conducted a series of additional chi square tests before and after grouping various categories of those characteristics to discern whether our description of the relationship of that characteristic with characteristic 12 could be simplified. Appendix II presents a summary of our analysis.

The intent of our statistical analyses was to quantitatively identify and explore associations of various plan characteristics with plans that did or did not allocate funding to a discrete set of goals and/or measures. However, the following qualifications apply to our analysis. Although the results of our analyses apply to the plans we reviewed, our plan selection procedures preclude generalizing the results to agency plans not included in our population. For those characteristics we identified as having a significant association with characteristic 12, it is possible that this result occurred because of the close association of that characteristic with one or more of the other characteristics that were related to characteristic 12. The small number of plans reviewed precluded our use of statistical approaches that would enable us to assess the relationship of two or more characteristics simultaneously on characteristic 12.
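As a concrete illustration of the two tests named above, the short Python sketch below applies them to a 2-by-2 table built from the integration counts reported earlier (5 of the 6 fully integrated plans, and 9 of the 29 separable plans, allocated program activity funding to performance goals). It assumes the scipy library is available and is intended only to show the technique, not to reproduce the analysis summarized in appendix II.

    from scipy.stats import chi2_contingency, fisher_exact

    # Rows: fully integrated plans, separable plans.
    # Columns: allocated funding to goals, did not allocate.
    table = [[5, 1],
             [9, 20]]

    # lambda_="log-likelihood" yields the likelihood-ratio (G) chi square statistic.
    g_stat, g_p, dof, _ = chi2_contingency(table, correction=False, lambda_="log-likelihood")
    odds_ratio, fisher_p = fisher_exact(table)

    print(f"Likelihood-ratio chi square = {g_stat:.2f} (df = {dof}), p = {g_p:.3f}")
    print(f"Fisher's exact test p = {fisher_p:.3f}")

With so few observations in several cells, the exact test serves as a check on the chi square approximation, which is why both tests were computed.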
To provide some rudimentary insight into the extent to which characteristics significantly related to characteristic 12 were related to each other, we examined whether there were significant associations among those characteristics using the same procedures described above. Significant associations were found between (1) characteristics 5 and 8, (2) characteristics 8 and 10, (3) characteristics 8 and 11, and (4) characteristics 10 and 11.

Aspects of agencies' linkages not specifically mentioned in table I.2 were not assessed. For example, we did not assess whether all agency program activities were listed and covered in the performance plan. This assessment was made in our September 1998 report. Our analysis focused on linkages between performance goals and program activities in performance plans. We did not assess other elements of the performance plan, such as the quality of any goals presented in the plan. We also did not independently verify the funding amounts that agencies allocated to performance goals. We did not systematically assess other documents, such as agency budget justifications. Our review focused on the 12 characteristics mentioned above. However, there may be other characteristics that might be associated with agencies' linkages. For example, although our previous work has noted that account and program activity orientation may be important factors in making linkages, the subjective nature of this characteristic prevented its inclusion in our analysis.

Finally, to identify implications for future performance budgeting efforts, we gathered information on congressional perspectives. In addition to discussing the plans' linkages between budgetary resources and results with selected appropriations staff, we also reviewed the House Committee on Appropriations' hearing records on agencies' fiscal year 1999 appropriations, giving special attention to how lawmakers reacted to the performance information presented in performance plans and budget justifications. We discussed the status of performance budgeting pilots with OMB. We requested comments on a draft of this report from the Director of OMB or his designee and incorporated OMB's comments as appropriate. We conducted our work in accordance with generally accepted government auditing standards from August 1998 to February 1999.

(Appendix II summarizes, for each plan characteristic, such as spending concentrated in one account, budget structure used in the performance plan, full integration with the budget justification, and the relationship of program activities to performance goals, whether the characteristic was significantly associated with allocating program activity funding to performance goals. A note to that table states that, although the probabilities for one characteristic were 0.05 or less, our analysis revealed the source of statistical significance was the difference between the "none" category and all of the other categories shown for this characteristic combined.)

Major contributors to this report (appendix III): Thomas M. Beall, Senior Social Science Analyst, and Douglas M. Sloane, Supervisory Social Science Analyst.
Pursuant to a congressional request, GAO reviewed selected fiscal year (FY) 1999 federal agency performance plans, focusing on: (1) agencies' approaches to linking performance goals and budgetary resources; (2) characteristics that might be associated with different approaches to linking performance goals and budgetary resources; and (3) implications for future efforts to clarify the relationship between budgetary resources and results. GAO noted that: (1) in their first Government Performance and Results Act of 1993 performance plans, agencies experimented with a variety of approaches to connect budget requests with anticipated results; (2) although most agencies reviewed (30 of 35) defined some type of relationship between the program activities of their proposed budgets and the performance goals of their plans, far fewer (14 or 40 percent of the plans reviewed) translated these relationships into budgetary terms--that is, most plans did not explain how funding would be allocated to achieve performance goals; (3) such allocations are a critical first step in defining the performance consequences of budgetary decisions; (4) GAO found that agencies with budget and planning structures of widely varying complexity made these allocations, but some common approaches were used; (5) agencies were significantly more likely to have allocated funding to program activities if they: (a) showed simple, clear relationships between program activities and performance goals (as illustrated by eight agencies in GAO's review); (b) fully integrated performance plans into congressional budget justifications (as illustrated by five agencies); or (c) had changed their program activity structures to reflect their goal structures (as illustrated by three agencies); (6) agencies' first-year experiences show progress in bringing planning and budgeting structures and presentations closer together, but much remains to be done if performance information is to be more useful for budget decisionmaking; (7) continued efforts are needed to: (a) clarify and strengthen links between planning and budgeting structures and presentations; and (b) address persistent challenges in performance planning and measurement and cost accounting; and (8) the progress that has been made, the challenges that persist--including the indefinite delay in the performance budgeting pilots called for by the act--and Congress' interest in having credible, results-oriented information underscore the importance of developing an agenda to ensure continued improvement in showing the performance consequences of budgetary decisions.
In July 1987, the Congress responded to the problems of homelessness by enacting several laws addressing different aspects of the problem. The most comprehensive of these was the Stewart B. McKinney Homeless Assistance Act (P.L. 100-77). Combined, the more than 20 McKinney Act grant programs funded activities that provided homeless men, women, and children with supportive services such as emergency food and shelter, surplus goods and property, transitional housing, primary health care services, and mental health care. The McKinney Act grant programs and authorities that remain in effect are administered by five different departments—Education, HHS, HUD, Labor, and VA—and one agency, the Federal Emergency Management Agency. Since fiscal year 1987, federal funding for targeted homeless assistance has increased dramatically, from $490 million to more than $1.2 billion in fiscal year 1997. Veterans constitute about one-third of the homeless adult population in the United States on any given day. They form a heterogeneous group and are likely to have multiple needs. For example, VA estimates that approximately one-half of homeless veterans have a substance abuse problem, approximately one-third have a serious mental illness (of those, about half also have a substance abuse problem), and many have other medical problems. Some homeless veterans need assistance in obtaining benefits, managing their finances, resolving legal matters, developing work skills, or obtaining employment. Many require some form of transitional housing before a more permanent housing arrangement can be achieved. For some homeless veterans, independent housing and economic self-support are reasonable goals. But for others, including many seriously mentally ill homeless persons, neither full-time work nor independent housing may be feasible. Instead, for these individuals, relative stability in a supportive environment such as a group home may be the most reasonable outcome. Thus, efforts to assist the homeless require a range of housing options (including emergency shelter as well as transitional and permanent housing); treatment for medical, mental health, and substance abuse problems; and supportive services such as transportation and case management. This spectrum of options is referred to as the continuum of care. Homeless veterans are eligible for health care through VA by virtue of their status as veterans, but in addition, VA has established programs specifically for homeless veterans. Two major VA homeless programs, Health Care for Homeless Veterans (HCHV) and Domiciliary Care for Homeless Veterans (DCHV), were created as a result of legislative actions taken during 1987 to address the needs of homeless veterans. The goal of these programs is to conduct outreach to identify homeless veterans, assess their needs, and link them with VA or community-based programs for services, as appropriate. The HCHV and DCHV programs are both managed by the Veterans Health Administration (VHA) but under the auspices of different health care groups within VHA. HCHV programs are under the jurisdiction of VHA’s Strategic Health Care Group for Mental Health Services; the DCHV program is directed by VHA’s Geriatrics and Extended Care Strategic Health Care Group. VA’s annual obligations for its targeted homeless programs increased from $10 million in fiscal year 1987 to approximately $84 million in fiscal year 1997. During this period, VA has obligated over $640 million for its targeted homeless programs.
Since the inception of VA’s homeless programs, VA has served over 250,000 veterans. VA’s NEPEC monitors and evaluates VA’s homeless programs using data it collects and analyzes from program sites. NEPEC generally issues annual reports for VA’s homeless programs that include some outcome measures such as whether a veteran is housed or employed upon leaving a program. With the reorganization of VHA into networks in 1995, headquarters oversight has been decentralized, and control of oversight and funding of the homeless programs has shifted to the local level. Specifically, VA organized its health care system to give greater authority and control to 22 Veterans Integrated Service Networks (VISN) and medical center managers. Headquarters program officials have now assumed a largely consultative role. Currently, all 22 VISNs participate in a Council of Network Homeless Coordinators to advise VA headquarters and VISN directors on issues related to the delivery and evaluation of homeless services to veterans. In its fiscal year 2000 budget request, VA revised its strategic planning and performance measurement processes under the Government Performance and Results Act of 1993 by adding performance measures related to outcomes for veterans served by its homeless programs. These outcome measures, which are already monitored by NEPEC, address the percentage of veterans who have independent living arrangements and employment upon their discharge from VA or from community-based contract residential care programs. Beyond these outcome measures, VA has three process goals: to increase (1) the number of community-based beds for homeless veterans, (2) VA facilities’ efforts to coordinate with other providers of homeless services, and (3) the number of homeless veterans treated in VA’s health care system. VA provides services to homeless veterans through targeted homeless programs across the United States. VA also provides medical, mental health, substance abuse, and social services to homeless veterans through its mainstream health care programs. VA’s homeless efforts include services such as outreach activities to identify homeless veterans, residential treatment programs to address clinical disorders, and job counseling and placement assistance to veterans seeking work. However, realizing that it does not have the resources to address all the needs of homelessness alone, VA is working more closely with community-based providers and other organizations to create a continuum of care to improve services for homeless veterans. Since establishing its first homeless programs in 1987, VA has expanded its efforts to provide an array of services to homeless veterans. VA initially funded 43 HCHV program sites to contract with community-based providers for residential treatment and rehabilitation of mentally ill (including substance abusing) veterans. VA currently operates 73 HCHV program sites, 62 of which offer residential treatment for the homeless chronically mentally ill (HCMI), generally for less than 6 months. HCHV staff conduct outreach at community-based homeless service providers such as shelters, soup kitchens, and other places frequented by the homeless. HCHV staff also serve as case managers for homeless veterans. 
Case management services are provided to maintain continuity of care and assist veterans in obtaining needed services by referring them to VA and non-VA sources that can address their needs for medical and psychiatric treatment, social and work rehabilitation, income support, housing, and other services. In addition, HCHV staff are responsible for monitoring the services provided each veteran participating in the residential treatment component of the program. During fiscal year 1997, the 73 HCHV program sites served 35,059 homeless veterans. The HCMI program is the core homeless program under the HCHV umbrella. The service delivery arrangements and treatment received by veterans participating in the HCMI program vary across sites. VA headquarters allows each site some flexibility in operating its program. Arrangements for using community-based residential treatment facilities for care and rehabilitation vary, in part, as a function of the availability of VA and community resources. Accordingly, the HCMI per diem rates paid by VA vary across community-based providers, depending on the type of services and the geographic location. In fiscal year 1997, veterans received treatment for an average of 73 days; the HCMI per diems ranged from approximately $15 to over $70, and the average daily rate was $38.58. Unlike the HCMI program, which was designed to rely on community-based residential treatment facilities, the DCHV program is primarily housed on the grounds of VA medical centers. Most DCHV program sites are located in existing VA domiciliaries. DCHV is a hospital-based program that uses interdisciplinary treatment to provide services to homeless veterans with varying medical, substance abuse, and mental health problems. The number of DCHV sites has increased from 20 to 35 in the 12 years since the program’s inception in 1987. The DCHV program focuses on rehabilitation. Basic services provided by the DCHV program include (1) outreach at some sites to identify underserved homeless veterans, (2) time-limited residential treatment that offers medical and psychiatric services, and job counseling and placement services, and (3) postdischarge community support and aftercare. In fiscal year 1997, the DCHV program discharged 4,619 homeless veterans from treatment in its 1,587 beds nationwide. Veterans received treatment for an average of about 116 days at a cost to VA of approximately $70 per day. The locations of the HCMI and DCHV sites are shown in figure 1. (See app. II for a summary of HCHV and DCHV program locations.) Over time, VA has developed new programs and approaches to complement the HCMI and DCHV programs and provide services that are more integrated, longer term, and more intensive (see table 1). For example, homeless veterans participating in VA’s Supported Housing program are provided on-going case management services by HCHV staff for an extended period. Moreover, these efforts involve partnerships with other federal agencies to assist homeless veterans in obtaining housing and other benefits. (See app. III for more information about these homeless assistance and treatment programs and the other approaches VA uses to assist homeless veterans.) While VA has expanded its homeless programs and community partnerships, it continues to be a provider of medical, mental health, and substance abuse services to homeless veterans through its general health care programs. 
Although VA does not know the extent to which its annual health care appropriations are spent on medical care and other treatment services for homeless veterans, recent estimates suggest that the amount spent on these health care services far exceeds the approximately $84 million VA used for its targeted homeless programs. NEPEC estimated that in fiscal year 1995 VA spent $404 million on inpatient general psychiatry and substance abuse services for homeless veterans, representing approximately 26 percent of all inpatient VA mental health expenditures. Cost estimates are unavailable for other health care expenditures, but NEPEC estimated that homeless veterans occupied 5 percent of the inpatient medical and surgical beds during fiscal year 1996. Moreover, these estimates do not account for primary care and other outpatient medical services rendered to homeless veterans at VA’s 173 hospitals and over 400 outpatient clinics nationwide. Although VA has developed a number of programs to assist homeless veterans, VA acknowledges that it alone cannot meet all their needs. These programs are not available in all locations and, where available, capacity for residential treatment is limited. VA’s homeless programs are available at selected locations. HCMI and DCHV homeless program sites were established on a voluntary basis; interested medical centers submitted proposals and those ranked highest by VA headquarters were initially funded. VA’s homeless programs vary dramatically in terms of the number of sites available to treat homeless veterans. For example, VISN 3 is the only network to have at least one site for each of VA’s homeless programs. As shown in table 2, the number of sites provided by each of VA’s programs ranges from 4 to 62. In those locations that have an HCMI or DCHV program, residential capacity is limited. For example, the HCHV site in Washington, D.C.—a city with a homeless veteran population ranging from an estimated 3,300 to 6,700—served 963 homeless veterans during fiscal year 1997, of whom 31 were treated in the HCMI residential component. Of the 30,857 homeless veterans contacted nationwide at the 62 HCHV sites with an HCMI residential treatment program, only 4,317 were placed in VA contracted residential treatment during fiscal year 1997—an average of 70 homeless veterans per site. Similarly, the DCHV program has limited inpatient capacity. For example, VISN 14, which covers parts of five states, including most of Iowa and Nebraska, has one homeless program: a homeless domiciliary at the Des Moines VA hospital with 20 beds that served 56 veterans during fiscal year 1997. In another instance, VISN 11, which includes urban cities such as Detroit, Mich., and Indianapolis, Ind., has no DCHV beds. In sum, the 35 DCHV program sites operated 1,587 beds and discharged 4,619 veterans from treatment in fiscal year 1997. On average, each DCHV site provided residential care to approximately 132 homeless veterans. Over the past 5 years, VA has expanded its commitment to partnering with community-based organizations. This commitment to community-based providers is reflected in VA’s long-range strategic planning. One such goal under the Results Act is to maximize participation in Community Homelessness Assessment, Local Education and Networking Groups (CHALENG) by increasing VA medical facility participation to 100 percent by fiscal year 2001. In response to the requirement to encourage coordination in Veterans’ Medical Programs Amendments of 1992 (P.L. 
102-405), VA homeless staff began holding annual CHALENG meetings to better coordinate with other homeless providers and organizations. For example, in 1997, nearly 2,000 service providers attended CHALENG meetings nationwide and completed surveys about the extent to which specific needs were being met. Once local needs are prioritized, VA collaborates with community providers to resolve any community resource problems. This collaborative effort provides a forum for VA to work with its non-VA partners to assess, plan for, and address the needs of homeless veterans. Since the inception of the CHALENG initiative in fiscal year 1994, most medical centers have participated in the process. In fiscal year 1998, VA reported that 88 percent of its medical facilities conducted their annual CHALENG meetings. Also, the Congress authorized VA to establish alternative housing programs for homeless veterans through partnerships with nonprofit or local government agencies. As a result, VA created the Homeless Providers Grant and per Diem (GPD) program to award grants and per diem payments to public and nonprofit organizations that establish and operate new supportive housing and services for homeless veterans. Between fiscal years 1994 and 1998, 127 grants were awarded to 103 nonprofit and state or local government agencies, providing in excess of $26 million. Grant moneys have been awarded to recipients in 39 states and the District of Columbia; all 22 VISNs have at least one GPD recipient in their jurisdiction. Once grants awarded during the first 5 years become fully operational, VA estimates that over 2,700 new community-based transitional housing beds will be available for homeless veterans. Finally, in the Veterans Programs Enhancement Act of 1998, VA received authority to make $100 million in guaranteed loans over a 3-year period to qualified organizations. Most loans will be awarded to construct, rehabilitate, or acquire land for the purpose of providing multifamily transitional housing projects for homeless veterans. Although NEPEC collects extensive data, VA has little information about the effectiveness of its homeless programs. Homeless program sites submit primarily descriptive data about veterans and program characteristics. In addition, some outcome data are collected on program participants at discharge. (Outcome data are measures of a veteran’s status upon discharge from a homeless program, including housing, employment, and changes in substance abuse and mental health.) These data are of limited use in assessing program effectiveness, however, because no follow-up information is obtained after a veteran is discharged from a residential or DCHV treatment program. As a result, VA does not know whether veterans served by its homeless programs remain employed or stably housed. NEPEC collects and analyzes extensive descriptive information regarding program structure, veteran characteristics, program processes, and status at discharge for specific sites. Program managers use this information to monitor and compare program sites. For all measures except those involving status at discharge, the HCHV and DCHV programs use the average performance for all of their respective sites as the norm for evaluating each site. To account for homeless veterans who are particularly difficult to treat, data regarding status at discharge are adjusted for patient characteristics that influence treatment results, such as age or number of medical problems. 
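The report does not spell out how NEPEC performs this case-mix adjustment, so the following is only a minimal sketch of one common approach (indirect standardization: comparing a site's observed rate of successful discharges with the rate that would be expected given its mix of harder-to-treat veterans). The strata, rates, and counts in the Python example below are hypothetical and are not NEPEC's method or figures.

    # Illustrative sketch only: hypothetical data, not NEPEC's procedure.
    # Indirect standardization: a site's observed discharge-success rate is
    # compared with the rate expected given its mix of harder-to-treat veterans.

    # Program-wide success rates at discharge, by patient stratum (hypothetical).
    programwide_rate = {"age<45, few problems": 0.60,
                        "age<45, many problems": 0.45,
                        "age>=45, few problems": 0.50,
                        "age>=45, many problems": 0.35}

    # One site's caseload and successful discharges, by stratum (hypothetical).
    site_counts    = {"age<45, few problems": 40, "age<45, many problems": 25,
                      "age>=45, few problems": 20, "age>=45, many problems": 15}
    site_successes = {"age<45, few problems": 22, "age<45, many problems": 10,
                      "age>=45, few problems":  9, "age>=45, many problems":  5}

    n        = sum(site_counts.values())
    observed = sum(site_successes.values()) / n
    expected = sum(site_counts[s] * programwide_rate[s] for s in site_counts) / n

    # A ratio above 1.0 suggests the site does better than expected for its
    # case mix; a ratio below 1.0 suggests it does worse than expected.
    print(f"observed={observed:.2f}, expected={expected:.2f}, "
          f"observed/expected={observed / expected:.2f}")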
Our analyses focused on the DCHV and HCHV programs because they are the two main components of VA’s homeless programs. NEPEC monitors the 62 HCMI sites that contract with community-based programs to provide residential treatment to homeless veterans. NEPEC collects data obtained upon initial contact with homeless veterans and at the conclusion of a veteran’s participation in the HCMI program. From these data, 32 indicators have been selected as “critical monitors” of site performance. These measures reflect four different categories of information about sites: (1) program structure (for example, the average number of days veterans spend in residential treatment and the average number of unique veterans served by each clinical staff member); (2) patient characteristics (for example, the percentage of veterans served who were not literally homeless at the time of intake and the percentage of veterans served who were diagnosed with a serious mental illness or substance abuse disorder); (3) program process measures which indicate how the program operates (for example, percentage of veterans served who were contacted by outreach and the percentage of veterans inappropriately placed in residential treatment); and (4) status at discharge (for example, percentages of veterans who report being housed and employed at discharge). Appendix IV contains a complete list of the 32 HCHV critical monitors. In fiscal year 1997, 35,059 veterans were served through HCHV programs. Of the 3,883 veterans discharged from residential treatment facilities in fiscal year 1997, 52 percent were considered to have successfully completed the program (that is, the veteran and clinician agreed that program goals had been met); 39 percent reported having their own apartment, room, or house at discharge; 43 percent reported having full- or part-time employment at discharge; 73 percent were rated as showing improvement in drug problems; and 74 percent were rated as showing improvement in mental health problems. Under most circumstances, NEPEC data regarding status at discharge are obtained from veterans who have completed residential treatment. In some cases, however, HCMI pays for only part of a veteran’s residential treatment program, and the veteran remains in treatment after discharge from the HCMI program. In these instances, the veteran’s status upon completion of residential treatment (which may occur some time later) is not captured in the NEPEC data. NEPEC also monitors the performance of the 35 DCHV sites using data gathered when veterans are admitted to the program and their status at the time of discharge. These measures reflect four different categories of information about the DCHV sites: (1) program structure (assessed solely by the annual turnover rate); (2) veteran characteristics (for example, the percentage of veterans who entered the program from the community and the percentage of veterans who were living outdoors or in a shelter prior to admission); (3) program participation (for example, the average length of stay and the percentage of veterans who completed the program); and (4) status at discharge (for example, percentages of veterans who are housed and employed at discharge). The 20 DCHV critical monitors are contained in appendix V. In fiscal year 1997, the DCHV program discharged 4,619 veterans after an average length of stay of about 116 days. 
NEPEC reported that 62 percent successfully completed the program, 57 percent were housed at discharge, 52 percent had full- or part-time employment at discharge, 79 percent were rated as improved in alcohol problems, 79 percent were rated as improved in drug problems, and 75 percent were rated as improved in mental health problems. Because information is not obtained after veterans leave treatment, VA cannot determine whether its homeless programs are effective over the long term. Moreover, NEPEC has only limited information about what aspects of its programs are most beneficial for certain veterans. Finally, NEPEC has little information about whether its programs are more beneficial than other strategies for helping the homeless. Evaluation research (including follow-up) is difficult and expensive to conduct on this hard-to-serve population. However, VA’s fiscal year 2000 budget request contains an additional $50 million to expand VA’s homeless programs and monitoring and evaluation efforts. VA has acknowledged the need for program evaluation and now includes a plan for program evaluation in its strategic plan. However, NEPEC officials told us that their primary emphasis is to monitor the performance of program sites, rather than to evaluate the effectiveness of treatments or programs. These monitoring activities provide information about program operations. As a result, NEPEC does not typically examine outcomes in a way that clarifies what aspects of treatment are associated with positive results for different clinical groups (for example, those with serious mental illnesses or those with a substance abuse disorder). NEPEC officials periodically supplement their data files with additional information (for example, about treatment approaches) and then conduct analyses that distinguish clinical subgroups. These findings are often published in academic journals. For example, one study looked at outcomes for dually diagnosed veterans (that is, those with both a serious psychiatric disorder and a substance abuse problem), comparing those in programs that specialize in substance abuse treatment with those treated in integrated programs that simultaneously address both psychiatric and substance abuse problems. Although differences between the two types of programs were modest, results suggested that those in integrated treatment programs were more likely than those in the substance abuse programs to be discharged to housing in the community rather than to an institutional setting. Currently, NEPEC does not conduct follow-up of veterans who have left the DCHV or HCMI programs. Follow-up is needed to determine whether veterans are still employed, housed, or successfully dealing with substance abuse or mental health problems after program completion and thereby to estimate the duration of any positive effects. Other research efforts involving the homeless that have included follow-up data suggest that positive outcomes observed at discharge are not necessarily sustained. Between 1987 and 1990, in order to evaluate the benefits associated with program participation, NEPEC conducted pilot follow-up projects at nine HCMI and three DCHV sites. NEPEC reported that veterans were substantially better off 3 months after discharge from DCHV treatment than when they were admitted to the program. Improvements were noted in housing, income, employment, substance abuse, and psychiatric functioning. 
Similarly, veterans who participated in the HCMI study exhibited improvements in housing, employment, psychiatric problems, and substance abuse at follow-up (assessed from 1 month to 2 years after intake, with an average of 8.3 months) compared with intake. For example, 73 percent of the veterans reported that they had spent no days homeless during the 90 days prior to their interview. The HCMI study stated that veterans derived substantial benefit from their participation in this program. While these follow-up studies were a major undertaking, NEPEC reports on these studies cite two major shortcomings. First, interview data were not collected from a fully representative sample. Of veterans who agreed to participate in these studies, follow-up interviews were conducted with 67 percent in the DCHV study and 72 percent in the HCMI study. Although the status of those veterans who were not reinterviewed is not known, it cannot be ruled out that the veterans who were doing the poorest were also the least likely to be reinterviewed. As a result, the data from those who were reinterviewed could suggest more positive outcomes than is true for the program as a whole. Second, no control or comparison groups were studied. Data from such groups would allow an estimate of the degree of improvement attributable to the DCHV or HCMI programs. In other words, it is possible that some of the improvements noted among those veterans who were reinterviewed would have occurred in the absence of DCHV or HCMI treatment. Research suggests that some improvement over time is likely among the homeless even in the absence of intensive treatment. Without data from an appropriate comparison group of veterans who were not served through VA’s homeless programs, VA cannot determine how much additional benefit the veterans derived from those programs. NEPEC officials stated that they have not conducted additional follow-up studies on the HCMI and DCHV programs because such information is difficult and expensive to obtain on this hard-to-serve population. A NEPEC official estimated that if they were to conduct another follow-up study for the HCMI program, the cost would be about $60,000 per site with an approximate annual total cost of $600,000. Approaches to homelessness vary with the needs (for example, medical, mental health, substance abuse, or other problems) of the subgroup being served. Although many questions about how to help the homeless remain unanswered, a series of research initiatives launched in 1982 and funded primarily by HHS has begun to shed light on the issues; and initial findings from a few projects are promising. These efforts suggest that effective interventions for the homeless involve comprehensive, integrated treatments. These initiatives also suggest that a range of housing, treatment, and supportive-service options need to be included within a continuum of care for the homeless. As early as 1982, but particularly in response to the McKinney Act in 1987, HHS funded several major research initiatives to learn more about homelessness in general and about treatments for the mentally ill or substance abusing homeless in particular. These efforts involved epidemiological studies to identify the homeless and their needs, demonstration projects to explore promising strategies for helping the homeless, and outcome evaluations to assess the effectiveness of selected programs.
Cross-site analyses addressed overarching questions; and procedures for sharing information, such as conferences and an information clearinghouse, were established. Many questions remain unanswered, but several broad themes have emerged from these efforts. In addition, these research programs indicate that although it can be difficult to study homeless populations, such research can be done and can include follow-ups. This body of research indicates that effective treatment for the homeless requires comprehensive, integrated services. Although meeting the most basic needs of a homeless person for food, clothing, and shelter is a first step, it is rarely sufficient to enable a person to exit homelessness. Instead, progress in achieving housing stability requires comprehensive attention to the full range of a homeless person’s needs, addressing basic needs (such as shelter, food, and clothing), medical and mental health needs (including dental and eye care), and supportive services (such as transportation, assistance in obtaining benefits, and child care if necessary). Thus, as examples, untreated mental illness may interfere with a person’s ability to retain housing, and lack of transportation may limit access to medical appointments or job interviews. Moreover, research suggests that positive outcomes are promoted by integration of services. Attempts to address the needs of a homeless person one by one, or in parallel but without coordination, seem less effective than strategies that involve integrated efforts to address multiple needs. For example, homeless persons who have both a mental illness and a substance abuse problem seem to benefit more from integrated treatment programs than from programs that approach these problems separately. Similarly, the effectiveness of employment and training programs for the homeless is enhanced by linkage to housing assistance and supportive services. The importance of integration is attributable in part to fragmentation of the homeless service-delivery system, so that addressing a homeless person’s needs often requires multiple organizations. Case managers may facilitate integration by helping the homeless obtain services in ways that complement rather than conflict with one another. In addition, organizations that serve the homeless may collaborate to promote integrated, comprehensive service provision. At least one-third of homeless veterans have a serious mental illness. These disorders are more common among the homeless, and particularly among the episodically and chronically homeless, than among those who are domiciled. Disorders such as schizophrenia or severe depression can have markedly disabling effects on multiple aspects of a person’s life, including employment, housing stability, interpersonal relationships, and physical health. Specific psychiatric symptoms vary across disorders, but these illnesses often involve impairments in judgment, motivation, and cognitive and social skills, difficulties that not only contribute to housing instability but also limit the person’s ability to obtain treatment. Because of their impairments, the seriously mentally ill homeless may find it particularly difficult to negotiate the complexities of a fragmented service delivery system. Several researchers have focused on outreach and case management strategies for this homeless subgroup, finding that the seriously mentally ill homeless can be helped through such strategies. 
Some seriously mentally ill persons are able to function well, typically with the aid of psychiatric medication, but others face recurrent or persisting difficulties even with medication. Neither independent housing nor full-time work may be reasonable goals for some of these persons. Instead, a successful outcome might involve increased housing stability (perhaps in a group home), fewer and shorter psychiatric hospitalizations, and improved daily living skills. Thus, homeless services are often targeted to helping the homeless maximize self-sufficiency, which may or may not mean achieving economic or housing independence. About half of homeless veterans have a substance abuse problem, whether a cause or consequence of homelessness, which makes intervention more complicated. Several studies have suggested that housing and employment stability are impeded by ongoing substance use, and many housing options for the homeless require abstinence. On the other hand, many homeless substance abusers are initially unwilling to accept the goal of sobriety, although they may be willing to accept substance abuse treatment once some of their other needs are met. Thus, low-demand alternatives to the street (such as safe havens) have been advocated as a necessary part of a full continuum of care for the homeless. Although research has not yet determined what specific strategies are most effective with homeless substance abusers, initial findings suggest that drop-out rates are often high and the gains made by those who complete treatment programs are not necessarily maintained. Thus, ongoing contact may be necessary for long-term improvement. Too new to have been clearly evaluated, New Directions, associated with the West Los Angeles VA Medical Center, offers substance abuse treatment and job training/job placement services to medically stable substance abusers who do not have serious mental illnesses. Among the most difficult to treat are homeless persons with both a serious mental illness and a substance abuse problem. About one-half of veterans with serious mental illness also have a substance abuse problem. Compared with other homeless persons, these dually diagnosed persons tend to have longer and more frequent episodes of homelessness, are harder to engage and retain in treatment, and require more services. Nonetheless, early research has indicated some promising approaches for the dually diagnosed homeless. For example, results of a randomized clinical trial of one case management strategy, Critical Time Intervention (CTI), suggested that homelessness was reduced among a group of seriously mentally ill men, many of whom were substance abusers. Compared with a control group of similar homeless men who received services as usual (for example, referrals), CTI was associated with a greater reduction in homelessness throughout a period that included a 9-month intervention phase and a 9-month follow-up phase. As another example, empirical evaluation of a program established by Vietnam Veterans of San Diego for substance abusing veterans, many of whom also suffered from PTSD or depression, yielded positive housing, employment, and substance abuse outcomes at a 6-month follow-up. Some veterans are referred to this program through the San Diego VA Medical Center. Long-term follow-up research with the dually diagnosed homeless suggests that setbacks are not uncommon, but that increases in residential and psychological stability are possible.
Medical problems are also common among the homeless, with rates of illness and injury estimated at two to six times higher than among those who are housed. Typical conditions of homelessness—poor nutrition and hygiene; fatigue; and exposure to the elements, violence, and communicable diseases—contribute to poor health and make recovery from illnesses more difficult. Physical illnesses commonly reported among the homeless include respiratory infections, trauma (for example, lacerations, fractures, and burns), hypertension, skin disorders, gastrointestinal diseases, peripheral vascular disease, musculoskeletal problems, and dental and visual problems. Rates of tuberculosis and human immunodeficiency virus (HIV) are higher among the homeless than among the housed. It has been reported that the homeless end up using expensive health care alternatives, including emergency and inpatient services, and mortality rates among the homeless have been estimated to be three to four times higher than in the general population. Lack of adequate housing can exacerbate illnesses among the homeless. To illustrate this issue, persons with homes can typically deal with acute respiratory infections or chronic disorders such as hypertension or diabetes through a combination of medications, diet, and rest. Those living on the street or in shelters, however, may lack access to appropriate meals, safe storage facilities for medications and medical supplies, or the opportunity for adequate rest. As a result, health may deteriorate, and resultant long-term medical complications may further interfere with the person’s ability to exit homelessness. Convalescent care facilities, such as Christ House, a residential treatment facility with which the Washington, D.C., VA Medical Center contracts for services, provide medical care for homeless persons who do not warrant (and are not being considered for) inpatient medical treatment, but whose medical conditions are likely to worsen without proper attention in a stable environment. Haven II, affiliated with the West Los Angeles VA Medical Center, provides short-term housing for veterans who have been discharged from an inpatient medical unit but are still recuperating. Once medically stabilized, homeless persons served by these facilities can be referred to other housing options. For those homeless individuals who are able to work, research on job training suggests some promising strategies. Services have been provided through the Department of Labor’s Job Training for the Homeless Demonstration Program to over 45,000 homeless persons since 1988. More than a third obtained jobs, and half of those were employed 13 weeks later. Results suggest that ongoing case management, work readiness training, assistance in locating work, and postplacement support are among the elements that contribute to obtaining and maintaining employment. The Welfare-to-Work program at L.A. Vets, associated with the West Los Angeles VA Medical Center, incorporates many of these components. Experts agree that the continuum of care for the homeless must include a range of housing and treatment options, and that flexibility is needed to match homeless persons to appropriate services. Housing options should include emergency shelter, transitional housing, and permanent housing, all linked to supportive services. Housing and residential treatment programs should include options suitable for mentally ill, substance abusing, dually diagnosed, and convalescent persons. 
Although relatively few programs for the homeless have been empirically evaluated, the available research includes some promising approaches. Experts also note that attention to the individual’s preferences is important, and that failure to acknowledge those choices may reduce the effectiveness of intervention. Because the homeless have diverse needs and local resources vary, flexibility is needed in serving individuals and in arranging partnerships among organizations. As VA facilities attempt to develop a continuum of care for homeless veterans, variations in local needs and resources will result in different patterns of involvement for VA and its partners. Because homeless veterans differ from one another in their needs, no single treatment program can serve all veterans with equal effectiveness. Recent federally funded research projects suggest there are beneficial long-term effects attributable to certain strategies for serving mentally ill and substance abusing homeless persons, which VA could replicate. Local programs designed to serve these groups are likely to be important components of any continuum of care for the homeless. To maximize the effectiveness of its homeless dollars, VA should direct its resources to those programs and partnerships that show the greatest potential for increasing housing stability and reducing the risk of reentry into homelessness. Research on program effectiveness can provide the information needed to make decisions about how to direct these resources. To better understand the effects of VA’s homeless programs and ways to improve or enhance its programs, a series of program evaluation studies should be conducted to address long-term effects, processes associated with positive outcomes, and program impact. Thus, VA could design follow-up studies to examine the stability of housing and employment in the year or two after program discharge. VA could also undertake outcome evaluations designed to assess program processes to better understand how desirable outcomes are produced. Such studies could identify aspects of treatment that are associated with positive outcomes for veterans with different conditions. Finally, VA could estimate how program outcomes differ from outcomes that would be likely in the absence of the program. For example, results observed for a sample of homeless veterans who received a particular kind of treatment could be compared with results for a similar group of veterans who did not receive that treatment. In its fiscal year 2000 budget, VA requested an additional $50 million for its homeless programs and indicated its desire to invest some of those funds in evaluating its homeless programs. Even though evaluation research can be difficult and expensive to conduct, such studies are necessary to ensure that VA directs its resources to those efforts with the greatest potential for beneficial effects. We recommend that the Secretary of Veterans Affairs direct the Under Secretary for Health and the Assistant Secretary for Planning and Analysis to collaborate on conducting a series of program evaluation studies to clarify the effectiveness of VA’s core homeless programs and provide information about how to improve those programs. Where appropriate, VA should make decisions about these studies (including the type of data needed and the methods to be used) in coordination with other federal agencies with homeless programs, including HHS, HUD, and Labor.
In commenting on a draft of this report, VA generally agreed with our findings and the thrust of our recommendation. VA suggested, however, that our recommendation be modified to recognize the role of the Assistant Secretary for Planning and Analysis in coordinating the Department’s program evaluations under the Results Act. We made this change. VA also identified several recent initiatives and planned actions to evaluate VA’s homeless program efforts, which we incorporated into the report. Finally, VA provided other comments regarding technical aspects of the report, which we incorporated as appropriate. (See app. VII for VA’s comments.) Copies of this report are being sent to the Honorable Togo West, the Secretary of Veterans Affairs; Senator John D. Rockefeller IV, Ranking Minority Member, Senate Veterans’ Affairs Committee; other interested congressional committees; and interested parties. Copies will be made available to others upon request. Please contact me at (202) 512-7111 if you have any questions about this report. Other GAO contacts and staff acknowledgments are listed in appendix VIII. In conducting our review, we interviewed officials at VA headquarters, Veterans Integrated Service Networks, VA’s Northeast Program Evaluation Center, researchers who study homeless issues, and representatives of veterans service organizations. We visited homeless programs at VA medical centers and community-based providers with whom they have partnerships; these sites were in Little Rock, Ark.; Denver, Colo.; Washington, D.C.; Los Angeles, Calif.; and San Diego, Calif. We also visited a community-based program in New York, N.Y., that is not affiliated with VA, and we attended a VA Community Homeless Assessment, Local Education and Networking Groups meeting. We analyzed annual NEPEC reports and other reports and documents relating to VA’s homeless programs. To describe the programs and approaches used by VA to assist homeless veterans, we obtained documents from VA headquarters and NEPEC that identified and provided detailed information about VA’s homeless efforts. To determine what VA knows about the effectiveness of its homeless programs, we reviewed NEPEC reports issued since the inception of VA’s homeless programs. NEPEC generally issues annual reports on its two major homeless programs, the Homeless Chronically Mentally Ill (HCMI) and Domiciliary Care for Homeless Veterans (DCHV). We discussed these reports and program effectiveness issues, including performance indicators and outcome data, with NEPEC staff and VA headquarters officials to better understand how the information is used to monitor and evaluate VA’s homeless programs. In addition, as part of our review of NEPEC’s reporting system, we evaluated the reliability of NEPEC’s data by testing a random sample of 5 percent of 1,059 intake and discharge forms collected during our site visits. We found, based on our limited reliability testing of the data, an error rate of less than one percent. To identify options or approaches for addressing the needs of specific groups of homeless veterans that VA might replicate, we conducted a literature review to clarify issues involving homelessness and identify strategies associated with effective treatment.
The data bases scanned included PsycINFO and several bibliographies regarding homelessness (Federally-Sponsored Research Findings on Homelessness and Mental Illness prepared by the National Resource Center on Homelessness and Mental Illness, HHS Publications Related to Homelessness from the Department of Health and Human Services, the National Institute on Alcohol Abuse and Alcoholism’s ETOH data base, and relevant bibliographies available through Policy Research Associates, Inc.). The focus of this literature review was on federally-funded research into interventions for homelessness. We also spoke with experts and visited community-based programs in New York, Los Angeles, San Diego, and Washington, D.C., that serve different subgroups of the homeless. We reviewed VA’s strategic plan for fiscal years 1998 through 2003 and its homeless performance measures in the FY 1997 Performance Measures for VA Homeless Veterans Treatment & Assistance Programs and VHA Directive 96-051, Veterans Health Administration Special Emphasis Programs. HCHV Outreach. This initiative is similar to the HCMI program, except that the 11 sites included in this program do not offer the residential treatment component. Moreover, these HCHV outreach sites generally do not provide the array of VA homeless programs typically found at HCHV locations with the HCMI program. Under this initiative, HCHV staff perform outreach activities at locations where the homeless congregate, conduct initial intake assessments, and link clients with appropriate and available VA and non-VA homeless service providers. In fiscal year 1997, the number of veterans served by each outreach site varied between 129 and 680. Homeless Compensated Work Therapy (CWT). CWT, also known as Veterans Industries, is a work program that provides veterans with job skills development and a source of income. Work is used as a therapeutic tool to help homeless veterans improve their work habits and mental health. While participating in this program, veterans may receive individual or group therapy and follow-up medical care on an outpatient basis. Currently, 19 homeless CWT program locations exist nationwide supported by VA medical centers. In fiscal year 1997, 1,371 homeless veterans were discharged from these programs. Homeless Compensated Work Therapy/Transitional Residence (CWT/TR). At selected locations, homeless veterans reside in transitional residences while participating in the CWT work program. The transitional residences are community-based group homes; and veterans are required to use a portion of their income from the CWT work program to pay rent, utilities, and food costs. VA owns 15 houses at 9 HCHV program sites which have 142 beds available for homeless veterans while they participate in the CWT/TR program. In addition, VA has contracted with one facility in Washington, D.C., to house 10 veterans. In fiscal year 1997, 132 homeless veterans were admitted to the program, and VA obligated about $3.6 million. Homeless Providers Grant and per Diem (GPD). This program offers grant moneys, through a competitive process, to homeless providers who construct or renovate facilities for transitional housing or other supportive services to homeless veterans. Over a 5-year period, 127 grants have been awarded, and total VA funding for these projects exceeds $26 million. Upon completion of these projects, over 2,700 new community-based transitional housing beds will be available for homeless veterans. 
Housing and Urban Development-VA Supported Housing (HUD-VASH). This interagency housing program combines the resources of HUD and VA to provide homeless veterans with permanent, subsidized housing. Through local housing authorities nationwide, HUD allocates section 8 vouchers for use by homeless veterans. Veterans are required to pay a portion of their income for rent; those without income receive fully subsidized housing. In general, veterans who do not exceed the maximum allowable income can remain in their section 8 housing. Prior to accepting section 8 housing, veterans agree to intensive case management services from VA staff and long-term commitment to treatment and rehabilitation. HUD allocated 1,805 vouchers to local housing authorities; as of September 1998, 1,383 were being used to house former homeless veterans. In fiscal year 1997, VA’s cost to support this program was approximately $5 million. Social Security Administration-VA Joint Outreach Initiative (SSA-VA). This outreach initiative involves the Social Security Administration and VA: staff from both agencies work collaboratively to identify homeless veterans who are eligible for social security benefits but not receiving them. Once veterans are identified, SSA and VA staff take action to expeditiously prepare and process claims so qualified veterans can obtain their benefits as quickly as possible. The SSA-VA initiative currently operates at four HCHV program locations. In fiscal year 1997, 372 applications were filed on behalf of homeless veterans, and 56 awards were received. Supported Housing. This multifaceted program offers a variety of services that vary among sites. In general, staff provide case management services and assist homeless veterans in locating either affordable permanent or transitional housing. In addition, staff offer practical services to homeless veterans to help them relearn daily living skills such as budgeting, shopping, and cleaning. They also assist veterans with job hunting and developing and maintaining good relationships with family members, neighbors, or others. These staff also serve as a link between homeless veterans and VA. As such, they facilitate care by ensuring that veterans obtain whatever services they need to reintegrate into community living. By the end of fiscal year 1997, 26 supported housing sites existed, situated at 23 HCHV and 3 DCHV program locations. During fiscal year 1997, these 26 sites served 1,688 homeless veterans. Veterans Benefits Administration Outreach (VBA). VBA staff work with HCHV and DCHV staff to conduct joint outreach, provide counseling, and offer other activities to homeless veterans, for example, helping them apply for veterans benefits. One of the goals of this program is to expedite the process for benefit claims of homeless veterans. In fiscal year 1997, 2,893 contacts with homeless veterans were made, and as a result of these contacts, 734 were awarded new benefits. Acquired Property Sales for Homeless Providers. VA properties that are obtained through foreclosures on VA-insured mortgages are available for sale to homeless provider organizations at below fair market value. Some of these properties are also available for lease. Since the inception of this program, 120 properties have been sold or leased. Comprehensive Homeless Centers (CHC). This initiative is not a program that provides direct services but is rather an effort to develop an integrated and coordinated system of treatment services for homeless veterans. 
Generally, CHC staff seek to (1) organize and enhance communications and cooperation among the VA homeless programs; (2) cultivate relationships with community-based homeless service providers and organizations; and (3) work with other government entities, including local, state, and federal agencies in the area. These actions help VA and non-VA homeless providers work collaboratively to prevent or eliminate overlap and duplication of efforts, and to streamline the delivery of services to homeless veterans. Direct Leases With Service Providers on Medical Center Grounds. Where underutilized space exists, VA headquarters has encouraged medical centers to lease property on medical center grounds to homeless service providers. Drop-In Centers. These daytime centers offer various services in a safe environment. Veterans can generally receive food and have access to showers and washer/dryer facilities. In addition, veterans can participate in therapeutic and rehabilitative activities and receive information about topics such as HIV prevention and good nutrition. The drop-in centers also function as a point of entry for veterans into other VA homeless programs, including those that provide more intensive services. Psychiatric Residential Rehabilitation and Treatment Program. This program is a 24-hour-a-day therapeutic setting that provides professional support and treatment to chronically mentally ill homeless veterans in need of extended rehabilitation and treatment. There is one funded site in Anchorage, Alaska. VA Assistance to Stand Downs. Over the past 3 years, VA staff have participated in more than 200 community “stand downs” that serve the homeless. Stand downs are 1- to 3-day events that provide the homeless a safe and secure place to obtain a variety of services such as food, clothing, shelter, and other assistance—including VA provided health care, benefits certification, and linkages with other programs. VA Surplus/Excess Property for Homeless Veterans Initiative. With support from the General Services Administration and Department of Defense, VA searches for and obtains federal property such as hats, gloves, socks, boots, sleeping bags, furniture, and other items. These items are distributed to homeless veterans and programs that serve the homeless. Over the past 5 years, this initiative has distributed $42.6 million worth of surplus goods.

HCHV critical monitors (app. IV)
Structural (quantity or intensity of services provided)
Length of stay in residential treatment: 1. Mean days in residential treatment.
Trend in veterans treated: 2. Unique veterans served per clinician. 3. Visits per clinician.
Trend in veterans contacted: 4. Difference from previous year of intakes.
Residence at intake: 5. Literally homeless intakes per clinician.
Supported housing workload: 6. Veterans treated per full-time equivalent employee in supported housing.
Patient characteristics (key characteristics of target population)
Residence at intake: 7. Not strictly homeless.
Length of homelessness: 8. No time spent as homeless.
Trend in length of homelessness: 9. Difference from previous year of not strictly homeless. 10. Difference from previous year of homeless less than 1 month.
Medical and psychiatric indicators: 11. Percentage with serious psychiatric or substance abuse diagnosis.
Trend in psychiatric indicators: 12. Difference from previous year of serious psychiatric or substance abuse diagnosis.
Supported housing: Homelessness at intake: 13. Literally homeless veterans.
Process (how the program operates)
How contact was initiated: 14. Contact through VA or special program outreach.
Trend in outreach indicators: 15. Difference from previous year in contact through outreach.
Selection of veterans for residential treatment: 16. Ratio of veterans with no residence placed in residential treatment versus those not placed. 17. Ratio of veterans with serious psychiatric or substance abuse problems placed in residential treatment versus those not placed. 18. Inappropriate residential treatment. 19. Veterans in hospital day before intake assessment to residential treatment.
Supported housing: Percentage contacted by outreach: 20. VA outreach.
Supported housing: Status of discharges: 21. Mean total days in program.
Outcome (status at discharge from residential treatment)
Deviation from median performance: 22. Successful completion of residential treatment. 23. Domiciled at discharge. 24. Housed at discharge. 25. Employed at discharge. 26. Improved psychiatric symptoms. 27. Improved alcohol symptoms. 28. Follow-up planned at discharge.
Supported housing: Change in problems at discharge: 29. Improved alcohol problems at discharge. 30. Improved psychiatric problems at discharge.
Supported housing: Status of discharges: 31. Mutually agreed-on termination.
Supported housing outcomes: 32. Discharge to homeless or unknown housing.

DCHV critical monitors (app. V)
Turnover rate: 1. Annual turnover rate.
Method of program contact: 2. Community entry (includes outreach initiated by VA staff and referrals by shelter staff or other non-VA staff). 3. VA inpatient and outpatient referrals (includes referrals from the HCHV program).
Usual residence in month prior to admission to program: 4. Outdoors/shelter. 5. Institution (includes health care facilities and prisons). 6. Own house, room, or apartment.
Length of time homeless: 7. At risk for homelessness (HCHV uses the term “no time homeless”).
Appropriateness for admission: 8. No medical/psychiatric diagnosis.
Length of stay: 9. Mean length of stay.
Method of discharge: 10. Completed program. 11. Asked to leave. 12. Left by choice.
Deviation from median performance: 13. Alcohol problems improved. 14. Drug problems improved. 15. Mental health problems improved. 16. Medical problems improved. 17. Housed at discharge. 18. Homeless at discharge. 19. Competitively employed or in VA’s CWT/TR at discharge. 20. Unemployed at discharge.

Specific approaches within the continuum of care for homelessness vary with the needs of the subgroup being served. These needs may involve medical, mental health, substance abuse, or other problems; and different needs may predominate at different times during an episode of homelessness. We visited collaborative programs that target a range of different groups of the homeless (for example, homeless with substance abuse problems, homeless with serious mental illnesses), thus representing different possible elements in a continuum of care for homeless veterans. Each of the programs we reviewed has the potential to be replicated, and we included two projects that have been empirically evaluated. Convalescent Medical Care. Christ House (Washington, D.C.) and Haven II (Los Angeles, Calif.) address the need for convalescent medical care among homeless persons who do not warrant (and are not being considered for) inpatient medical treatment, but whose medical conditions are likely to worsen without continued attention in a stable environment. Christ House in Washington, D.C., is a 34-bed medical recovery facility with a staff that includes nurses, a nurse practitioner, and doctors.
Care is provided to homeless persons with a variety of medical problems, such as postsurgical recovery, temporary instability associated with HIV or diabetes, or sickness from chemotherapy. Homeless veterans placed at Christ House through an HCMI contract may stay for several months, receiving medical attention, sobriety support, and social service support as necessary. Haven II, located on the West Los Angeles VA Medical Center grounds, is a 35-bed step-down care unit run by the Salvation Army. The Medical Center pays a per diem for up to 14 days for ambulatory veterans who have been discharged from an inpatient medical unit, but who are still recuperating and have not yet obtained other suitable housing. Veterans at Haven II receive their medical and mental health treatment through the VA Medical Center. L.A. Vets’ Westside Residence Hall. Targeting formerly homeless veterans who have achieved 90 days of sobriety and who appear ready to obtain and maintain employment, Westside Residence Hall provides housing and supportive services to veterans who are judged to be approaching the transition to permanent housing. A renovated dormitory, Westside Residence Hall is divided into suites, each with several single or double rooms. Meals are served through a food reprocessing and redistribution business that also employs and trains some of the residents, and the facility has an Economic Development Center, where residents can pursue employment opportunities. L.A. Vets is a joint venture between a for-profit corporation and a nonprofit one. Westside Residence Hall, Inc., the for-profit corporation, owns and manages the building, and is geared to generating enough cash to be self-sustaining and cover the core administrative costs of the nonprofit corporation, Los Angeles Veterans Initiative, Inc. To be eligible for Westside Residence Hall, veterans must have been homeless or precariously housed, be medically and psychiatrically stable, have achieved 90 days of sobriety, be willing to submit to random toxicology screening, be actively involved in ongoing sobriety support (if a history of substance abuse was involved), be judged able to function independently and to seek employment, and be able to pay rent. Current rents range from $255 through $400. Westside Residence Hall has two separate programs, a supported housing program and a welfare-to-work program. About 250 veterans are at Westside Residence Hall as part of the West Los Angeles VA Medical Center’s Supported Housing Program. They receive case management services through VA staff, who work part time at Westside Residence Hall. A VA psychologist also spends time at this facility, and veterans go to the Medical Center for other needed services. Preliminary analyses by the West Los Angeles VA Medical Center staff suggest that veterans stay at Westside for an average of 6 months and that placement at Westside Residence Hall may be associated with a reduced risk of inpatient hospitalization. This analysis also suggests that upon leaving, 54 percent report employment and 36 percent report having obtained both housing and employment; about 45 percent have relapsed at the time of exit. Westside Residence Hall’s welfare-to-work program provides up to 90 days of assistance in obtaining and maintaining employment. Begun in 1997 and funded in part by VA GPD funds, the program supports 100 beds. Sober veterans who appear able and motivated to reenter the job force must actively pursue work while in this program. 
They receive sobriety support, assistance in searching for employment, and services to help them maintain work once it is found. Although the Westside Residence Hall welfare-to-work program is too new to allow clear evaluation, research suggests that job assistance programs for the homeless are enhanced by provision of supportive services and postplacement assistance. Westside Residence Hall is thus designed to address needs that may arise toward the end of an episode of homelessness. According to L.A. Vets, projects such as Westside Residence Hall can be expected to serve at least 30 percent of homeless or precariously housed veterans. They suggest that replication of Westside Residence Hall is likely to require six conditions: (1) a large population of homeless veterans; (2) real estate suitable for adaptive reuse at an affordable cost; (3) geographic proximity to a VA medical center with expert staff committed to serving the homeless and the infrastructure to allow that involvement; (4) ready access to entry-level jobs; (5) willing for-profit and nonprofit partners, including a nonprofit service provider capable of planning and coordinating the project and an entrepreneur to spearhead efforts; and (6) long-term affordable financing. L.A. Vets is currently developing additional similar projects. New Directions. New Directions offers substance abuse treatment and job training/job placement services to medically stable substance abusers who do not have serious mental illnesses and who are not receiving medications for psychiatric conditions. In a renovated building it leases on the grounds of the West Los Angeles VA Medical Center, New Directions operates a long-term residential treatment program. Beginning, if necessary, with medication-free detoxification, residents enter a highly structured substance-abuse treatment program, which can take from 3 to 9 months, and then a vocational program, which can take up to 2 more years. Homeless program staff at the West Los Angeles VA Medical Center reported that as many as a third of their homeless veterans could be considered for placement at New Directions. New Directions receives a per diem rate through an HCMI contract for the first 30 days and through GPD funds for an additional 60 days. The facility has 24 detoxification beds, 64 long-term substance abuse treatment beds, and 40 beds for those in the vocational phase. It also has 24 shelter-plus-care beds, partially funded by HUD, for veterans who have completed the recovery phase of their treatment but have multiple disabilities. Residents with income are expected to pay a maximum of 25 percent of their income toward rent. In operation for just over 1 year, New Directions is too new to permit clear evaluation of its effectiveness. New Directions staff reported that about one-third of their residents are considered to have successfully completed the program, and about one-third drop out of treatment within the first 60 days. Long-term residential treatment for substance abuse has not been clearly shown by other research to be any more or less effective than other treatment approaches, and questions remain about what treatments are most effective for homeless substance abusers. Among the homeless, highly structured programs tend to have somewhat higher drop-out rates than other strategies. Veterans Rehabilitation Center, Vietnam Veterans of San Diego (VVSD). 
Empirical evaluation of VVSD’s Veterans Rehabilitation Center, which serves primarily substance abusing veterans with post-traumatic stress disorder (PTSD) or serious depression, suggested that it was associated with positive housing, employment, and substance abuse outcomes on 6-month follow-up. An 80-bed facility, the Veterans Rehabilitation Center provides treatment for substance abuse, PTSD, and other psychological disorders while also addressing preparation for employment. Some mental health needs are addressed in coordination with the VA or local Vet Center. If a dually diagnosed veteran is referred to the Veterans Rehabilitation Center through the San Diego VA Medical Center HCMI program, a per diem is paid for up to 90 days. Other veterans are partially supported by a contract with that medical center’s substance abuse treatment program. Residents are asked to pay rent of up to 30 percent of the income they receive during their stay, not to exceed $250 per month. The treatment program includes three phases, each of which typically requires at least 2 months. During the first phase, sobriety is emphasized. During the second phase, residents prepare for work by developing relevant skills. In the third phase, residents actively seek employment and prepare for the transition back into the community. The average length of stay is about 7 months, with a maximum of 1 year. The treatment program is described in a manual that could be used to replicate it. VVSD’s Veterans Rehabilitation Center was one of six promising treatment programs for homeless persons with co-occurring substance abuse or mental health problems that was selected for evaluation through a grant cosponsored by the Center for Substance Abuse Treatment and the Center for Mental Health Services. Data collected 3 and 6 months after veterans left the program suggested that program graduates spent fewer nights homeless and were more likely to be housed stably and independently, more likely to be employed, and less likely to be using alcohol or other substances than participants who left the program prior to completion. Moreover, data from the California Employment Development Department suggested that program participants were not only more likely to be employed, but were earning better wages than a comparison group of homeless veterans who did not participate in VVSD’s Veterans Rehabilitation Center. These results must be interpreted with some caution, as they reflect a single evaluation of the program with follow-up for only 6 months; also, participants were not randomly assigned to the VVSD program or control group. Nonetheless, this evaluation suggests that the VVSD Veterans Rehabilitation Center program offers a promising approach to the treatment of substance abusing veterans with PTSD or depression. NEPEC reports that about 10 percent of the homeless veterans served by the HCHV program have combat-related PTSD (the overall rate of PTSD among homeless veterans is likely to be higher because traumatization and victimization are more common among homeless people than in the general population), about 29 percent have a mood disorder, and about 72 percent have a substance abuse diagnosis. Thus, a substantial proportion of homeless veterans might benefit from this kind of program. Critical Time Intervention (CTI). 
Results of a randomized clinical trial that compared CTI (a case management strategy) to usual services only for seriously, chronically mentally ill (for example, schizophrenic) homeless persons indicated that CTI was associated with a greater reduction in homelessness throughout a period that included a 9-month intervention phase and a 9-month follow-up phase. (“Usual services” were those that the person would have received under normal circumstances, such as referrals to community agencies.) CTI differs from the other specific programs we visited, in that it is an approach to case management rather than a transitional housing or residential treatment program. CTI provides continuity of care during a homeless person’s transition from an institution or the street to a more permanent suitable housing arrangement. Designed to span 9 months, it aims to ease this transition and minimize the risk of relapse to homelessness. Specific goals include performing an ongoing assessment, forming an appropriate long-term plan, establishing linkages to community resources, fostering independent living skills, and ensuring efficient use of services. For those with a substance abuse history, abstinence is a goal rather than a prerequisite. (Although ongoing substance abuse makes intervention more difficult, it allows movement toward a goal of sobriety while other needs are being addressed.) In a study funded by the Center for Mental Health Services, 96 men with severe mental illness who had been placed in community housing were recruited for participation. Half were randomly assigned to receive CTI for 9 months, to be followed by 9 months of only usual services; half were randomly assigned to 18 months of usual services. Data were obtained from 94 of the 96 participants at the 18-month point. Results indicated that those provided with only usual services spent more nights homeless (91 on average) throughout the 18-month assessment interval than did those provided with CTI (30 on average). Moreover, the difference between these groups in the likelihood of spending a night homeless tended to become greater over time. (Research on homeless veterans has more typically indicated that treatment and comparison groups begin to converge rather than diverge after a program ends.) Similarly, fewer of those who had received CTI experienced prolonged periods of homelessness during the 18 months than those who received only usual services. These results are based on a single study, but suggest promising outcomes for seriously mentally ill homeless persons, a particularly hard-to-serve subgroup. To date, CTI has been used most extensively with some of the hardest-to-serve homeless in the New York City shelter system: seriously mentally ill (for example, schizophrenic) persons, many of whom have multiple psychiatric diagnoses, chronic and heavy substance abuse problems, serious medical problems, and long histories of homelessness. NEPEC estimates that about 45 percent of the homeless veterans served by the HCHV program have serious psychiatric problems. Moreover, CTI clinicians believe that their procedures should be appropriate for use with homeless persons with less severe disorders as well. VA is not currently using CTI, although VA officials have indicated their intention to begin a pilot CTI project. Materials are available for training in CTI. 
In addition to those named above, the following individuals made important contributions to this report: Jean Harker reviewed NEPEC's reporting, monitoring, and evaluation systems for VA's homeless programs; Kristen Anderson assisted with the NEPEC review and conducted a literature review of homeless issues focused on interventions for the homeless; Deborah Edwards assisted with designing the job and methodological approaches used to perform the work and acted as an adviser throughout the assignment; Ann McDermott provided technical support; and Robert DeRoy assisted with the reliability testing of NEPEC's data. | Pursuant to a congressional request, GAO reviewed the effectiveness of efforts to assist homeless veterans, focusing on: (1) describing the Department of Veterans Affairs (VA) homeless programs; (2) determining what VA knows about the effectiveness of its homeless programs; and (3) examining promising approaches aimed at different groups of homeless veterans.
GAO noted that: (1) VA's homeless assistance and treatment programs address diverse needs of homeless veterans by providing services such as case management, employment assistance, and transitional housing; (2) VA also provides medical, mental health, substance abuse, and social services to homeless veterans through its hospitals, outpatient clinics, and other health care facilities; (3) because of resource constraints and legislative mandates, VA expanded its homeless veterans efforts by better aligning itself with other federal departments, state and local government agencies, and community-based organizations; (4) the goal of this effort is to develop a continuum of care for the homeless--that is, to identify or create options for addressing the full array of housing, health, and service needs of this population; (5) VA has little information about the effectiveness of its homeless programs; (6) VA has relied on the Northeast Program Evaluation Center (NEPEC) to gather and report information about its homeless programs; (7) each of VA's homeless program sites routinely submits extensive data, mostly related to client characteristics and operations at individual program sites; (8) these data are used primarily to provide program managers with information about service delivery and are of limited use in assessing program effectiveness; (9) to evaluate effectiveness, information must be gathered about intended program results; (10) the outcome measures that NEPEC uses focus on housing, employment, and changes in substance abuse and mental health at the time veterans are discharged from VA's homeless programs; (11) little is known about whether veterans served by VA's homeless programs remain housed or employed, or whether they instead relapse into homelessness; (12) many questions about how to treat homelessness remain unanswered; and (13) experts agree, however, that a comprehensive continuum of care for the homeless--such as that which VA is striving to achieve--should include a range of housing and service alternatives, with specific approaches at any one site reflecting local needs and local resources. |
Some federal grant programs—referred to as pass-through grants—are awarded with a specific requirement that a portion of grant funds be distributed by the initial grant recipient (such as a state or local government) to entities within that grantee's jurisdiction to carry out services. The initial recipients of the funds, known as prime recipients, distribute funds to entities within the jurisdiction, known as subrecipients. For purposes of this report, we focus on grants where state agencies are the prime recipient. Congress may establish a grant program as a pass-through program in its authorizing legislation, and there can be multiple benefits to this structure. Pass-through grants can balance federal interests with state flexibility, as well as leverage the financial and program resources of prime recipients and subrecipients. Pass-through grant funds flow to subrecipients in various ways. Pass-through grants are first awarded to prime recipients, such as states, local governments, or other entities, and then awarded to subrecipients. In some cases, entities that are subrecipients for certain grant programs can be prime recipients for other grant programs. Figure 1 broadly illustrates the flow of pass-through funds and shows how different types of entities can be the prime recipient. Federal grants passed through prime recipients follow the normal steps in a grant life cycle (see figure 2). In addition, prime recipients conduct their own granting process to award funds to subrecipients, mirroring many of the same steps as in the federal agency's grant life cycle. This process typically involves subrecipients applying for grant funds and, if grant funds are awarded, entering into a grant agreement with the prime recipient. After prime recipients enter into agreements with subrecipients, funds may be distributed to the subrecipient. Funds for many grants to subrecipients are distributed on a reimbursable basis, with the subrecipient incurring an expense and then reporting that expense to the prime recipient, who then reimburses the subrecipient. Significant gaps in the time or amount of funding provided could lead to financial instability of the subrecipient. States, as prime recipients, may exercise flexibility in many aspects of pass-through grant administration. In many cases, states are able to determine the funding priorities and set the award process. States may also set their own monitoring plans and schedules within federal requirements. Broadly, a subrecipient is accountable to the prime recipient for use of the federal funds provided by the pass-through grant, and therefore subrecipients send much of the reporting information to states. In general, federal agencies have requirements to monitor how the prime recipient monitors its subrecipients. There is no comprehensive estimate of the amount of federal grant funds awarded from prime recipients to subrecipients. USASpending.gov provides information reported by recipients on funds they awarded to subrecipients for grants greater than or equal to $25,000, but grants of less than $25,000 are not required to be included. According to USASpending.gov data for fiscal year 2012, $79.6 billion was reported as being redistributed to subrecipients. All federal grant programs are subject to a common foundation of governing rules and government-wide requirements, and two are particularly relevant to entities that pass funds on to subrecipients.
The Cash Management Improvement Act (CMIA) governs the exchange of funds between the federal government and the states, and is applicable to timeliness in the grant disbursement process, while OMB's Circular No. A-133, Audits of States, Local Governments and Non-Profit Organizations, outlines the requirements for an annual audit in accordance with the Single Audit Act; such an audit encompasses testing compliance with requirements such as CMIA and policies on allowances for administrative expenses. CMIA provides the general rules for efficient transfer of federal financial assistance between the federal government and states. CMIA requires that state and federal granting agencies minimize the time elapsing between the transfer of funds from the U.S. Treasury and the state's payout of funds for federal assistance program purposes. For pass-through grants, this means that the prime recipient is generally not allowed to draw down its grant funds and retain these funds. Rather, the grant funds must be drawn when a distribution to subrecipients is needed. States may enter into a “Treasury-State Agreement” with the Department of the Treasury (Treasury). This agreement outlines the draw-down and distribution practices for the states for selected large grant programs. OMB's Circular No. A-133 provides general guidance on the roles and responsibilities of the federal awarding agencies and primary recipients of government funds regarding audit requirements of grantees and subrecipients. The circular sets forth guidance implementing the Single Audit Act, which requires certain entities receiving federal awards under more than one federal program to undergo a single audit, which is intended to promote the efficient and effective use of audit resources. Additionally, the circular sets forth standards for obtaining consistency and uniformity among federal agencies for the audits of states, local governments, and nonprofit organizations expending federal awards totaling $500,000 or more annually. Among other responsibilities, the circular gives federal awarding agencies the responsibility to advise recipients of requirements imposed on them by federal laws, regulations, and the provisions of contracts or grants. It gives primary recipients the responsibility to identify subrecipient awards; advise subrecipients of requirements imposed on them by federal laws, regulations, and the provisions of contracts or grant agreements, as well as any supplemental requirements; and monitor the implementation of the grants. Similarly, federal agencies monitor and oversee certain aspects of pass-through grants as part of their monitoring procedures. OMB is responsible for developing government-wide guidance to help ensure that grants are managed properly and that the funds are spent in accordance with applicable laws and regulations. OMB instructions or information issued to federal agencies are referred to as “circulars.” These circulars apply to recipients of federal pass-through awards and they give instructions to federal agencies on implementing policies within their purview, including those that apply to grant programs. In addition, these circulars lay out guidance applicable to grants management in the areas of administration, audits, and cost principles. As part of an entity's single audit, the independent auditor is to test compliance with CMIA.
The auditor examines compliance in areas such as whether the state minimized the time between receipt of federal funds and expenditure, whether there are internal controls in place to help ensure that timely payments are made, and whether any interest earned by the state was reported and remitted. The auditor is to test compliance by comparing a sample of the state's reimbursement requests to determine if they conform to the procedures in the state's Treasury-State Agreement. The independent auditors performing single audits may also review the allowances states withhold for administrative costs before passing funds on to subrecipients. OMB's Circular A-133 single audit compliance supplement states that for programs where a maximum percentage or amount is allowed for the grantees' administrative costs (such as the three programs we reviewed), auditors are to verify that administrative costs were accurately recorded and that these costs do not exceed the allowed amount. As a result of their work, auditors can identify issues with excess administrative funds withheld; however, generally, auditors do not test all transactions or programs, as they use a risk-based approach in their testing. In addition to government-wide requirements, program-specific requirements—found in a grant program's authorizing legislation, appropriation, or implementing regulations—can govern the disbursement of funds for individual grant programs. For example, as with the three programs we reviewed, the authorizing legislation may contain statutory limits on the amount of administrative funds that states and local governments are allowed to withhold from the grant awards for their own administrative expenses. The three programs we reviewed did not set specific requirements for the timing of payments from prime recipients to subrecipients. OMB guidance states that the CMIA and Treasury-State Agreements would set these policies. The three programs we reviewed varied in the amount of grant funds the authorizing statute permits a prime recipient to use for its own administrative expenses prior to distributing funds to subrecipients: The Edward Byrne Memorial Justice Assistance Grant (Byrne JAG) program allows the prime recipient to withhold up to 10 percent for administrative expenses without regard to the award amount. The Community Services Block Grant (CSBG) program allows the prime recipient to withhold up to 5 percent—or $55,000, whichever is greater—for administrative expenses. The grant program also allows for an additional amount to be withheld for special projects, which the state may retain for activities such as training and technical assistance, but the state may also distribute these funds to existing recipients or other recipients to meet state goals, so long as the combined administrative expenses and special project funds do not exceed 10 percent of the total funds made available to the state. The state-administered Community Development Block Grant (CDBG) program allows the prime recipient to withhold $100,000 plus 3 percent of the state's CDBG grant plus program income for administrative expenses. The program requires that any funds spent for administration in excess of $100,000 must be matched by the state. The state may also opt to use up to 3 percent of the grant plus program income for technical assistance, but the combination of administrative and technical assistance cannot exceed $100,000 plus 3 percent of the amount granted to the state plus program income.
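To make the arithmetic in these statutory ceilings concrete, the sketch below restates them as simple formulas and applies them to a hypothetical $10 million state award. The award amount, the zero program-income default, and the reading that CDBG's 3 percent applies to the combined grant and program income are illustrative assumptions for this sketch, not agency guidance.

```python
# Illustrative sketch only: the statutory ceilings on administrative withholding
# described above, applied to a hypothetical award amount.

def byrne_jag_admin_cap(award):
    """Byrne JAG: up to 10 percent, without regard to the award amount."""
    return 0.10 * award

def csbg_admin_cap(award):
    """CSBG: up to 5 percent or $55,000, whichever is greater."""
    return max(0.05 * award, 55_000)

def cdbg_admin_cap(award, program_income=0.0):
    """State-administered CDBG: $100,000 plus 3 percent of the grant plus
    program income. The 3 percent is applied here to the combined grant and
    program income, which is one reading of the statutory language."""
    return 100_000 + 0.03 * (award + program_income)

if __name__ == "__main__":
    award = 10_000_000  # hypothetical state award
    print(f"Byrne JAG ceiling: ${byrne_jag_admin_cap(award):,.0f}")  # $1,000,000
    print(f"CSBG ceiling:      ${csbg_admin_cap(award):,.0f}")       # $500,000
    print(f"CDBG ceiling:      ${cdbg_admin_cap(award):,.0f}")       # $400,000
```

Under these assumptions, the ceilings on a $10 million award work out to $1.0 million for Byrne JAG, $500,000 for CSBG, and $400,000 for CDBG; actual limits depend on the program rules and award documents.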
As part of program monitoring procedures to help ensure states comply with federal requirements and agency regulations, federal agencies oversee aspects of pass-through grants related to the administrative funds states withhold and the timeliness of reimbursement practices. Federal agencies can establish their own monitoring procedures and develop their own monitoring tools, and our review of the three selected agencies' procedures indicated the agencies address both of these issues in their monitoring tools and guidance: At HHS, monitoring plans allow for on-site monitoring to occur for the CSBG grant at approximately five states each year. Additional monitoring visits may be conducted if problems are identified during the course of program administration. Monitors review compliance with CMIA as well as the amount of administrative fees withheld. At HUD, grantees may receive on-site monitoring or desk reviews (off-site agency reviews of documents submitted by the grantee). Some states receive on-site monitoring annually, while other states have a longer gap between these site visits, determined by a risk-based scoring method. CDBG program monitoring protocols include requirements for reviewing the amount of administrative funds withheld. HUD also requires grantees to submit an annual performance report, including financial information, which is used to aid in identifying any potential payment issues. At DOJ, monitoring procedures provide for some on-site monitoring reviews in addition to annual desk reviews of its grantees. The frequency of on-site monitoring reviews is based on several factors, including risk assessment, resource availability, and whether the state has had a recent audit. DOJ monitoring reviews are to look at a state's timing of its fund distributions, which is generally done by reviewing a sample of drawdown transactions. The three selected federal agencies also include a review of the results of states' single audits as part of their monitoring procedures for periodic reviews. Reviewing single audit reports can help identify potential internal control issues or any program-specific issues from past audits. For example, DOJ financial monitoring protocols require the monitoring staff to review the most recent single audit to determine if there are findings related to department programs, which could identify cash disbursement delays or excess administrative funds withheld. Protocols specific to the CSBG program require monitoring staff to review the single audit report as well as financial documentation used to support that audit and documentation of any actions taken to resolve audit findings. Agency officials we spoke with said that reviews of single audits are helpful parts of the monitoring process, but that single audits rarely identify specific issues with delayed reimbursement or excess administrative funds withheld by the state. Circular A-133 requires audited entities to respond to findings with actions they plan to take and deadlines for completing these actions. While states are required to meet the government-wide requirements and specific program requirements, they can set their own practices for distributing funds to subrecipients. For example, the reimbursement cycle—how frequently states reimburse subrecipients for grant expenses—can vary across and within states. States have some flexibility in determining whether a grant will be distributed on a reimbursement basis or through a cash advance.
For the selected programs in the selected states, the frequency with which subrecipients were allowed to request a reimbursement ranged from whenever the subrecipient needed funds to once a quarter. For example, in Tennessee, all selected grant programs allowed subrecipients to request reimbursements once a month. In Massachusetts, CSBG subrecipients received a portion of their grant on a monthly basis, while recipients of the Byrne JAG program were generally allowed to request reimbursements once a quarter. Table 2 shows this variation across the selected states and grant programs we reviewed. The states' procedures allow subrecipients to receive grant reimbursements once an approved invoice or payment request is received. According to state officials we interviewed, subrecipients generally receive payment within 30 days, particularly since the payments are delivered through electronic fund transfer. However, according to these state officials, if reimbursement delays occur, they are generally related to a lack of documentation required for approval. States may withhold a portion of a grant to help defray the costs of managing the grant, and the states we reviewed exercised some flexibility in their use of these funds. Allowable administrative costs can generally include areas such as personnel or accounting costs. For pass-through grants, the state may be able to use these funds for expenses related to monitoring subrecipients. States may specify how they intend to use administrative funds in documents submitted to the granting agency. The states and programs we reviewed varied in how they described their use of administrative funds. For example, as noted earlier, according to HHS documents, up to 5 percent of a state's CSBG funds can be reserved for administrative expenses, including monitoring activities. Furthermore, state plans may provide more specific information regarding the use of administrative funds. For instance, in the CSBG program for Massachusetts, a portion of the 5 percent allocated for CSBG administrative expenses and monitoring activities is earmarked for staff salaries and associated fringe benefits. Another portion is earmarked for direct administrative expenditures, such as office supplies, travel, and state overhead. Federal program officials we spoke with said most states use the maximum administrative funds allowed. They noted that they had seen some states in the past use less than the maximum—using these funds for service delivery instead—but they have seen these states begin to withhold the maximum amount. Most state program officials we spoke with said that they did withhold the full amount allowed for these costs. However, in Tennessee, CSBG granting officials said they did not always use the 5 percent allowed for special projects, but they plan to use the full 5 percent in the future. For the selected programs and states we reviewed, we found that states worked within the federal requirements of their grant programs and reimbursed the subrecipients within the time allowed in their grant agreements. State agency officials for the selected federal programs in each of the three selected states told us they had not received any complaints from their subrecipients regarding timeliness of grant reimbursements. Each of the three states had established procedures and used automated systems to reimburse its subrecipients.
In addition, we learned that the states withheld administrative funds for federal grant programs appropriately, in accordance with the amounts set by the programs. Our review of the most recent federal agency monitoring reports for the three programs reviewed in Illinois, Massachusetts, and Tennessee showed that monitors found no issues related to excessive administrative allowances or delays in fund disbursement. Monitoring reports were completed for all selected states from 2008 through 2012. For the Byrne JAG and state-administered CDBG program, all selected states had at least one on-site review during this time period. For the CSBG program, HHS conducted one on-site review of our selected states and is currently drafting a report of the review. In addition, recent single audit results for these three states do not indicate significant noncompliance related to administrative allowances or fund disbursements. We reviewed single audit reports for fiscal years 2009 through 2011 and identified one finding where a state (Illinois) took longer to reimburse a subrecipient than allowed—the auditor identified three reimbursement payments that were three or fewer days late. We found no instances of excess administrative funds being withheld. In addition, single audit reports may identify noncompliance related to other aspects of the pass-through grants process, such as reporting and subrecipient monitoring. For example, concerning reporting, in Tennessee's 2010 Single Audit report, the state's Department of Human Services—the state agency that administers the CSBG program for HHS—either did not submit federally required financial reports or did not submit them on a timely basis. The federal government requires these financial reports to be filed as one method to monitor the programs funded by the CSBG program. Similarly, in the same report, for the state-administered CDBG program, the state's Department of Economic and Community Development did not file quarterly reports to HUD in a timely manner. Concerning subrecipient monitoring, Tennessee's 2011 Single Audit report indicated that the Department of Human Services did not have procedures in place to ensure that subrecipients were audited when required. Subrecipients in our focus groups did not report instances in which federal requirements related to reimbursement timeliness or administrative funds withheld were not followed, and therefore they were not affected by noncompliance with these requirements; however, they did identify other grant management issues. Some subrecipients who commented on the timeliness of reimbursement said they were reimbursed in a timely manner. Other subrecipients we interviewed said they were aware of the requirements for administrative funds withheld by states for their pass-through grant program. Nevertheless, even though their states generally complied with federal regulations, some subrecipients we interviewed expressed concerns related to reimbursement timeliness and administrative funds withheld. While these concerns generally did not identify instances of noncompliance with federal requirements, they did illustrate how the pass-through grant process, subrecipients' perceptions of the process, and state practices can potentially impact subrecipients.
For instance, although their states withheld administrative funds within federal law, a few subrecipients we spoke with expressed frustration over the amounts withheld because they did not feel their organizations were being adequately reimbursed for their own administrative expenses. In 2010, we reported finding differences in the rate at which state and local governments reimburse nonprofit organizations in select states. In particular, we found that these differences, including whether nonprofit organizations are reimbursed at all, depend largely on the policies and procedures of the state and local governments that award federal funds to nonprofit organizations. In addition, states may have their own processes to manage the disbursement of state grant funds, which can affect subrecipients. For example, subrecipients we spoke to in Illinois cited a delay in receiving reimbursements of grant funds from their state general fund, which some subrecipients said negatively affected the services they provide. According to Illinois state officials, because the state has insufficient cash to meet all obligations and has set priorities for paying monies from the Illinois general fund, there can be up to a 9-month delay in disbursing state grant funds that originate from the state's general fund. According to a quarterly report issued by the Illinois Comptroller's Office, as of December 31, 2012, pending vouchers from the state's general fund dated back to August 2012. While these funds are not federal grant funds, delays of this nature could affect a subrecipient's ability to deliver services, particularly if the subrecipient is a smaller organization. As subrecipients, nonprofit organizations often receive grants from multiple sources to fund their services, and absent a sufficient safety net, such delays in funding could hinder a nonprofit organization's ability to continue to effectively partner with the federal government to provide services to vulnerable populations. According to one subrecipient we interviewed, despite having foundation funds to help mitigate cash flow issues, his organization had to cut programs that served vulnerable populations—programs funded, in part, by pass-through grants—because of issues with the state funding for these services. In focus groups we held with subrecipients, several other concerns were raised that can be linked to the multiple layers involved in managing pass-through grants. For example, the award process for pass-through grants involves two steps—allocating funding to states and awarding funds to subrecipients—which subrecipients said could extend the time it takes to receive a grant and cause funding uncertainty for a subrecipient. Furthermore, although monitoring serves as an important tool for internal control, distinct federal and state monitoring requirements may lead to additional responsibilities for subrecipients. For example, some subrecipients we interviewed said they may have to report the same or similar information to multiple granting entities, resulting in duplicative or redundant reporting. Federal agencies may require states, as part of their responsibilities as pass-through entities, to conduct monitoring site visits of subrecipients; however, the three federal agencies we selected may also conduct site visits to select subrecipients. Some subrecipients in our focus groups that are required to have a single audit expressed frustration that state monitors are looking at much of the same information contained in the single audit.
Some subrecipients in our focus groups said they dedicate a significant amount of time to each step of the monitoring process, so duplicative or redundant reporting may reduce the amount of time they can devote to service delivery. We have additional work under way for the Senate Committee on Homeland Security and Governmental Affairs that looks more closely at federal grants management reform efforts, including what actions have been taken to address challenges such as communicating with grantee recipients. We plan to issue the results from this work later this spring. We provided a draft of this report to the Secretaries of HHS and HUD, the Attorney General, and the Director of OMB for review and comment. Each agency provided technical comments, which were incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees and the Secretaries of HHS and HUD, the Attorney General, and the Director of OMB. The report also is available at no charge on the GAO website at http://www.gao.gov. If you or your staff has any questions concerning this report, please contact me at (202) 512-6806 or czerwinskis@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors are listed in appendix II. Our objectives were to report on (1) government-wide and select agencies’ requirements and oversight related to the timeliness of federal grant funds from states to subrecipients and the portion of funds states may withhold for their own administration, (2) select states’ practices in disbursing federal grant funds to subrecipients and the extent to which select federal granting agencies have identified compliance issues with these requirements, and (3) the views of subrecipients on the impact of selected states’ practices in disbursing grant funds. To determine the government-wide and select agencies’ requirements governing the timeliness of federal grant funds from states to subrecipients and the portion of funds states may withhold for their own administration, we identified Office of Management and Budget’s (OMB) circulars related to grants management and reviewed these circulars to determine the extent that they related to pass-through grants. We discussed these circulars with OMB officials. We selected three federal pass-through grant programs to illustrate how federal agencies manage pass-through grant programs, including requirements for distribution and monitoring practices. To select these programs, we examined data on grant programs from USASpending.gov to determine the amount federal agencies awarded in grant funds across the agency and the amount of grant funds awarded by grant program. We identified programs that had significant pass-through requirements with a range of subrecipients. We also identified programs in which we had conducted past work in order to leverage resources. Table 3 presents the programs we selected and the criteria for this selection. To determine states’ practices in disbursing federal grant funds to subrecipients, we identified three states to illustrate variation in states’ management of pass-through grants. We based the decision on state population, the amount of federal grants awarded to the state, and the census region (as shown in table 4). 
We also considered the per capita grant amount for each state, recommendations of subject matter experts and stakeholders in the field of grant administration, and results from a 2010 Urban Institute study on nonprofit organizations' perceptions of states' grant administration practices. With the subject matter experts' recommendations and the survey results, we identified states with a range of reputations in grant administration. We conducted site visits at each of these three states, interviewing state financial control officials, such as staff from the state auditor or state comptroller office, to identify state procedures for managing grant funds. We also interviewed administrators of the three selected programs to determine their procedures for administering the selected grant programs. We reviewed monitoring reports for select states' administration of the selected programs. We also reviewed 3 years' worth of single audit reports for the select states to identify potential cash management issues. While the programs and states we reviewed present differences in the management of pass-through grants, they do not represent a generalizable sample; thus, information we obtained from them cannot be generalized to all federal agencies and related grant programs or state recipients. However, they provide insights and examples related to pass-through grants management. To determine the impact on recipients of state practices in disbursing grant funds, we convened focus groups of subrecipients in each of the three states we visited. In each state, we conducted two focus groups: one of subrecipients from local governments and one of subrecipients from nonprofit organizations. The primary criterion for selecting participants was that they were subrecipients of federal grants. To identify these subrecipients, we used sources including the Single Audit Clearinghouse, referrals from state agency officials or nonprofit organizations, and state-specific sources of information on grantees. Each focus group had from 4 to 8 participants, for a total of 34 participants across the three states. At these focus groups, we discussed how federal and state management of pass-through grants positively impacted their organizations, as well as suggestions for improvement. We also reviewed external literature and discussed concerns with stakeholder groups including the National Council of Nonprofits and the National Association of State Auditors, Comptrollers, and Treasurers. We conducted this performance audit from May 2012 to April 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Carol Patey, Assistant Director; Veronica Mayhand; Sarah McGrath; Jeffrey Niblack; Robert Robinson; Cynthia Saunders; Albert Sim; Sabrina Streagle; and Michelle Wong made key contributions to this report. | Grant programs in which states are awarded federal grants but then pass funds on to subrecipients--entities within states' jurisdiction--are referred to as "pass-through grants." These relationships pose challenges for the management of grant programs, as the multiple levels add complexity to the flow of funds, administration, and oversight.
As requested, GAO examined management and oversight of pass-through grants. This report addresses (1) requirements and oversight related to the timeliness of federal grant funds from states to subrecipients and the portion states may withhold for their administration, (2) select states' practices in disbursing federal grant funds to subrecipients and the extent to which select federal granting agencies have identified compliance issues with these requirements, and (3) the views of subrecipients on the impact of selected states' practices in disbursing grant funds. To conduct this study, GAO selected states and programs based on characteristics affecting pass-through grants management. GAO reviewed documentation on government-wide regulations and selected federal pass-through grant programs; reviewed monitoring reports and audits of state pass-through entities; and interviewed federal and state officials, as well as subrecipients and others with relevant expertise. Findings cannot be generalized to all states and programs, but GAO's work provides insights related to pass-through grants management. GAO makes no recommendations in this report. OMB and selected federal granting agencies provided technical comments, which were incorporated as appropriate. As pass-through grant funds flow to subrecipients, they are subject to government-wide and program-specific policies, two of which are particularly relevant to disbursement issues for states as they pass funds on to subrecipients. Pass-through grants are typically first awarded to states, local governments, or other entities and then further awarded to subrecipients. The Cash Management Improvement Act governs the exchange of funds between the federal government and the states and is applicable to timeliness in the grant disbursement process. In addition, the Office of Management and Budget's (OMB) Circular No. A-133, Audits of States, Local Governments, and Non-Profit Organizations, provides general guidance on the roles and responsibilities of the federal awarding agencies and primary recipients of government funds regarding audit requirements of grantees. Specific program policies can provide additional requirements for individual grant programs related to disbursement of funds. For example, as with the programs GAO reviewed, authorizing legislation may contain statutory limits on the amount of funds that states and local governments can withhold from the grant awards for their own administrative expenses. To ensure states comply with federal requirements and agency regulations for disbursing federal grant funds, federal agencies monitor aspects of pass-through grants related to administrative costs that states withhold and timeliness of reimbursement. According to their monitoring procedures, selected federal agencies also review the results of states' "single audits"--annual audits performed on many recipients of federal funds. Selected states' pass-through grant disbursement practices varied for the three programs GAO reviewed, but generally complied with federal requirements. For example, states had some flexibility in determining whether a grant would be distributed on a reimbursement basis or through a cash advance. For the programs and states GAO reviewed, GAO found that states generally worked within the federal parameters of their grant programs and reimbursed the subrecipients within the time allowed in their grant agreements.
Subrecipients in GAO focus groups reported minimal issues with the timeliness of federal funds reimbursement and the administrative funds that states withheld. In addition, these subrecipients did not report instances in which federal requirements related to reimbursement timeliness or administrative funds withheld were not followed, and therefore they were not affected by noncompliance with these requirements.
HUD’s homeownership centers support the single-family activities of FHA. FHA insures lenders against losses on mortgages for single-family homes. Lenders usually require mortgage insurance when a homebuyer makes a down payment of less than 20 percent of the value of the home. Thus, FHA plays a particularly large role in certain market segments, including loans to low-income borrowers and first-time homebuyers, whose cash for down payments is likely to be limited. During fiscal year 2000 alone, FHA endorsed more than 900,000 mortgages totaling about $94 billion. As of June 2001, the total value of HUD’s single-family insured portfolio was almost $498 billion. If a borrower defaults and the lender subsequently forecloses on an FHA-insured mortgage, the lender can file an insurance claim with FHA for the unpaid balance of the loan. When FHA reimburses a lender for a defaulted loan, HUD receives the deed to the foreclosed property. HUD, in turn, sells this property via one of its management and marketing contractors to recoup as much of FHA’s reimbursement costs as possible. In the past, HUD carried out its single-family activities—such as processing mortgage insurance and overseeing lenders participating in FHA’s programs—in 81 separate field offices. As part of its 2020 Management Reform Plan announced in 1997, HUD consolidated the single-family housing activities of its 81 field offices at 4 homeownership centers. According to the 2020 plan, the homeownership centers would, among other things, (1) improve service to lenders through automated systems; (2) provide faster, more uniform, and more efficient services to lenders, borrowers, and industry clients; and (3) improve HUD’s risk assessment, loss mitigation, and quality assurance activities. The consolidation of activities at the four centers was carried out in phases and was substantially completed in December 1998. The homeownership centers are located in Atlanta, Georgia; Denver, Colorado; Philadelphia, Pennsylvania; and Santa Ana, California, and they report directly to HUD’s Deputy Assistant Secretary for Single Family Housing. They perform a variety of activities that fall within three basic functions: Lender oversight—Many activities formerly performed by FHA staff have been delegated to lenders, increasing the importance of the centers’ oversight of lenders’ performance. The centers are responsible for granting direct endorsement authority to lenders participating in FHA programs.Before granting lenders direct endorsement authority, the centers evaluate mortgages that the lenders have submitted as test cases using FHA’s underwriting requirements. To ensure lenders’ continued compliance with FHA’s mortgage requirements, the centers use two monitoring tools: (1) desk audits of the underwriting quality of individual loans already insured by FHA, known as technical reviews, and (2) on-site evaluations of lenders’ operations, known as lender reviews. Contractor oversight—As many activities formerly performed by FHA staff have been transferred to contractors, the centers have become responsible for overseeing contractors’ performance. For instance, the centers monitor contractors hired to review loan case files and issue mortgage insurance certificates. 
The centers also oversee contractors hired to manage and market acquired single-family properties, inspect 10 percent of the properties handled by each of the management and marketing contractors, and review 10 percent of the management and marketing contractors’ property case files each month. Customer service—Each center has designated customer service staff who respond to requests from the general public (for basic information on HUD and FHA programs) and industry clients (for support on FHA programs, services, and information systems). Designated program staff also provide more advanced customer service, responding to technical questions on loan underwriting and property disposition. In total, the centers average almost 90,000 telephone calls a month. HUD recognized that its decision to consolidate FHA’s single-family field activities at four homeownership centers would dramatically change its business processes and require changes to its single-family information systems. In its April 1998 assessment of the homeownership centers’ susceptibility to waste, fraud, abuse, and mismanagement, HUD noted the proliferation of financial management systems within the Department and the need to replace them with an integrated, state-of-the-art system. It also observed that two major single-family information systems would have to be modified to reflect the new business processes at the centers. Despite this recognition, FHA’s failure to enhance its information technology systems to support its business processes more effectively was cited as a material weakness in the HUD Inspector General’s last three reports on FHA’s financial statements. For example, according to the Inspector General’s report on FHA’s fiscal year 2000 financial statements, FHA’s inability to acquire more modern information technology has deterred its efforts to be a more efficient and effective housing credit provider. Also according to the report, FHA will be forced to use less efficient processes to collect and report on data until a comprehensive new integrated information technology environment is implemented. A best practice used in the public and private sectors to improve existing information systems and develop new ones efficiently and effectively is the development, maintenance, and implementation of an enterprise architecture, also known as an information technology architecture. The Clinger-Cohen Act requires agency chief information officers to develop, maintain, and facilitate the implementation of sound and integrated information technology architectures. An agency’s architecture should be an integrated framework for evolving or maintaining existing information technology and acquiring new technology to achieve the agency’s strategic and information resource management goals and better support its business needs. According to the Office of Management and Budget (OMB), to develop an enterprise architecture, an agency should identify and document its business processes, information flows and relationships, applications, data descriptions and relationships, and technology infrastructure. OMB has also issued guidance that requires an agency’s information systems investments to be consistent with its architecture. In addition, the federal Chief Information Officers Council, in collaboration with us and OMB, has published a framework that defines effective architecture management controls that successful organizations practice. 
While more than 20 different information systems support single-family operations, the homeownership centers currently rely on 7 major information systems to oversee lenders and contractors and provide customer service. These seven information systems are a combination of older systems acquired to support business processes in place before the centers were formed and newer systems implemented to better support current center operations and improve the usefulness of the older systems. In addition to these major systems, the centers have developed specialized databases to help them fulfill their missions. Also, each of the four centers uses a different telephone system to distribute calls and track workload data. The seven major information systems the centers use are a combination of legacy systems and newer systems implemented to make better use of these legacy systems and to better support the centers’ current operations. Information on the seven systems is shown in table 1. Of these seven systems, three are legacy systems that were implemented well before the homeownership centers were established—the Single Family Insurance System (SFIS), the Computerized Homes Underwriting Management System (CHUMS), and the Single Family Acquired Asset Management System (SAMS). Since these three legacy systems were implemented, FHA’s single-family business processes have changed dramatically. For instance, when CHUMS was implemented, FHA staff were responsible for all aspects of FHA’s mortgage insurance operations, such as evaluating and processing applications for mortgage insurance. Since then, FHA has increasingly delegated responsibility for the loan underwriting process to lenders, and FHA staff have assumed the role of verifying the underwriting process for completeness and compliance with FHA policies and regulations. Similarly, when SAMS was implemented, FHA staff were responsible for managing and selling properties acquired through foreclosure. Since the formation of the centers, the management and sale of these properties has been contracted out. The four newer systems that the centers use were implemented to improve the usefulness of the legacy systems and better support the centers’ current operations. HUD headquarters created the Single Family Data Warehouse to allow staff to query data from CHUMS, SAMS, and several other smaller legacy systems. Updated monthly, it provides historical case-level information that staff can use to complete trend analyses. As more responsibilities were delegated to lenders, headquarters created the FHA Connection to give lenders direct access to information in CHUMS and other FHA systems. Lenders can use the FHA Connection to request an FHA case number and assign an appraiser, among other things. Headquarters implemented the Approval/Recertification/Review Tracking System (ARRTS) and Neighborhood Watch to help FHA staff monitor lenders’ performance. Figure 1 illustrates the number of information systems used to support just one aspect of the centers’ operations, the origination of FHA-insured loans. In addition to the information systems provided by HUD headquarters, staff at each of the four homeownership centers have developed databases, including spreadsheets, to help them perform tasks that the headquarters-provided systems do not effectively support. The Denver center has developed 15 databases and the Philadelphia center 10 databases to enhance their staffs’ ability to track various program functions.
One of these databases was designed to help FHA staff oversee contractors hired to perform desk audits of the underwriting quality of individual loans already insured by FHA, known as technical reviews. In our April 2000 report on lender oversight, we reported that the centers lacked the necessary information systems to readily identify and track the technical review ratings of new direct endorsement lenders. Since our report was issued, all four centers have implemented the Underwriting Reports System (URS), a Microsoft Access database developed by the Denver center that, among other things, tracks the performance of lenders with direct endorsement authority and the performance of the contractors hired to perform technical reviews. The centers have also started using the Inspection Tracking System, another Microsoft Access database developed by the Denver center, to monitor the performance of contractors hired to manage and market the properties HUD acquires through foreclosure. This system is used to assign cases to property inspectors hired to inspect 10 percent of the properties handled by each of the management and marketing contractors and to track the results of their inspections. Each homeownership center was responsible for acquiring its own telephone system to meet the center’s communications needs and track workload data. As a result, each center uses a different system. These telephone systems distribute calls to customer service and program staff and track data on calls received. The centers use the reports that these systems generate to manage their customer service workload. For example, the Philadelphia center uses 10 reports to create management and performance charts that show, among other things, the total number of calls answered daily by the center in a given month and the average number of calls each individual answered per day in a given month. In addition, the Denver center uses weekly reports on abandoned calls and call activity to manage its workload. Each center provides headquarters with monthly data on the number and types of calls received. The willingness of the homeownership centers’ staff to learn and use multiple information systems and develop specialized databases to meet their responsibilities demonstrates a commitment to accomplishing critical tasks. However, the information and telephone systems the centers use have not kept pace with changes in single-family business processes and workload. Therefore, these systems do not adequately support the centers’ efforts to oversee lenders and contractors and provide customer service. Although the systems collect wide-ranging data on single-family operations, center staff often must use multiple systems or manipulate the data in order to obtain the information needed to carry out their missions. For example, they must use data from multiple sources to identify high-risk lenders for monitoring reviews and identify and investigate potential fraud cases. Further, the information systems do not readily provide center staff with all the data they need to monitor contractor performance and manage contracting costs. Regarding customer service, inadequate telephone systems and the limited reporting capabilities of some information systems have made it difficult for center staff to provide service to both their external and internal customers.
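The case-assignment role of the Inspection Tracking System described above can be pictured with a short sketch. The 10 percent inspection rate comes from this report; the data layout, the random sampling, and the round-robin assignment to inspectors are assumptions made for illustration, not a description of the Denver center's database.

```python
import math
import random
from collections import defaultdict

def select_inspection_sample(properties_by_contractor, inspectors, rate=0.10, seed=None):
    """Pick roughly `rate` of each contractor's properties and assign them to inspectors.

    `properties_by_contractor` maps a contractor name to a list of property IDs.
    The 10 percent rate is from the report; random sampling and round-robin
    assignment are illustrative assumptions.
    """
    rng = random.Random(seed)
    assignments = defaultdict(list)   # inspector -> list of (contractor, property ID)
    i = 0
    for contractor, properties in properties_by_contractor.items():
        sample_size = max(1, math.ceil(len(properties) * rate))
        for prop in rng.sample(properties, sample_size):
            inspector = inspectors[i % len(inspectors)]
            assignments[inspector].append((contractor, prop))
            i += 1
    return assignments

# Example with made-up data.
portfolio = {"Contractor A": [f"A-{n}" for n in range(250)],
             "Contractor B": [f"B-{n}" for n in range(80)]}
for inspector, cases in select_inspection_sample(portfolio, ["Smith", "Jones"], seed=1).items():
    print(inspector, len(cases), "cases assigned")
```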
FHA’s single-family information systems do not effectively help center staff target high-risk lenders for review or identify and investigate potential fraud cases—two activities integral to lowering insurance risk. In response to our recommendations, HUD recently incorporated risk factors when monitoring the performance of lenders. Center staff must now consider factors such as lenders’ default rates and loan volume, borrowers’ complaints, and reports of fraudulent activity when selecting lenders for review. However, they must go to multiple sources to obtain this information. As shown in table 2, the centers must compile data from three information systems and several nonautomated sources in order to develop the list of lenders to be reviewed each quarter. Although center staff are able to identify lenders for review using these multiple sources, they could save valuable time each quarter and be better assured that they are identifying the lenders of highest risk if this information were more integrated. Not only do FHA’s single-family information systems not facilitate the identification of high-risk lenders, they also do not help the centers identify predatory lending schemes before they become a major problem. Property flipping—buying a home at a low price and reselling it at an inflated price within a short time, often after making only cosmetic improvements—is one example of fraud in FHA loan origination activities. While property flipping is not always illegal, most known FHA cases involved fraudulent documentation provided by the lender and/or appraiser, which is illegal. The HUD Inspector General testified in June 2000 that these fraudulent property flipping activities could have been more quickly identified and the losses minimized had appropriate controls been in place. Similarly, according to the HUD Inspector General’s report on FHA’s fiscal year 2000 financial statements, FHA must continue to place more emphasis on early warning and loss prevention for single-family insured mortgages, including increasing its use and analysis of available data to monitor lenders. Although the report states that FHA has improved its early warning and loss prevention processes, it notes that the Inspector General has still found a high risk of fraud concentrated in certain geographic areas. FHA’s single-family information systems also do not facilitate investigation of predatory lending schemes once they have been identified. To protect FHA borrowers from abusive mortgage practices such as illegal property flipping, HUD has designated certain low-income neighborhoods with higher-than-normal foreclosure rates as “hot zones.” Under this new initiative, applications for FHA-insured loans in these hot zones, located in Atlanta, Baltimore, Chicago, Los Angeles, and New York, receive increased scrutiny. Because the centers’ current information systems were not designed to support this new initiative, the centers had to develop the following manual procedures for identifying potential fraud cases and tracking the results of property flip checks: The centers have to perform up to three different searches to determine if properties for which FHA insurance has been requested have been flipped.
Center staff search a commercial system containing property deed information, state and county web sites containing information on property transactions, and SAMS to determine if a property has been sold twice within 12 months, with the latest sales price exceeding the prior sales price by more than 30 percent. When searching each system or site, HUD staff must enter variations of the address (e.g., 135 N. Main, 135 North Main, and 135 N Main) to ensure a thorough search. Some of the reports the centers need to review hot zone cases have to be generated manually. For example, to create a list of properties to be checked in one hot zone, an Atlanta official has to pull two standard CHUMS reports—one listing cases by county and one listing cases by state—and merge them outside the system. Because it is not possible to extract information from ARRTS by zip code, a Santa Ana official must pull up information for the entire center, check each entry in the relevant state to determine if it is in one of the hot zone zip codes, and manually prepare a list. Once they have researched hot zone cases, the centers use a special database to track the results of their reviews. The Philadelphia center developed the Predatory Lending Monitoring System for use by all the centers to ensure that the results of their reviews in the designated hot zones are uniformly recorded. This system records case information and generates reports for HUD headquarters. It is a credit to the centers’ staff that they have developed these manual processes to investigate potential fraud. Automating all or part of these processes, however, might enable the homeownership centers to extend their flip checks beyond the five hot zones and prevent additional fraud. Although the centers have expanded their use of contractors, their information systems do not readily provide some of the performance data needed to monitor these contractors or the procurement and financial data needed to manage contract costs. For instance, center staff cannot always easily obtain the data they need to monitor endorsement contractors hired to review loan case files and issue mortgage insurance certificates. The Atlanta official responsible for overseeing these contractors has developed spreadsheets to help him use CHUMS data more effectively to gauge contractors’ performance, but to obtain and analyze the data he must go through a multi-step process. He first transfers standard CHUMS hard-copy reports to a disk for analysis. He then creates spreadsheets that highlight weekly production trends, such as how many days it takes the contractor’s staff, on average, to process mortgage insurance applications and how many errors the contractor’s staff have made. He uses this information to determine where bottlenecks are occurring and whether the endorsement contractor is making the same mistakes repeatedly. This effort is commendable, but it could be avoided if the CHUMS system readily generated this information. Center staff also cannot readily obtain and analyze data needed to monitor management and marketing contractors—contractors paid millions of dollars to manage and sell HUD’s single-family acquired properties. The homeownership centers have a number of resources upon which they can draw to aid them in making monthly assessments of these contractors’ performance. 
For instance, HUD hired third-party contractors to inspect 10 percent of the properties handled by each of the management and marketing contractors and to review 10 percent of the management and marketing contractors’ property case files each month by following a HUD checklist. The centers also use data from SAMS in making their monthly assessments. However, the following examples illustrate that very little of the data the centers receive is automated in a way that facilitates analysis: In fiscal years 1999 and 2000, the centers received the results of thousands of property inspections and reviews of property case files. However, a recent HUD Inspector General report on single-family property disposition activities at the Philadelphia center found that the results of these inspections and file reviews are not automated in a way that enables staff to use them effectively to monitor the performance of management and marketing contractors. According to the report, deficiencies identified by these inspections and reviews were not being included in monthly assessment reports because center staff did not have the time to analyze fully the results of third-party monitoring. In some cases, the staff were not even reviewing the reports because they did not understand the results and did not know how to use them as part of the overall monitoring process. As a result, the Inspector General concluded that the center needs to strengthen its monitoring of management and marketing contractors to ensure that performance deficiencies identified by third-party contractors are reported and tracked. Several officials whom we interviewed at the Philadelphia center questioned the usefulness of the monthly reports they received assessing the management and marketing contractors’ property case files. According to one center official, the summary reports provided by the original contractor were so bad that center staff used only the last page of the report, which indicated the number of files reviewed. In September 2000, the Department hired a new national contractor to conduct operational, management, and performance reviews of each management and marketing contractor. This new contractor is required to develop and maintain a national database file that can be used to perform detailed analyses of the results of the reviews at various levels of risk. Center staff cannot obtain from SAMS the information they need to monitor certain aspects of management and marketing contractors’ performance. One of the responsibilities the centers have delegated to these contractors is the sale of foreclosed properties under the Officer Next Door and Teacher Next Door programs. Under these programs, HUD allows police officers and teachers to purchase HUD-owned homes at 50 percent off the list price in HUD-designated revitalization neighborhoods. According to one center official, SAMS does not generate reports that list the properties sold through these two programs. Therefore, to oversee the programs, the center must rely on its management and marketing contractors to provide reports listing the properties sold under these programs. Work by HUD’s Inspector General indicates that the centers’ oversight of the Officer/Teacher Next Door programs has not been adequate. In two recent reports, the Inspector General concluded that the two programs were at high risk for noncompliance and abuse by homebuyers and that HUD had not established adequate management controls over the programs.
It found, among other things, that homebuyers had abused the programs by not fulfilling occupancy requirements and thus received unearned discounts of about $735,000. In response to the first report, HUD imposed an immediate 120-day suspension of sales under the Officer/Teacher Next Door programs, effective April 1, 2001. On August 1, 2001, HUD announced that it would resume these programs after having taken some corrective measures to prevent homebuyer fraud, including a review of program procedures. In addition to the difficulties previously discussed, the homeownership centers also cannot readily obtain the procurement and financial data they need to manage contract costs, which is essential to contractor oversight. In order for the centers to manage their contracting costs, they must know how much they are obligating for and spending on contracts. As the following examples illustrate, however, HUD’s information systems do not readily provide the centers with contract obligation and expenditure data: The centers could not readily provide us with the amounts they had obligated for single-family contracts in fiscal years 1999 and 2000. The Atlanta center provided us with the most complete information—reports from HUD’s procurement system that listed awarded active contracts for different time periods during the 2 fiscal years. The other three centers provided fewer reports from HUD’s procurement system and/or estimates. For example, staff at the Philadelphia center manually compiled lists of the contracts the center had awarded for the 2 fiscal years. The staff found this information to be useful and told us they intended to keep the lists current. Later, we were able to request and analyze data from HUD’s procurement system to determine how much HUD headquarters and each of the homeownership centers had obligated for single-family contract support in fiscal years 1999 and 2000. HUD headquarters took approximately 3 months to provide us information on the funds expended on different types of major single-family contracts during fiscal years 1999 and 2000. When provided, the information did not seem to match the obligation data that we had received and analyzed. HUD initially sought to get the expenditure information from the centers before finally retrieving the data from its financial system. The information HUD provided from this system, however, appears incomplete. Our analysis of data from HUD’s procurement system showed that HUD obligated about $465 million for one major single-family contract type, management and marketing services, in fiscal years 1999 and 2000. Yet, the data HUD provided from its financial system showed that expenditures for all major single-family contract types during the 2 fiscal years totaled only about $44 million. The centers’ ability to provide service to their external and internal customers has been hindered by inadequate telephone systems and the limited reporting capabilities of some information systems. While the centers’ telephone systems generate a number of reports, they do not provide all the information needed to manage the centers’ customer service workload. For example, according to the branch chief at Philadelphia’s customer service call center, the center’s telephone system does not produce any information showing the peak usage times—the times of day that generate the highest volume of phone calls. The Atlanta center director also expressed concern about his ability to track the total number of calls coming into the center. 
The center’s customer service telephone system tracks the number of calls the center receives on its toll-free number, but the calls made directly to program staff, which represent an estimated 25 percent of the center’s call volume, cannot be easily tracked. In addition, the centers’ telephone systems do not allow telephone calls to be transferred between centers. Finally, because of telephone network inadequacies, more than 14,000 single-family calls were blocked in fiscal year 2000. In October 2000, the Office of Single Family Housing hired a contractor to assess the single-family toll-free number information systems and operations and identify a more efficient and cost-effective means of providing customer service. In its March 2001 report, the contractor concluded that some of the single-family telephone systems were strained to the limit and that the systems in place would not provide the services that would be needed in the future. It recommended just one single-family toll-free number for the centers and a tracking system that would provide information on each interaction with the public and include responses to frequently asked questions. According to the Associate Deputy Assistant Secretary for Single Family Housing, the contractor’s recommendations are still under consideration. The centers’ information systems also do not effectively support their customer service activities, for they do not allow center staff to generate easily some of the reports their internal customers—center and headquarters managers—need to manage center operations. According to an August 2000 assessment of HUD’s loan origination systems needs, the extensive number of reports produced by the current information systems that support the loan origination process do not accurately serve the new FHA business model. Also, an overly rigid reporting process hinders staff’s ability to meet their responsibilities. For example, reports are delivered in hard copy on predetermined reporting schedules, and staff must enlist programming help if they want to specify reporting parameters. During our visits to the four centers, we found the following examples of center staff going to great efforts to produce needed reports: Staff at two of the four centers must generate reports manually because, while CHUMS has been modified to produce some standard reports by center, it continues to generate other reports by field office only. For a CHUMS user in Philadelphia to prepare a monthly report that shows whether the center is meeting its performance goals, the user must print out a standard report that provides information by field office and enter data for each of the 24 field offices within the center’s jurisdiction into a spreadsheet. It takes up to 1 full day each month to prepare this report. Similarly, the CHUMS user in Santa Ana responsible for generating monthly management reports spends 1 day each month manually summarizing data on the 16 field offices in the center’s jurisdiction. According to single-family officials, HUD plans to automate part of this process in fiscal year 2002. Center staff must print out SAMS reports for each management and marketing contract area in the center’s jurisdiction in order to supply monthly data to headquarters on the HUD-owned properties for which the center is responsible. Since 6 of the 16 contract areas fall within Santa Ana’s jurisdiction, the center must print out reports for each of the 6 areas each month.
SAMS’ ad hoc reporting feature is so difficult to use that center staff responsible for overseeing property disposition often must request ad hoc reports from headquarters if the system’s standard reports do not meet their needs. For example, when the Atlanta center temporarily assumed responsibility for the management and sale of certain properties after terminating a management and marketing contractor, center staff had to request an ad hoc report from headquarters every week. Rather than request an ad hoc report from headquarters, some SAMS users told us that they pull needed information from multiple reports intended for other purposes, which can be time-consuming. In addition, one SAMS user said that she sometimes makes do without the information. The Single Family Data Warehouse has helped center staff obtain needed information from multiple single-family legacy systems. However, the data in the warehouse are updated only once a month. According to Office of Single Family Housing officials, the warehouse is helpful to those who want to use historical data to develop trend analyses but is of limited use for day-to-day operational needs. Center staff who need up-to-date data would probably have to go directly to the individual data systems, according to these officials. Another limitation of the warehouse is that it is difficult for most staff to use. Because it requires extensive knowledge of database software, only a few staff at each homeownership center are proficient in using the warehouse. To make the warehouse more useful to the average user, HUD has developed a front-end query tool that enables staff to request basic information without having knowledge of database software. However, advanced queries still require program and database knowledge. To better ensure that FHA’s single-family information systems support the homeownership centers’ operations, HUD’s Office of the Chief Information Officer is developing an enterprise architecture, and its Office of Single Family Housing is planning improvements to specific information systems. An enterprise architecture defines an organization’s current (baseline) and desired (target) systems operating environments and provides a road map for moving between the two. It is an essential tool for effectively and efficiently reengineering business processes and for implementing and evolving their supporting systems. A well-defined enterprise architecture can assist in optimizing an organization’s business operations and the underlying information technology supporting these operations. As required by the Clinger-Cohen Act, HUD’s Office of the Chief Information Officer is developing an enterprise architecture for the Department. As planned, its enterprise architecture will define: the work HUD performs in achieving the Department’s mission, the information necessary to deliver programs and operate the Department, the automated systems that create or manipulate data to support HUD’s business, and the technology, such as hardware and software, necessary to support the Department’s activities. As part of its efforts to develop an enterprise architecture for the Department as a whole, the Office of the Chief Information Officer plans to complete its definition of FHA’s baseline architecture by the fall of 2001. By January 2002, it expects to define some aspects of the Department’s target architecture. 
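The baseline-to-target road map that an enterprise architecture is meant to provide can be pictured with a small sketch: compare what supports each business function today with what should support it in the target state, and the differences become the transition plan. The inventories below are toy examples, not HUD's actual architecture; the single integrated origination system shown is the replacement for the legacy loan origination systems discussed later in this report.

```python
def transition_road_map(baseline, target):
    """Compare baseline and target system inventories for each business function.

    Both arguments map a business function to the set of systems supporting it.
    Returns, per function, which systems to retire, acquire, and retain.
    """
    plan = {}
    for function in sorted(set(baseline) | set(target)):
        current = baseline.get(function, set())
        desired = target.get(function, set())
        plan[function] = {"retire": sorted(current - desired),
                          "acquire": sorted(desired - current),
                          "retain": sorted(current & desired)}
    return plan

# Toy inventories for illustration only.
baseline = {"loan origination": {"CHUMS", "SFIS"},
            "property disposition": {"SAMS"}}
target = {"loan origination": {"integrated origination system"},
          "property disposition": {"SAMS"}}

for function, steps in transition_road_map(baseline, target).items():
    print(function, steps)
```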
According to the Associate Deputy Chief Information Officer for Information Technology Reform, HUD will use this partial target architecture to guide its decisions on the next round of information technology projects to be submitted in March 2002. Once its target architecture is complete, HUD will develop an implementation plan for transitioning over time from the baseline to target architecture. Although HUD has made progress in developing an enterprise architecture, it has not yet put in place certain management controls for developing, implementing, and maintaining an enterprise architecture recommended by the Chief Information Officers Council. The Council’s guidance on best practices for successfully managing an enterprise architecture states that an agency should, among other things, have a written policy that governs the development, maintenance, and use of enterprise architecture and a committee or group that is responsible for directing, overseeing, and/or approving the enterprise architecture. According to its response to our survey of federal departments’ enterprise architecture efforts, HUD has drafted an architecture policy, but it has not been approved. Obtaining a clear mandate for the architecture in the form of an enterprise policy statement is a critical success factor and will be instrumental in gaining the buy-in and commitment of all organizational components of the enterprise, whose participation is vital to successfully developing and implementing the enterprise’s architecture. Also according to HUD, it plans to form a committee to oversee its enterprise architecture, although it has not yet done so. Such a committee should be an executive body whose members represent all stakeholder organizations and have the authority to commit resources and to make and enforce decisions for their respective organizations. Concurrent with the development of an enterprise architecture, HUD’s Office of Single Family Housing has developed plans to replace CHUMS, SFIS, and two other systems that support the origination of FHA-insured loans with one system that has greater capabilities. As designed, the new system would support a paperless insurance process, complete with virtual case binders and digitally signed mortgage documents. The Internet would be used as the mode of communication among FHA, its business partners, and service providers. According to a contractor’s cost/benefit analysis, HUD could save $70 million by replacing the four old systems with a new system that would operate for 10 years. HUD included funding to begin development of this new system in its proposed fiscal year 2002 budget. Although plans for a new loan origination system have progressed, HUD has not yet completed its enterprise architecture or assessed the business processes that the new system should support. HUD’s Office of Single Family Housing has developed plans for a new system and contracted for a cost/benefit analysis, yet the Office of the Chief Information Officer does not plan to have even a partially completed target architecture until January 2002. Furthermore, the Office of Single Family Housing did not assess the current loan origination processes in place at the centers before laying the groundwork for a new loan origination system. As recently as early August 2001, it was planning to contract out preparation of a functional requirements document for the new system.
It decided later that month, however, to delay acquisition of the new system until after it had assessed the current loan origination process. As part of this review, the Office of Single Family Housing plans to assess what information is needed to support the process as well as research the best way to maintain case binders and generate mortgage insurance certificates. Since HUD has not yet completed its enterprise architecture or examined single-family business processes, it is premature to assess whether these efforts will fully address the centers’ information systems needs. However, ensuring that single-family business processes are reviewed and HUD’s enterprise architecture is completed before attempting to acquire a new system would be in accordance with OMB guidance, which requires an agency’s information systems investments to be consistent with its architecture. Our experience with federal agencies has shown that attempting to define and build major information technology systems without first completing an enterprise architecture often results in systems that are duplicative, are not well integrated, are unnecessarily costly to maintain and interface, and do not effectively optimize mission performance. The Office of Single Family Housing also plans to enhance the Single Family Data Warehouse. For example, it plans to add information on nonprofit agencies that participate in FHA’s programs and on FHA-insured reverse mortgages. In addition, it intends to expand the front-end query tool that enables users to query the warehouse without having to use database software. However, it has no plans to update the data in the warehouse more often than once a month; therefore, the warehouse still will not be able to provide data essential to meeting day-to-day operational needs. Given the multibillion-dollar insurance risk that FHA assumes annually, it is critical that the agency’s single-family information and telephone systems help it carry out its responsibilities efficiently and effectively. However, the information and telephone systems in use at FHA’s four homeownership centers do not support FHA’s current business processes or efficiently supply FHA staff with necessary information. Center staff have demonstrated dedication and a willingness to overcome problems with these systems. Still, nonintegrated systems and cumbersome reporting mechanisms make it difficult for them to obtain the information needed to oversee lenders and contractors and provide timely and consistent customer service. Also, because FHA’s information systems do not share information, time and effort must be spent pulling together data needed for routine oversight or customer service purposes, and this lengthens response times and increases costs. Staff spend time learning, using, and working around information system problems—time that could be spent in more productive ways. Furthermore, while it seems that the Single Family Data Warehouse should alleviate some of these problems, it is updated only monthly and requires special technical expertise to extract all but basic reports. These problems indicate that new single-family information and telephone systems are necessary to support homeownership center operations and reduce insurance risk. However, HUD’s Offices of the Chief Information Officer and Single Family Housing have to work together to acquire new systems for the centers. 
Currently, as required by the Clinger-Cohen Act, the Office of the Chief Information Officer is developing an enterprise architecture to better ensure that HUD’s information systems support its business processes. However, HUD has not yet put into place certain architecture management controls recommended by the Chief Information Officers Council. Only if developed and implemented effectively will HUD’s enterprise architecture help ensure that the centers have the information and telephone systems necessary to support their efforts to oversee lenders and contractors and provide customer service. Meanwhile, the Office of Single Family Housing has designed a new loan origination system and made plans to assess the loan origination process at the centers before acquiring the new system. While we agree that assessing single-family business processes before acquiring new systems is prudent, any plans to reengineer single-family business processes or improve single-family information systems should fall within the framework of HUD’s enterprise architecture. If this does not occur, HUD risks acquiring systems that are not well integrated and do not effectively support the centers’ efforts to oversee lenders and contractors and provide customer service. To address the information system challenges facing HUD’s homeownership centers, we recommend that the Secretary of Housing and Urban Development direct the Chief Information Officer and Assistant Secretary for Housing-Federal Housing Commissioner to: Implement the best practices for enterprise architecture management recommended by the Chief Information Officers Council, including forming an enterprise architecture steering committee and formulating an enterprise architecture policy; Continue delaying any sizable single-family systems acquisition or development until the Department’s enterprise architecture is complete; and Ensure the development of an enterprise architecture that reflects the Office of Single Family Housing’s analysis of business processes and data needs at the homeownership centers and provides a framework for the future acquisition of single-family information systems. Finally, we recommend that the Secretary of Housing and Urban Development direct the Assistant Secretary for Housing-Federal Housing Commissioner to implement telephone systems that track the data, such as peak usage periods, that the centers need to manage their customer service workload. We provided copies of a draft of this report to HUD for its review and comment. In a letter from the Assistant Secretary for Housing-Federal Housing Commissioner (see app. II), HUD did not take issue with any of our findings or factual statements. It agreed with three of our recommendations and expressed concerns about one recommendation. HUD commented as follows on each recommendation: HUD stated that it plans to recharter its Technology Investment Board Executive Committee to include oversight of its enterprise architecture. This committee will then be asked to adopt the existing draft enterprise architecture policy after it has completed departmental clearance. While HUD agreed that its single-family information systems must be replaced or substantially overhauled in order to meet FHA’s business needs, it disagreed that any major efforts to improve these information systems should be suspended until HUD’s enterprise architecture is completed. 
It stated that its Offices of the Chief Information Officer and Housing will immediately review all systems development work planned for single-family systems in fiscal year 2002 and assess whether the development of any significant new capabilities for these systems should be deferred until the target enterprise architecture for Office of Housing systems is under development. In addition, it noted that the Office of the Chief Information Officer will ensure that the information technology project selection process for fiscal years 2003 and 2004 explicitly uses the target enterprise architecture as a major determining factor for selecting an information technology investment for funding. While these are positive steps, we still believe that HUD should delay any sizable single-family systems acquisition or development until the Department’s enterprise architecture is complete. Our experience with federal agencies has shown that attempting to define and build major information technology systems without first completing an enterprise architecture often results in systems that are duplicative, are not well integrated, are unnecessarily costly to maintain and interface, and do not effectively optimize mission performance. Therefore, we made no changes to our recommendation. HUD noted that the Offices of the Chief Information Officer and Housing will work together to ensure that the information needs of the homeownership centers are integrated into the design and architecture for the single-family business process and subsequent information technology. HUD stated that it plans to establish a single toll-free telephone number for FHA’s clients and to acquire new telephone equipment and tracking systems to capture information about the caller, the nature of calls, and the number of calls. We conducted our work at HUD headquarters and at all four of the Department’s homeownership centers in Atlanta, Denver, Philadelphia, and Santa Ana. We reviewed documents describing FHA’s single-family information systems and telephone systems. We interviewed officials from HUD’s Office of Single Family Housing, Office of the Chief Information Officer, and the four centers. Finally, we reviewed documents outlining HUD’s enterprise architecture and its plans for specific single-family information systems. We performed our work from July 2000 through September 2001, in accordance with generally accepted government auditing standards. We are sending copies of this report to the Chairman, Subcommittee on Housing and Transportation, Senate Committee on Banking, Housing, and Urban Affairs; the Chairman and Ranking Minority Member, Senate Committee on Banking, Housing, and Urban Affairs; the Chairwoman and Ranking Minority Member, Subcommittee on Housing and Community Opportunity, House Committee on Financial Services; and the Chairman and Ranking Minority Member, House Committee on Financial Services. We will also send copies to the Secretary of Housing and Urban Development; the Assistant Secretary for Housing-Federal Housing Commissioner; and the Director of the Office of Management and Budget. We will make copies available to others upon request. Please call me at (202) 512-2834 if you or your staff have any questions about this report. Key contributors to this report are listed in appendix III. 
Our objectives were to (1) describe the information systems used by the homeownership centers, (2) analyze the effectiveness of these systems in supporting the centers’ current operations, and (3) assess the Department of Housing and Urban Development’s (HUD) plans for the information systems used by the centers. To determine what information systems the homeownership centers use, we reviewed a list of HUD’s information systems and identified those information systems that support single-family operations. We then interviewed HUD officials to determine which information systems are the major single-family information systems. For the seven major single-family information systems, we obtained and reviewed documentation that described the purpose of and the information contained in each system. At each of the four homeownership centers, we interviewed the director, division heads, and system users to determine which information systems are used to support the center’s operations. Our interviews also focused on (1) the databases, including spreadsheets, the centers have developed to support their operations, and (2) the telephone systems the centers use. We obtained and reviewed documentation on these databases and telephone systems. To determine how effectively these systems support the centers’ current operations, we interviewed information system users at each of the four homeownership centers. We asked them about how they use FHA’s single-family information systems to accomplish their missions, the training they received on the information systems they use, the quality of the data in the information systems, and information systems’ limitations. Our interviews focused on the Computerized Homes Underwriting Management System, the Single Family Acquired Asset Management System, the Approval/Recertification/Review Tracking System, and the Single Family Data Warehouse. We reviewed the procedures followed by the centers to target high-risk lenders for review and identify potential loan fraud cases in order to determine the extent to which the centers can rely on their information systems for data. Similarly, we obtained and analyzed contract obligation data from the HUD Procurement System and contract expenditure data from HUD’s Central Accounting and Program System to determine the systems’ ability to readily provide complete and accurate contract cost information. We also reviewed a March 2001 HUD-sponsored assessment of the centers’ toll-free telephone information systems and operations for information on the strengths and weaknesses of the systems and their ability to provide efficient and cost-effective customer service. Finally, we reviewed reports on our prior work at the centers and reports issued by HUD’s Inspector General for information regarding HUD’s oversight of lenders and its management and marketing contractors. To assess HUD’s plans for the information systems used by the centers, we interviewed HUD officials and reviewed documents outlining HUD’s enterprise architecture and its plans for specific single-family information systems. Specifically, we interviewed officials from the Office of the Chief Information Officer and reviewed documents to determine the status of HUD’s efforts to develop an enterprise architecture. These documents included the Chief Information Officers Council’s A Practical Guide to Federal Enterprise Architecture Version 1.0 and HUD’s response to our survey of federal departments’ enterprise architecture efforts.
We also interviewed officials from the Office of Single Family Housing regarding HUD’s plans for individual single-family information systems. Finally, we reviewed documents outlining plans to replace the Computerized Homes Underwriting Management System and three other information systems with one integrated system. We performed our work from July 2000 through September 2001 in accordance with generally accepted government auditing standards. In addition to those named above, Bess Eisenstadt, Daniel Gage, Cathy Hurley, Barbara Johnson, John McGrail, Stanley Ritchick, Stewart Seman, Paige Smith, and Alwynne Wilbur made key contributions to this report.

The Federal Housing Administration’s (FHA) homeownership centers use more than 20 different information systems implemented by the Department of Housing and Urban Development (HUD) headquarters, including seven major systems, databases developed by the centers, and various telephone systems. Some of these technologies were implemented before FHA formed the centers and transferred some responsibilities to lenders and contractors. Others were implemented later, to help FHA staff oversee lenders and contractors and provide customer service. Although homeownership center staff have developed specialized databases to help them better meet their responsibilities, neither FHA’s single-family information systems nor its telephone systems adequately support the centers’ efforts. To better ensure that FHA’s single-family information systems support current center operations, HUD is developing a systems blueprint, or enterprise architecture. HUD’s Office of the Chief Information Officer plans to finish defining the current capabilities of FHA’s information systems by the fall of 2001 and to have partially defined the desired capabilities of all the Department’s information systems by January 2002.
The production and maintenance of nuclear weapons produces a variety of waste by-products, including transuranic waste. DOE is storing almost 100,000 cubic meters of transuranic waste, primarily at six sites, and expects to generate another 78,000 cubic meters of the waste over the next several decades as it cleans up its nuclear facilities. About 97 percent of the existing volume of transuranic waste is stored in standard 55-gallon steel drums and other types of containers. This waste, which typically consists of contaminated equipment, tools, protective clothing, and scrap materials, is called “contact-handled” waste because it can be handled with limited precautions to protect workers from radiation. The remaining volume of waste is called “remote-handled” waste because it emits higher levels of penetrating radiation that requires special shielding, handling, and disposal procedures. In 1979, the Congress authorized DOE to develop WIPP expressly to demonstrate the safe disposal of radioactive wastes resulting from U.S. defense activities and programs. By the end of 1988, DOE had constructed all surface facilities, shafts leading to the underground disposal area, and 7 of 56 planned disposal rooms. DOE had not, however, established a clear link between its scientific program to conduct underground tests at WIPP with transuranic waste and its plans to demonstrate compliance with EPA’s disposal regulations. In October 1992, the Congress passed the Waste Isolation Pilot Plant Land Withdrawal Act. Among other things, the act authorized DOE to conduct testing at WIPP with limited quantities of contact-handled waste after EPA had (1) approved DOE’s testing and waste retrieval plans, (2) issued final disposal regulations for radioactive wastes, (3) determined DOE’s compliance with the terms of EPA’s “no migration” determination, and (4) found that the planned tests would provide data “directly relevant” to a certification of compliance with the disposal regulations or with RCRA. Also, before DOE may dispose of transuranic waste in WIPP, DOE must apply for and obtain from EPA a certification of WIPP’s compliance with the agency’s disposal regulations. In conjunction, EPA was required to establish the criteria for issuing a certificate of compliance to DOE. Finally, DOE may not begin disposing of waste in WIPP until 180 days after it has received a compliance certificate from EPA. DOE must also meet the requirements for disposing of hazardous wastes as defined under RCRA because, the Department estimates, over 60 percent of its stored transuranic waste also contains hazardous waste. The land disposal restrictions in EPA’s regulations implementing RCRA generally prohibit the disposal of untreated hazardous waste unless the agency makes a “no migration” determination. To receive such a determination for WIPP, DOE must demonstrate that there will be essentially no migration of hazardous waste from the repository’s boundary for as long as the waste remains hazardous. Also, because New Mexico is authorized by EPA to carry out a state RCRA program, DOE must obtain a permit from New Mexico for the design, maintenance, operation, and closure of WIPP. If DOE meets New Mexico’s requirements, the state expects to issue a draft permit for public comment by late summer 1996 and a final permit by June 1997. 
In addition to these key requirements, DOE must comply with other applicable federal environmental laws, such as the Federal Facility Compliance Act of 1992, which pertains to the treatment and disposal of waste at the sites where the waste is stored and/or generated. In 1993, DOE and EPA concentrated on the details of the planned waste disposal tests at WIPP and the relevance of the tests to a future compliance determination. At that time, DOE expected to complete the tests, apply for and receive a compliance certificate, and begin disposing of waste in the repository in 2000. In October 1993, however, DOE announced that by substituting waste tests conducted in laboratories for the planned tests in WIPP, it could open the repository 2 years earlier. The accelerated schedule has created a more dynamic, higher-risk environment for completing preparations for both the compliance application and disposal operations because more interdependent activities had to be conducted in parallel, rather than in sequence, with little time available to make adjustments on the basis of the results of individual activities. It is unclear whether DOE can accomplish all of the work needed to comply with EPA’s regulations for disposing of transuranic waste at WIPP on a schedule that would enable the Department to open the repository in April 1998. (See fig. 1 for DOE’s most recent schedule.) One reason is the disparity between the contents of DOE’s draft application for a certificate of compliance and EPA’s disposal regulations and the related criteria for deciding whether to issue the certificate. In addition, DOE was in the process of analyzing the results of the completed and ongoing scientific research that is to feed into the compliance application before it can submit a complete application. DOE, in its 1995 draft application, did not address many of EPA’s compliance criteria. This situation occurred, in part, because DOE submitted the draft application to EPA shortly after the agency had issued its proposed criteria for public comment in January 1995, well before EPA issued the final criteria in February 1996. Although the WIPP Land Withdrawal Act required EPA to issue the final criteria within 2 years of its enactment, or by October 30, 1994, the delay in issuing the criteria occurred, in part, because of the agency’s emphasis in 1993 on reviewing DOE’s plans for the tests with waste at WIPP and on issuing the agency’s disposal regulations. In addition, according to the director of EPA’s WIPP program, the agency took some additional time to complete the criteria so that it could ensure that the public had an adequate opportunity to participate in developing the criteria. When DOE eliminated the proposed tests in the WIPP underground, however, timely issuance of the compliance criteria became important to achieving DOE’s accelerated timetable for opening WIPP. In April 1994, when DOE announced that it planned to begin operating WIPP in mid-1998, it assumed that EPA would issue the final compliance criteria in January 1995 and that DOE would submit a draft compliance application to EPA 2 months later. EPA, however, did not issue the proposed criteria for public comment until January 1995 and, at that time, estimated that it would take at least 1 year to issue the final criteria. Nevertheless, DOE submitted part of its draft compliance application to EPA in March 1995 and the remaining part of the application 4 months later. 
DOE recognized and informed EPA, the state of New Mexico, and other parties that its draft application was incomplete but sought these parties’ comments to help it prepare to submit its final compliance application in December 1996 and receive a certificate of compliance 1 year later. (In October 1995, DOE amended its schedule, including moving the planned date for submitting its final application to October 1996.) In remarks prefacing the draft application, DOE noted that because EPA had issued the proposed compliance criteria a few months earlier, the Department was not able to follow all of the criteria in preparing the draft application. DOE also noted that the draft application did not include details on many of the subjects addressed in the draft criteria. Among other things, these subjects included the results of experiments in progress to support the final calculations on WIPP’s performance as a repository, information on the potential barriers to the release of the waste materials from the repository, seals for the shafts leading from the surface to the underground area, and the active institutional controls planned for the site after the repository is closed. Finally, DOE stated that its draft application did not contain analyses demonstrating that WIPP could meet the requirements of EPA’s disposal regulations for protecting groundwater from radioactive materials. In January 1996, after reviewing the draft application, EPA advised DOE that the application lacked the necessary detail for an appropriate and thorough review for technical adequacy. Although the agency refrained from commenting on the draft application’s completeness, it provided DOE with over 370 detailed comments on apparent deficiencies in the application. For example, the agency said the application lacked the necessary detail on the characteristics of the WIPP site, the waste to be disposed of in the repository, and barriers to the release of radioactive materials from the repository that DOE might engineer to enhance the repository’s performance. (See app. I for examples of the deficiencies in DOE’s draft application that were observed by EPA and New Mexico’s Environmental Evaluation Group.) Other parties that are likely to provide comments to EPA on DOE’s application for a certificate of compliance also expressed concern that DOE’s draft application was incomplete. The 1992 WIPP Land Withdrawal Act provided special status to New Mexico, the Environmental Evaluation Group, and the National Academy of Sciences. The act required DOE to provide these parties with free and timely access to the data on health, safety, or environmental protection issues at WIPP and authorized the parties to evaluate and publish analyses of DOE’s regulatory compliance activities. In a March 1996 report, the Environmental Evaluation Group stated that the draft application could not be considered an adequate draft document for demonstrating compliance with EPA’s disposal regulations because the application lacked substantial features that would be expected in the final application. According to the Group, the document resembled the framework rather than a draft of an application because it lacked a logical presentation of the proofs of compliance with EPA’s disposal regulations. Even the most basic information, the Group said, is absent from the draft application. 
Among other deficiencies, the Environmental Evaluation Group stated, the application did not adequately describe the waste that DOE would dispose of in WIPP or discuss the problems that the Department had been encountering in documenting the physical, chemical, and radiological characteristics of this waste. Thus, the Group pointed out, the assessments of the repository’s performance described in the application were based on “assumed” rather than actual characteristics of the waste. In October 1995, New Mexico also commented to EPA on DOE’s draft application. In many cases, the state said, information was either lacking or so preliminary that the state could not meaningfully comment on DOE’s treatment of various issues. Moreover, EPA’s final criteria contained provisions that DOE, in commenting on the draft criteria, had objected to and other provisions that were not in the agency’s draft criteria. DOE will have to address these provisions in its final application. One example concerns the assumptions that DOE must use in addressing the likelihood and possible types of human intrusion at WIPP, such as mining and drilling. EPA’s final criteria established assumptions about the types and frequency of mining and drilling that DOE will have to use in its final application. The choice of appropriate assumptions had been an area of contention among DOE, EPA, and others, including the Environmental Evaluation Group. For this reason and because DOE has not yet addressed the issue of human intrusion in accordance with EPA’s final criteria, the Department’s analyses of the mining and drilling issues in its final application are likely to receive close review by EPA and other parties who may be commenting on the application. DOE will have to resolve many issues over the next several months if it is to submit, by October 1996, an application for a certificate of compliance that will withstand the scrutiny of EPA, which will review the application’s completeness and quality, and of other parties, which may comment on it. According to the Assistant Manager for Regulatory Compliance at DOE’s Carlsbad Area Office, the Department was making substantial progress toward completing an application for a certificate of compliance on schedule. In addition, the director of EPA’s Radiation Protection Division said that DOE is giving priority to issues the agency raised in its review of the draft application. Whether DOE can successfully resolve the outstanding issues in the next few months is uncertain because DOE’s final technical positions on WIPP have been evolving since the submission to EPA of the draft compliance certification application. According to the assistant manager for regulatory compliance in DOE’s Carlsbad Area Office, the Department intends to send EPA sections of its final application for early review and comment over the next several months to facilitate EPA’s review of the completeness of the application when DOE submits the application to EPA in October 1996. The assistant manager also stated that the application will document DOE’s current technical positions on WIPP. As of early May 1996, the Director of EPA’s WIPP Center told us that the EPA staff had received one section of the application dealing with the site’s characteristics and geological features.
However, for sections of the final application that document DOE’s compliance with the disposal regulations, DOE was making the final decisions about the details of the conceptual and computational models that it will use to simulate and assess the performance of the repository over the required 10,000-year period. The performance assessment is critical to demonstrating that neither radioactive nor hazardous materials will migrate from the repository’s boundary. At the same time, DOE was feeding the current results from completed and ongoing research projects into the performance assessment calculations, parts of which have already begun. In addition, to satisfy EPA’s compliance criteria, DOE is implementing a program to ensure that its key scientific and regulatory compliance programs and activities meet generally accepted standards of quality in the nuclear industry. Some of the data DOE has collected predate the Department’s adoption of the quality standards that EPA has prescribed in its final compliance criteria. Therefore, DOE is now attempting to demonstrate, using the procedures permitted by the criteria, that the data to be used in the compliance application, which the Department collected before it implemented the required quality assurance program, meet the quality assurance standards for existing data. According to DOE’s Carlsbad Area Office, about 10 percent of the data that the Department collected in prior years would, to the extent that the data are used to support the final WIPP compliance analysis, have to be qualified by either of two approaches. The first approach is to demonstrate that the data were collected under standards that were equivalent to DOE’s current quality assurance program. The second approach is to use alternative means of qualification, such as peer review, that are permitted by EPA. These officials added that the qualification work is currently on schedule to support the submission of the final application to EPA. Finally, in February 1995, DOE asked the National Academy of Sciences’ Committee on WIPP to evaluate the key scientific studies and modeling supporting DOE’s ongoing assessments of the repository’s performance. The Committee’s study would provide DOE with feedback on several important aspects of the assessment program, such as the hydrology of the rock formations where the repository is situated, the use of peer review and expert judgment in DOE’s scientific program, and studies of the potential effects on the repository’s performance of gases that might be generated from waste materials. As of May 1996, the Committee anticipated issuing its report late in July of 1996. Officials at DOE’s Carlsbad office stated that until they have received and reviewed the Committee’s report, they do not know what actions they might have to take if the Committee finds deficiencies in DOE’s research program or recommends that DOE perform additional research. Moreover, DOE has already cut back the scope of its research program, and by the time the Committee releases its report, DOE expects to be nearly finished with its calculations of WIPP’s compliance with EPA’s disposal regulations. For the first several years of WIPP’s operations, DOE expects to dispose of contact-handled waste at less than one quarter of the design disposal rate of the repository. 
The disposal operations in these years will be constrained by the number of transportation containers that are available and the lack of facilities and equipment at the storage sites for preparing waste for shipment and disposal. DOE does not expect to begin disposing of remote-handled waste until 2002. DOE estimates that it has about 97,000 cubic meters of contact-handled transuranic waste in storage and projects that it will generate almost 56,000 cubic meters more of this waste. (See table 1.) More than 98 percent of the total anticipated volume of contact-handled waste is stored or will be generated at six facilities. DOE’s Carlsbad Area Office plans to ship contact-handled waste to WIPP from the Idaho, Rocky Flats, and Los Alamos sites in 1998 and from the Savannah River site in 1999. Thereafter, the office may also make shipments from other storage sites. The office expects to make almost 1,300 shipments to WIPP at an accelerating rate over the approximately 5-year period ending December 31, 2002. (See table 2.) During that same period, the repository is expected to be operationally capable of receiving and disposing of over 1,900 shipments of waste. Thus, the planned disposal rate is about two-thirds of the expected capability to dispose of waste in WIPP through 2002. One constraint on DOE’s initial disposal capability is the number of available transportation containers. Several years ago, when DOE expected to begin operating WIPP earlier as a test facility, the Department procured 15 containers for transporting contact-handled waste. Since then, DOE has concentrated its budget for WIPP on the scientific and technical issues that need to be resolved to demonstrate compliance with EPA’s disposal regulations and has not procured additional containers. DOE expects to acquire more containers in 2000—enough to make 10 shipments per week to WIPP by the end of that year—and to have a total of 60 containers by 2002—enough to make 17 shipments per week. A second operational constraint is the extent to which DOE’s storage sites are limited in their ability to prepare contact-handled waste for shipment and disposal. Waste managers at each site must be able to (1) retrieve the waste and put it in temporary storage areas; (2) characterize, or identify the constituents of, the waste; (3) identify the waste that meets the criteria for shipping and disposal; (4) treat the waste, as necessary, to make it suitable for shipment and disposal; and (5) package the waste for shipment and load the transportation containers onto transport vehicles. At present, according to DOE’s Carlsbad Area Office, only the Idaho and Rocky Flats sites are capable of completing these steps for a limited amount—about 4,500 cubic meters—of the existing 97,000 cubic meters of contact-handled waste. Each of DOE’s major storage sites needs facilities for characterizing, repackaging, treating, and/or loading waste for transportation. At some sites, waste managers are taking interim measures, such as identifying the waste that does not require treatment, to prepare enough waste for shipment and disposal to meet the Department’s obligations for managing wastes under the Federal Facility Compliance Act and its schedule for opening WIPP. At Los Alamos, for example, waste managers expect to have mobile characterization and transportation loading equipment in place by 1998; therefore, DOE’s Carlsbad office estimates that the site may have about 600 cubic meters of waste ready to ship in 1998. 
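The shipment figures above allow a quick arithmetic check of the stated two-thirds relationship and of the average shipping rate implied by the roughly 5-year period. The sketch below is illustrative only; the 5-year length of the period and the per-week average are simplifying assumptions rather than figures from the report.

```python
# Rough check of the shipment figures cited above: about 1,300 planned
# shipments versus a capability to receive over 1,900 shipments through 2002.
planned_shipments = 1_300      # shipments DOE plans through December 31, 2002
capable_shipments = 1_900      # shipments WIPP could receive over the same period
years = 5                      # approximate length of the period (assumption)

print(f"Planned share of capability: {planned_shipments / capable_shipments:.0%}")   # 68%, about two-thirds
print(f"Average planned rate: {planned_shipments / years:.0f} per year, "
      f"about {planned_shipments / (years * 52):.0f} per week")                      # ~260 per year, ~5 per week
```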
If funds are made available for the necessary equipment at the Rocky Flats site, the site's waste managers expect to have about 1,000 cubic meters of waste ready for shipment and disposal in 1998. (See app. II for a discussion of each of the six major storage sites.)

In connection with the Idaho site, DOE recently agreed, in a settlement of litigation with Idaho, to ship 3,100 cubic meters—about 15,000 drums—of contact-handled waste from Idaho by the end of 2002. Making two shipments a week from this facility—up to about 4,370 drums of waste per year—could enable DOE to meet its agreement with the state. It is uncertain, however, whether DOE will be able to prepare the waste for shipment at that rate. As recently as September 1995, site officials estimated that they would have only about 700 drums of waste ready by June 1998. Since then, however, these officials have reinterpreted DOE's criteria covering the requirements that waste must meet to qualify for shipment to and disposal in WIPP. As a result, they now expect that by mid-1998 they will be able to certify that at least 2,000 drums of waste meet all of the criteria for shipment and disposal and that subsequently they will be able to certify another 3,500 drums per year.

For remote-handled waste, DOE does not expect to have the essential facilities and equipment in place for preparing and shipping the waste to WIPP until at least 2002. Most of the stored waste is at Oak Ridge, but DOE expects to generate much more of this waste at its Hanford site (see table 3). DOE's schedule for disposing of remote-handled waste may present an operational problem at WIPP, particularly if DOE is unable to begin disposing of the waste from Hanford for many years. By 2002, at the earliest, DOE may have a new facility at its Oak Ridge site that is ready to begin retrieving and preparing almost 800 cubic meters of remote-handled sludge for disposal in WIPP. The Department has no firm plans, however, for when and how it will prepare to recover, treat, and dispose of the remaining remote-handled waste at Oak Ridge. At Hanford, moreover, site officials do not have plans for preparing remote-handled waste for disposal; however, they expect to begin disposing of this waste within 20 years. The latter waste will largely consist of equipment that is now part of the system of underground tanks that store high-level radioactive waste from the earlier production of plutonium at the site. Currently, site officials expect that most of the remote-handled waste may eventually be decontaminated and disposed of at the site and that only about 3,470 cubic meters of this waste will be shipped to and disposed of in WIPP. DOE is negotiating with the state of Washington and EPA milestones that will affect the shipment of transuranic waste.

DOE designed WIPP so that remote-handled waste would be disposed of in the walls of storage rooms before contact-handled waste is placed in these rooms. Because of the delay in disposing of remote-handled waste, less of the repository's storage area will be available when DOE is ready to dispose of this waste. According to DOE's manager of remote-handled waste, the Department is reviewing alternatives that would make up for the loss of disposal space for remote-handled waste in the initial years of WIPP's operations.
He added that an alternative would not be ready in time for inclusion in DOE's compliance application to EPA; therefore, if DOE wants to pursue an alternative disposal approach, it would seek an amendment to the compliance certificate after WIPP opens. Moreover, if DOE is not able to dispose of all of the remote-handled waste within the walls of the waste-storage rooms for contact-handled waste, it may have to mine new storage areas in the repository specifically for disposing of remote-handled waste. This effort would increase the cost of operating the repository.

Looking beyond the first few years of WIPP's operations to the 25- to 35-year period over which DOE expects to ship waste to WIPP and emplace the waste in the repository for permanent disposal, DOE will not be able to significantly increase the rate at which it emplaces transuranic waste in WIPP until it has (1) developed the facilities and equipment at each site for retrieving, processing, and packaging the waste for shipment and (2) procured transportation containers in greater numbers and varieties. In a 1995 report projecting the potential costs of cleaning up its nuclear sites, DOE estimated that the required investment in facilities and containers for transuranic waste and related operations over several decades will total more than $11 billion. In addition, DOE estimated that the waste transportation and disposal operations at WIPP could cost almost $8 billion, for a total cost of about $19 billion to manage and dispose of transuranic waste. According to DOE's Carlsbad Area Office, a 1996 updated version of the baseline cost report now being prepared will increase the estimated cost to about $29 billion.

The Idaho site illustrates the need for DOE to develop the ability to characterize, treat as necessary, and prepare larger quantities of waste for shipment before it can begin to make significant headway in disposing of the contact-handled waste stored at the site. Officials at that site estimate that about 58 percent of the waste is stored in boxes that are incompatible with existing waste characterization facilities. Other major storage sites, except for Los Alamos, are in similar situations. DOE will also need to develop other types of transportation containers for much of its contact-handled waste. DOE estimates that about 26 percent of the waste can be efficiently transported in the existing type of container. About 41 percent of the waste is expected to be too heavy for efficient transport in the existing type of container, and DOE plans to develop and procure new containers for this waste. DOE has not yet decided how it will transport the remaining contact-handled waste.

How soon DOE can bring these essential facilities and equipment on line and operate them depends on the availability of funds at a time when DOE faces significant competing priorities for limited resources. For fiscal years 1996 through 2000, DOE expects to reduce its overall budget by more than $14 billion when compared with earlier budget projections. This reduction includes $4.4 billion in its environmental management programs. The precise implications of DOE's planned or other budget reductions for the timing and extent of WIPP's operation, and for DOE's ability to prepare the existing and projected inventories of waste for shipment to and disposal at WIPP, are unclear. Tighter future budgets could further restrain DOE's ability to prepare, ship, and dispose of transuranic waste at the planned rates.
In these circumstances, WIPP is likely to remain open, at a less-than-optimal operating level, for many years beyond the currently planned operating life of 35 years. According to DOE's estimates, each additional year that it must operate the repository could cost about $130 million.

We provided a draft of our report to DOE and EPA for their review and comment. DOE provided written comments on this report, which appear in appendix III. We also met with the Directors of EPA's Division of Radiation Protection and WIPP Center (within the agency's Office of Radiation and Indoor Air) and the agency's Engineer Director, Permit and States Program Division, Office of Solid Waste, to obtain their comments on this report.

DOE said the tone of our draft report was pessimistic, while the Department is optimistic about its transuranic waste management program. DOE is optimistic, it said, because all work is known, planned, and on schedule; the success rate in accomplishing scheduled activities and milestones is 100 percent. Specifically, DOE pointed to its filing of a draft compliance application with EPA as evidence of the success of its strategy to achieve the maximum amount of input to the final application. We recognized in our report that DOE had met its past milestones for opening WIPP, such as submitting a draft compliance application to EPA. In our view, however, the effectiveness of the Department's efforts to open WIPP depends on its ability to submit an application for a compliance certificate to EPA that is of sufficient completeness and quality to enable the agency to issue a certificate to DOE within the 1-year period specified in the WIPP Land Withdrawal Act. Whether DOE will meet this requirement remains to be seen.

DOE also said our draft report failed to recognize that its plans to bring WIPP to full operation meet the resource needs of the Department and exceed all requirements at the storage sites that stem from agreements between DOE and the states. If, over the first 5 years of WIPP's operation, DOE is successful in shipping and disposing of the quantities of waste currently planned, then it should meet the short-term requirements of the sites where the waste is stored. As our report discusses, however, there is some uncertainty about the Department's ability to meet its short-term disposal objectives and even greater uncertainty over the long term. For example, tight budgets in future years could restrain DOE's ability to dispose of transuranic waste at currently planned rates. Finally, DOE provided other specific clarifying comments that we incorporated as appropriate. The EPA officials agreed with our report and suggested changes intended to clarify the agency's role and authority in regulating WIPP. We incorporated these suggested changes in the report as appropriate.

We performed our review at WIPP and at the offices of DOE and the state of New Mexico in Albuquerque, Carlsbad, and Santa Fe. We also visited DOE's storage sites for transuranic waste in Colorado, Idaho, Tennessee, and Washington. In addition, we performed our review at the headquarters of DOE and EPA in Washington, D.C. We conducted our review from June 1995 through May 1996 in accordance with generally accepted government auditing standards. (See app. IV for details of our scope and methodology.)

As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter.
At that time, we will send copies to the appropriate congressional committees; the Secretary of Energy; the Administrator of EPA; and the Director, Office of Management and Budget. We will also make copies available to others on request. Please call me at (202) 512-3841 if you or your staff have any questions. The major contributors to this report are listed in appendix V.

Before the Department of Energy (DOE) can dispose of transuranic waste in the Waste Isolation Pilot Plant (WIPP), it must obtain, among other things, the Environmental Protection Agency's (EPA) certification that the repository will comply with the agency's regulations for disposing of transuranic waste in WIPP. The disposal regulations specify the requirements for containing the waste, protecting individuals and groundwater from radiation, and providing additional assurances to reduce the likelihood of a release of radiation from the repository.

As mandated by the Land Withdrawal Act of 1992, EPA developed the compliance criteria to clarify the requirements of the disposal regulations and required DOE to provide the agency with specific types of information in the Department's compliance application. The compliance criteria implement the containment, individual protection, groundwater protection, and assurance requirements of the disposal regulations. In addition, the criteria contain public participation requirements describing how the agency will involve the public in the certification rulemaking process and general requirements covering subjects such as the extent to which the waste needs to be characterized (analyzed to determine its contents) before it is disposed of, guidance on the computer models and codes that simulate the repository's performance, and demonstrations that the data and assumptions developed by DOE have been adequately peer reviewed.

According to EPA and others, DOE's mid-1995 draft application for a certificate of compliance did not include sufficient detail to address the elements of the agency's disposal regulations and proposed criteria of January 1995. Also, the final compliance criteria of February 1996 contained provisions that DOE had objected to in commenting on the draft criteria or that EPA had not included in the draft criteria.

EPA commented that DOE's draft application lacked adequate technical information and emphasis on the capability of the repository site to adequately isolate the waste from the surrounding environment. For example, EPA noted that although the application described the geology of the site, it did not show how this information had been transformed into the mathematical models that are used to assess how the repository would perform over the 10,000-year period covered by the containment requirements of the disposal regulations. EPA also raised questions about the hydrology of the site. The agency said, for example, that it appeared that the hydrologic properties of the Dewey Lake rock formation—a layer of rock between the surface of the site and the underground repository—are not well documented and that additional study of that formation may be warranted before it can be ruled out as a potential pathway for contaminants to escape the repository area.

In commenting on DOE's draft application, EPA stated that the application contained only a limited discussion of how DOE might use engineered barriers to develop adequate confidence that WIPP would comply with the agency's disposal regulations.
For its part, DOE believes that the agency’s interest in engineered barriers goes beyond what is necessary to demonstrate compliance with the regulations. New Mexico’s Environmental Evaluation Group has sided with EPA because, in the Group’s view, DOE has not adequately considered the advantages of engineered barriers in the repository. Subsequently, DOE decided that it will use additional engineered barriers at WIPP to comply with EPA’s disposal regulations. The effectiveness of the planned engineered barriers will be addressed by DOE in its final compliance application and by EPA and others in their reviews of the application. EPA’s proposed and final compliance criteria include provisions that implement its assurance requirement on engineered barriers. EPA is requiring DOE to study the available options for engineered barriers at WIPP and submit this study as part of its compliance application. Consistent with this requirement and the containment requirements in the agency’s disposal regulations, DOE must analyze the performance of the complete disposal system, including any planned engineered barriers, and EPA must consider this analysis when evaluating compliance with both the containment and assurance requirements. EPA stipulated that DOE must evaluate the benefits and detriments of engineered barrier alternatives and consider specific factors, such as the effectiveness of the barriers in preventing or substantially delaying the movement of radioactive contaminants to the accessible environment and the effect of the barriers on the total costs of disposal. Also, EPA is requiring DOE to study and describe any engineered barrier(s) for existing waste that is already packaged, not yet packaged, or in need of re-packaging and, to the extent practicable, waste to be generated in the future. During EPA’s rulemaking on its proposed compliance criteria, DOE objected to the proposed requirements related to engineered barriers because, in the Department’s view, the requirements have no basis within the agency’s disposal regulations. DOE was concerned that the engineered barrier requirements would make the agency’s regulations more stringent than the agency had intended when it developed the regulations and could be interpreted as requiring barriers beyond those necessary to demonstrate a “reasonable expectation” of compliance with the regulations. Before EPA issued the proposed compliance criteria in January 1995, DOE had informally agreed with EPA to study engineered barriers. After EPA included the requirement for the study of engineered barriers in its proposed criteria, DOE questioned why the agency needed to prescribe the study in its regulations when the Department had already begun to perform the study. DOE also questioned the role the study would have in EPA’s process for considering DOE’s application for a certificate of compliance, because the performance of such a study was not a part of the basis for developing the regulations. DOE added that it intended to use this study to make decisions about the need for additional engineered barriers to meet EPA’s assurance requirements. The study would not, DOE said, aid in the selection of the engineered barriers needed to comply with EPA’s containment requirements. 
Finally, although the Department generally agreed with EPA's approach of assessing the benefits and detriments of engineered barriers, it expressed concern that the proposed criteria provided no meaningful basis for selecting engineered barriers other than the outcome of the benefit/detriment analysis.

In September 1995, DOE released its engineered barrier study. The study, according to officials of DOE's Carlsbad Area Office, evaluated the types, applicability, cost, and benefit of using engineered barriers at WIPP. DOE concluded from the study that engineered barriers, other than planned shaft seals, would be of little benefit in demonstrating that WIPP complies with EPA's disposal regulations. Therefore, the Carlsbad office decided to use only those engineered barriers that it believed were necessary to appreciably add to the assurance of compliance with EPA's disposal regulations and/or to meet other specific regulatory requirements. According to officials of EPA's WIPP Center, the agency expects to complete its review of DOE's study in June 1996. EPA, however, noted that it will not be evaluating the merits of DOE's engineered barrier study as a "stand alone" effort but, rather, in the context of DOE's total compliance application submission. Because DOE has not finished its final compliance calculations and associated sensitivity analyses, it is too early to tell what effect the barriers studied by DOE will have on EPA's compliance determination or whether their contribution would appreciably enhance confidence in DOE's final results.

New Mexico's Environmental Evaluation Group has been critical of DOE's consideration of engineered barriers at WIPP. The Group disagrees with DOE's position that EPA's compliance criteria impose additional requirements on DOE. In the Group's view, the criteria provide a basis for DOE to select or reject various engineered barrier alternatives. The Group also maintains that DOE's definition of an "engineered barrier," as stated in the Department's draft compliance application, is inconsistent with the definitions used by other agencies, such as the definition the Nuclear Regulatory Commission uses in connection with DOE's proposed repository at Yucca Mountain, Nevada. According to the Group, although DOE repeatedly stated in its draft application that it will use "multiple barriers" at WIPP, the only barriers that DOE is known to be planning are seals for the shafts leading to the underground repository. The Group called DOE's effort a "minimal" approach and pointed out that on the Yucca Mountain repository project, the Commission does not consider shaft seals to be an engineered barrier. The Group also believes that DOE's essentially sole reliance on the calculations of the repository's performance to decide whether to implement engineered barriers at WIPP is contrary to the well-established principle within the nuclear industry of using multiple and redundant barriers to isolate nuclear materials. Finally, the Group has urged DOE to backfill the waste-filled storage rooms and tunnels with crushed salt that was mined from the underground repository and is now stored on the surface. The Group believes that backfilling the repository can serve useful purposes, such as reducing the generation of gases and minimizing settlement and fracturing in the rock formations overlying the repository. In April 1996, an assistant manager of DOE's Carlsbad Area Office told us that the Department has decided to follow the Group's recommendation.
DOE will, he said, place bags of magnesium oxide around the sides and over the top of the containers of waste in underground storage rooms at WIPP. This approach, he added, will control gas formation in the repository and reduce the possibility that harmful transuranic materials might become dissolved in brine that could seep into and then out of the repository and find its way into the groundwater. According to the Group's deputy director, in May 1996 the Group was in the process of discussing the backfill issue with DOE's Carlsbad office but had not yet decided whether to fully support DOE's choice of backfill material.

EPA criticized DOE's draft compliance application for its lack of detail on the expected characteristics and components of the transuranic waste that would be disposed of at WIPP. Subsequently, in April 1996 DOE disclosed its plans for controlling the types and quantities of transuranic waste to be shipped to WIPP for disposal and for addressing waste characteristics and components in its analysis of compliance with EPA's compliance criteria.

EPA's proposed criteria required that DOE identify in its compliance application the chemical, radiological, and physical characteristics and components of all transuranic waste to be disposed of at WIPP. In commenting on DOE's draft application, the agency noted that DOE had made assumptions—rather than provided actual data—about the characteristics and components of the waste, such as the types and quantities of radioactivity, amounts of moisture in waste containers, and quantities of other materials contained in the waste containers, that could affect the repository's long-term performance. EPA also stated that DOE had not (1) identified the waste characteristics that are important to the long-term performance of the repository; (2) discussed the relationships that the characteristics of the waste may have to important processes, such as the generation of gases over time in the storage rooms; and (3) identified the uncertainties associated with these relationships. According to EPA, however, the inclusion of such information is essential to an assessment of WIPP's performance as a repository. Furthermore, EPA stated, DOE had not explained how it would control and track the types of waste disposed of in the repository from the time the waste is characterized to the time it is emplaced in WIPP to ensure that only waste with the characteristics and components that have been found acceptable for disposal is actually emplaced in the repository.

EPA's final criteria require DOE to identify and assess, in its compliance application, the effects on the repository's performance of only those waste characteristics and components that might actually influence the containment of waste in the disposal system. Under this requirement, DOE is to ensure that all of the characteristics and components of the waste that could influence its containment in the repository have been systematically identified and evaluated. Once DOE has identified the waste's significant characteristics and components, whether by physical samples, by knowledge of the waste streams from the operations of DOE's nuclear facilities, or by other means, EPA's criteria require that DOE limit, control, and quantify these characteristics and components. Until recently, DOE had not stated how it intends to implement these criteria.
In 1993, DOE proposed using assessments of the repository's performance as a tool for identifying the waste's characteristics and components having the greatest influence on performance. Under this concept, DOE would, using performance assessments as a starting point, "screen" waste streams at storage sites to establish an envelope of, or boundaries on, the characteristics and components that are acceptable for disposal. By comparing the data on the characteristics and components of the waste in storage or expected to be generated in the future with the envelope, DOE could identify those wastes that are acceptable for disposal at WIPP. However, in late 1995, DOE canceled this activity because, according to officials of DOE's Carlsbad office and Sandia National Laboratories (DOE's principal scientific contractor for WIPP), the Department now anticipates that all of the waste that it plans to dispose of in WIPP will be acceptable for disposal.

In April 1996, DOE took a first step toward addressing EPA's concerns by defining the criteria that it will use to identify the waste that is acceptable for disposal in WIPP. Furthermore, according to officials in DOE's Carlsbad Area Office, in May 1996 the Department revised its baseline inventory report for transuranic waste across the DOE complex to include information on the waste characteristics and components that will be included in the performance assessment for WIPP. They added that in July and August of 1996, a panel of outside experts will perform a peer review of the Department's efforts to identify the waste characteristics and components consistent with the provisions in EPA's compliance criteria. The Director of EPA's WIPP Center, however, told us that DOE had not yet provided the Center with a copy of this report; moreover, DOE has yet to complete another part of its analysis of waste characteristics and components to be submitted with its final compliance application to EPA. Thus, it is too early to ascertain whether the recent initiatives by DOE will be responsive to EPA's concerns.

EPA stated that the absence of a plan for emplacing both contact- and remote-handled waste in the underground repository was a major omission in DOE's draft application. DOE had designed WIPP so that it would insert containers of remote-handled waste in the walls of disposal rooms before stacking containers of contact-handled waste in these rooms. In the application, DOE stated that for the purpose of assessing the repository's performance, it assumed that contact-handled and remote-handled waste would be distributed equally among all storage rooms. EPA noted, however, that it did not appear that DOE would have much, if any, remote-handled waste ready to ship to WIPP in 1998. Therefore, according to EPA, the actual distribution of both types of waste within the repository may differ from the distribution of waste that DOE had assumed in its draft application. EPA concluded that DOE should have addressed in the application how the late arrival of remote-handled transuranic waste would affect the disposal operations at the repository and the long-term performance of the repository.
In its final criteria, EPA stated that if DOE does not include a waste-loading scheme in its compliance application, the Department must assume, in assessing the repository’s performance, that waste containers are randomly emplaced in the repository rather than, as DOE had assumed in its draft application, that the various characteristics and components of the waste would be evenly distributed throughout the repository. EPA and the New Mexico Environmental Evaluation Group stated that the draft application did not provide detailed descriptions of how DOE intends to implement one or more of the assurance requirements of the agency’s disposal regulations. For one of these assurance requirements—maintaining active institutional control of the site for as long as practicable—EPA said the lack of information in DOE’s draft application precluded an evaluation of the technical adequacy of the subject. Likewise, the agency said, DOE’s application lacked detailed monitoring plans for the site. The Environmental Evaluation Group took exception to both EPA’s and DOE’s positions on implementing the assurance requirement in the agency’s criteria that address disincentives for extracting natural resources in the area of the repository. The resource disincentive assurance requirement states that a repository should generally not be located in an area where previous mining for resources has occurred, a reasonable expectation of future exploration exists, or a significant concentration of a rare material occurs, unless DOE can show how the favorable characteristics of the site offset these disadvantages. The Group said that the WIPP site fails all three of these resource disincentive criteria because there is a significant concentration of potash, oil, and gas reserves in the vicinity of WIPP. Accordingly, the Group said, DOE should have provided documentation of the favorable compensating characteristics of the site. In the compliance application, the Group recommended, DOE should recognize the existing characteristics of the site and consider all plausible human intrusion scenarios instead of debating the favorable site characteristics and the degree to which these characteristics compensate for the presence of resources. Finally, the Group noted that the location of WIPP within an area that is rich in resources is another reason to include engineered barriers in the design of the repository. In the final compliance criteria, EPA decided that DOE would not have to provide a separate analysis of the favorable compensating characteristics at WIPP if the Department can demonstrate compliance with the agency’s containment requirements. The basis for the agency’s position was that the assessments of the repository’s performance, properly done, would consider all types of human intrusion and any mitigating factors that might affect compliance. The Group, however, disagreed with EPA’s position on the basis that EPA, in its disposal regulations, had intended that the assurance requirement be an added measure to enhance confidence that the containment requirements would be met. In addition, New Mexico’s assistant attorney general had similar concerns about DOE’s and EPA’s treatment of resource disincentives in the draft application and the final compliance criteria, respectively. 
EPA stated that DOE’s draft application lacked sufficient evidence of an adequately designed and implemented program to ensure that the information and analyses that will be included in the compliance application meet the standards for quality. EPA said that the draft lacked information describing the method(s) used to implement a quality assurance program and to verify that the program is being implemented properly. For example, the agency noted, DOE omitted information on the training of workers on quality procedures; records of audits, surveillance, and resolution of nonconformance and corrective actions; and document control. EPA also highlighted the shortcomings in DOE’s software quality assurance requirements, such as the lack of documentation of computer software and codes, that it had brought to DOE’s attention several months before the Department submitted the draft application. And EPA expressed concern about certain of DOE’s processes to establish that the data collected before DOE had implemented an approved quality assurance program are acceptable for use in an application for a certificate of compliance. DOE must satisfy a rigorous set of quality assurance procedures generally adopted by the nuclear industry covering virtually all aspects of WIPP, including the scientific and modeling studies in support of the final performance assessment. These requirements stem from, among other things, EPA’s compliance criteria for WIPP. Important quality assurance measures include the standards related to work processes; control of engineering designs; document control and management; procurement; inspection and testing; sample management and control; planning and performing scientific investigations; peer review of scientific studies and modelling efforts; software quality assurance; and documentation, control, and qualification of information. Since October 1993, when DOE decided to accelerate its schedule for opening WIPP, the Department and its contractors have been implementing quality assurance measures related to the Department’s effort to establish that WIPP meets all of the regulatory requirements for disposing of transuranic waste. As of May 1996, however, DOE still needed to complete several important quality-assurance-related activities before it will be prepared to submit an application for a certificate of compliance. One key activity is demonstrating that the scientific data collected before DOE had implemented the quality assurance program that EPA requires are of satisfactory quality for use in supporting DOE’s application for a certificate of compliance. According to Carlsbad Area Office’s Quality Assurance Manager, about 10 percent of the scientific information that Sandia National Laboratories has collected was under a quality assurance program that did not fully meet the current program’s requirements. Therefore, to the extent that DOE would use this information in support of the WIPP compliance application, the data will have to be qualified for their intended use by alternative means acceptable to EPA. Finally, officials of DOE’s Carlsbad Area Office stated that they have made improvements to comply with EPA’s compliance criteria and are on schedule to complete the qualification of information necessary to submit DOE’s final compliance application in October 1996. 
EPA is requiring DOE to consider two potential pathways for future human intrusion into a repository at WIPP that, in DOE’s view, go beyond the intent of the disposal regulations and add to the cost of demonstrating compliance with the regulations but contribute little to protecting public health and safety. Specifically, to account for the presence of potash mining in the vicinity of WIPP, the agency’s final criteria require that DOE, in assessing the performance of the repository, address the effects of excavation mining after the repository has been filled and closed. Although EPA had stated in its proposed criteria that it was not requiring consideration of mining in its compliance criteria, it included mining in the final criteria because, it said, mining could alter the properties of certain rock formations above the underground repository. These requirements address the potential changes in the hydrogeology of the rock formations—specifically, groundwater travel time—the size and shape of mines, and when mining might occur. EPA is also requiring DOE to consider the effects of two types of drilling for resources: “shallow drilling,” which is drilling to depths above the level at which waste would be disposed of in the repository, and “deep drilling,” which is drilling to depths below the disposal level. EPA established criteria that require DOE to use past human activities to predict future activities. The agency requires that the rate of drilling over the last 100 years be calculated in the Delaware Basin, which is the geographical area within which WIPP is located. Included in the basis for calculating the drilling rates are any existing leases of drilling rights that can reasonably be expected to be developed in the near future. Once DOE calculates the rate of drilling, it is required to use this rate to predict the rate of drilling that may occur over the 10,000-year period of analysis which the disposal regulations require. The fixed rate is to be based on both exploratory boreholes drilled and developmental (production) boreholes and is to be held constant as the types of resources change over time. Furthermore, EPA required DOE to assume that after WIPP is closed, boreholes drilled nearby would affect the properties of the disposal system for the remainder of the regulatory period. Thus, DOE’s assessments of the repository’s performance must take into account the hydrologic effects of drilling on the disposal system and on the creation of any new pathways for the release of radioactive materials from the repository. Finally, EPA is requiring that DOE consider the consequences of events and processes associated with all types of resource extraction activities, including solution mining and fluid injection for secondary recovery of depleted oil reserves. EPA limited consideration of these activities to the resource exploitation that has actually occurred in the vicinity of WIPP and the existing plans and leases for future drilling in the area for these purposes. In commenting on the proposed compliance criteria, DOE stated that it should only have to consider human intrusion from exploratory drilling, and not production- or development-related drilling, in its compliance application. On the basis of DOE’s interpretation of EPA’s disposal regulations and their underlying technical basis, mining was not an activity intended for consideration in an assessment of the repository’s performance. 
DOE noted that EPA, when developing the disposal regulations, clearly stipulated that the most severe form of human intrusion to be considered in performance assessments was “intermittent and inadvertent” exploratory drilling for natural resources. In DOE’s view, the inclusion of human-initiated events and processes other than exploratory drilling when calculating the frequency of human intrusion is therefore inconsistent with the technical assumptions on which EPA based its disposal regulations. Furthermore, DOE stated that addressing these other types of human intrusion in its compliance application would add to the time and cost required to demonstrate compliance with the disposal regulations but would provide few benefits in terms of protecting public health and safety. Officials of EPA’s Radiation Protection Division agreed with DOE that inadvertent and intermittent drilling for resources would be the most severe type of human intrusion likely to be encountered at WIPP, but they said that this does not mean that less severe types of human intrusion should be discounted in the performance assessment. The officials stated that DOE’s inclusion and consideration of less severe types of human intrusion will result in a more complete and credible compliance application by DOE. For the first few years of WIPP’s operations, DOE will have a limited capability at its six primary storage sites to determine if transuranic waste satisfies the technical criteria for transportation and disposal and to prepare this waste for shipment. In fact, DOE will not be ready to begin disposing of remote-handled waste until at least 2002 and will not be able to begin disposing of most of this waste for about 20 years. The six sites are the Idaho site, the Rocky Flats site (Colorado), Los Alamos National Laboratory (New Mexico), the Oak Ridge site (Tennessee), the Hanford site (Washington), and the Savannah River site (South Carolina). Over the longer term, DOE must develop facilities and equipment at all six sites to prepare the waste for shipment if it is to dispose of all stored and projected quantities of transuranic waste over the repository’s 35-year operating life. According to DOE’s Baseline Environmental Management Report of 1995, these facilities and equipment may cost about $11 billion to develop and operate. The Idaho site’s nuclear activities began in 1949 with testing of nuclear reactors and, subsequently, reprocessing spent nuclear fuel and receiving and storing the nuclear waste generated at other locations, such as Rocky Flats in Colorado. The nuclear wastes managed at the site include transuranic waste, low-level waste, and high-level waste. In addition, DOE stores spent nuclear fuel from the Navy’s nuclear reactor program and other sources at the site. DOE’s Baseline Environmental Management Report states that environmental management activities over the 91-year period from 1995 through 2085 could cost about $29 billion. These environmental activities include stabilizing the nuclear materials and facilities, restoring the environment, managing the wastes, managing various environmental activities, and providing site-wide services such as environmental monitoring and security. Of that amount, the cost of preparing transuranic waste for disposal is estimated to be about $1.35 billion through 2050. The administration’s budget for fiscal year 1997 requests almost $111 million for waste management at Idaho. 
About $22 million, or 20 percent, would go for transuranic waste activities, primarily to bring the storage of the waste into compliance with the regulatory requirements and to accelerate the characterization and certification of waste.

The site has 39,255 cubic meters of contact-handled transuranic waste in storage. Commingled with this waste is about 25,000 cubic meters of alpha low-level waste that contains transuranic elements and that DOE will not allow to be disposed of at the site. Thus, the total amount of contact-handled waste that the site will ship to the repository is about 65,000 cubic meters. However, site managers intend to treat, as appropriate, both types of waste, which is expected to reduce the volume of waste eventually shipped to and disposed of at WIPP to substantially less than 65,000 cubic meters. In addition to the contact-handled transuranic waste, the site has about 200 cubic meters of remote-handled waste.

Between mid-1998 and the end of 2002, DOE expects to ship and dispose of enough transuranic waste from the site—3,100 cubic meters—to meet the requirements of a recent settlement of litigation with Idaho. However, whether the Department can achieve this short-term objective is uncertain. In an October 16, 1995, settlement agreement resolving litigation between Idaho and the federal government over planned federal shipments of spent fuel and nuclear waste to the site, the parties agreed that DOE would ship about 65,000 cubic meters of transuranic waste (including the alpha-emitting low-level waste) from the site. The agreement states that (1) by April 30, 1999, the first shipments shall be made from the site; (2) by December 31, 2002, not less than 3,100 cubic meters of the waste shall be shipped out of the state; (3) after January 1, 2003, a running average (the average over any 3-year period) of at least 2,000 cubic meters per year shall be shipped out of the state; and (4) by December 31, 2002, DOE should complete the construction of a facility to treat mixed transuranic and low-level waste (waste containing both radioactive and hazardous components) and, by March 31, 2003, begin operating it. Failure to meet any of these deadlines would require DOE to stop shipping its spent fuel to the site.

To achieve the short-term stipulation in the settlement agreement, DOE will need to have an adequate supply of contact-handled waste ready for shipment to and disposal at WIPP. This means that DOE will have to retrieve containers of waste—55-gallon drums—from existing storage areas, characterize the contents of the drums, and identify those drums of waste that meet the technical criteria for transportation and disposal. The drums of waste that do not meet the acceptance criteria for either transportation or disposal will eventually have to be treated and/or repackaged to make the waste acceptable. In all, DOE will have to identify about 15,000 acceptable drums of contact-handled waste and ship these drums to WIPP to remove 3,100 cubic meters of transuranic waste from Idaho by the end of 2002.

On the basis of our discussions with site officials and our review of the documents we obtained from these officials, it is uncertain whether DOE will be able to prepare and ship enough contact-handled waste to meet its agreement with the state. As of March 1995, DOE had characterized about 640 drums of contact-handled waste at the site.
About 420 of these drums, however, did not meet the waste acceptance criteria that were then in effect but have since been superseded by new criteria. In September 1995, site managers of transuranic waste estimated that by June 1998, they will have identified about 700 drums of waste that meet the final criteria for transportation to and disposal in WIPP. Subsequently, in April 1996, the manager of transuranic waste at the site revised the estimate of the waste that the site expects to have certified as acceptable for shipment by mid-1998. According to this DOE official, the site now anticipates that at least 2,000 drums of waste will be certified as acceptable for transportation to and disposal in WIPP when the repository opens. Also, the site now expects to have the capability of characterizing and certifying waste at the rate of about 3,200 drums per year once WIPP opens. In large part, he said, the increase in the projected rates of characterization and certification is due to (1) an ongoing effort to develop scientific evidence to convince the Nuclear Regulatory Commission, which must approve transportation containers, that the types of waste that can be safely shipped in the containers can be expanded; (2) a relaxation of waste acceptance criteria for particulates in the waste; and (3) a less conservative view of the amount of waste that can be certified. In connection with the latter reason, for example, the latest changes in the waste acceptance criteria allowed DOE to take a less restrictive interpretation of the amount of free liquids permitted in each drum. According to this official, if the new approach is successful, the site should be able to sustain this rate of waste characterization and certification and reach the short-term goal of shipping about 15,000 drums to WIPP by the end of 2002.

Because most of the contact-handled waste and much of the commingled low-level waste are expected to require treatment before these wastes can be shipped to and disposed of in WIPP, the site needs a treatment facility to meet the stipulation that, beginning in 2003, it must ship an average of 2,000 cubic meters of transuranic waste per year from Idaho. According to a June 1995 summary of the status of the transuranic waste prepared by site officials, only about 20 percent of the estimated volume of stored contact-handled waste will not require some form of treatment or repackaging. About 53 percent of the contact-handled waste is not expected to meet the transportation criteria because the waste is in boxes and the contents need to be repackaged. To provide the facilities and equipment that are needed to prepare these wastes for shipment and disposal, DOE plans to contract with a private company for waste processing services. The private company would build and operate a facility for characterizing, treating, packaging, and certifying drums and boxes of transuranic and low-level waste. DOE expects that it will award this contract in September 1996 and that the facility will begin operating in 2003. Site officials, however, cannot yet estimate how many drums of waste would be available for shipment each year after the facility is operational, identify the technologies to be used in the facility, or compare the cost of purchasing waste processing services from a private company with the cost of constructing and operating a federally owned facility.
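The Idaho figures above lend themselves to two quick arithmetic checks: whether roughly 15,000 55-gallon drums corresponds to the 3,100 cubic meters covered by the settlement, and whether the revised certification estimates could reach that drum count by the end of 2002. The sketch below is illustrative only; the drum volume, the mid-1998 start date, and the assumption of a constant certification rate are assumptions rather than figures from the report.

```python
# Rough checks of the Idaho short-term shipping goal discussed above.

CUBIC_METERS_PER_DRUM = 0.208            # nominal volume of a 55-gallon drum (assumption)

# Check 1: 3,100 cubic meters expressed as 55-gallon drums.
drums_needed = 3_100 / CUBIC_METERS_PER_DRUM
print(round(drums_needed))               # about 14,900 drums, consistent with "about 15,000"

# Check 2: drums certified by the end of 2002 under the revised estimates.
certified_at_opening = 2_000             # drums expected to be certified by mid-1998
certification_rate = 3_200               # drums certified per year once WIPP opens
years_of_operation = 4.5                 # assumed: mid-1998 through the end of 2002
print(certified_at_opening + certification_rate * years_of_operation)   # 16,400, above the goal
```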
Since the production of nuclear weapon components ended several years ago, the mission of Rocky Flats has been environmental management and possible economic development. The mission involves remediation, waste storage, treatment and disposal, consolidation of materials, deactivation of buildings, and decommissioning. According to DOE's Baseline Environmental Management Report, the total cost of environmental management at the site could be about $36.6 billion over a 66-year period. Of that amount, about $9.6 billion is for waste management, including about $2.2 billion for transuranic waste management.

The site currently has 1,869 cubic meters of contact-handled transuranic waste, and DOE projects that the site will generate an additional 3,205 cubic meters for disposal in WIPP. The stored waste includes both transuranic waste and over 800 cubic meters of plutonium residues. At one time, DOE had intended to recover the plutonium from these residues for reuse. Because weapons production activities have ended at the site, however, DOE has decided that the residues are now waste and may be disposed of in WIPP. This approach, DOE says, implements a recommendation of the Defense Nuclear Facilities Safety Board. The Board, which provides independent oversight of DOE, recommended that because the plutonium residues are potentially unstable in their present condition, DOE expedite a program for putting the residues in a stable condition for storage. The residues may need to be processed and repackaged to put them in a more stable condition for storage and for disposal at WIPP.

Under the Federal Facility Compliance Act, Colorado issued DOE a compliance order calling for the Department to begin shipping mixed transuranic waste from Rocky Flats at or before the end of 1998. The order also precludes DOE, after it begins shipping the waste, from storing newly generated mixed waste, including mixed transuranic waste, for more than 2 years without the state's approval. Mixed waste from stabilizing and repackaging residue, however, was not part of the order; rather, it was part of a separate agreement between DOE and the state.

As of September 1995, the site had characterized about 500 drums of contact-handled transuranic waste using older waste acceptance criteria that have since been superseded. At that time, DOE anticipated that the site would have about 1,000 drums of waste characterized by mid-1998; however, not all of those drums would meet the acceptance criteria for transportation to WIPP. DOE now anticipates that the site will have 5,000 drums (about 1,043 cubic meters) of waste ready for shipment to WIPP by the time WIPP opens in 1998 if (1) the schedule for processing the potentially unstable plutonium residues is met and (2) enough drums of transuranic waste can be characterized and certified. For the residues, the objective is to stabilize the waste by venting residue drums to minimize the risk of hydrogen accumulating and creating pressure in the drums and to treat and/or repackage salts, combustibles, and miscellaneous residues on an accelerated basis. For stored transuranic waste, DOE believes that 60 percent of the drums may be certifiable without repackaging and further processing. DOE expects to have about 600 drums of transuranic waste partially characterized by September 1996, although additional characterization methods will be required. If funding were available for additional equipment, DOE officials said, they would have 5,000 drums or more of waste available when WIPP opens.
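The Rocky Flats estimate pairs a drum count with a volume, which can be checked against the nominal capacity of a 55-gallon drum. The sketch below is a rough illustration only; the drum volume used, and treating each drum as full to that nominal volume, are assumptions rather than figures from the report.

```python
# Rough check that 5,000 drums corresponds to about 1,043 cubic meters of waste.
CUBIC_METERS_PER_DRUM = 0.208   # nominal volume of a 55-gallon drum (assumption)

print(5_000 * CUBIC_METERS_PER_DRUM)   # 1,040 cubic meters, close to the 1,043 cited above
```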
More problematic for the site is the treatment of the remaining 40 percent of the drums of transuranic waste that waste managers estimate are unacceptable for disposal in their current condition. According to a 1995 DOE report on the Rocky Flats transuranic waste program, construction of a treatment facility for this waste has been delayed from 2002 to 2007. Because of this delay, the site does not expect to process this waste until the period from 2012 through 2022. In April 1996, DOE officials told us they are working to develop a plan for removing special nuclear materials and transuranic waste from the site by 2015. Implementing such a plan, they estimated, would cost an additional $10 million per year, or a total of over $51 million, more than their current budget. The Los Alamos National Laboratory was established in 1943 to design, develop, and test nuclear weapons. The laboratory’s current mission remains focused on national defense but now also includes research in fields such as space physics and biomedicine. The ongoing plutonium processing operations continue to generate transuranic waste. According to DOE’s Baseline Environmental Management Report, the environmental management activities at the laboratory could cost about $4.4 billion over the 36-year period from 1995 through 2030. This cost estimate includes $507 million for preparing the transuranic waste for disposal. Los Alamos has 10,953 cubic meters of contact-handled transuranic waste, and another 7,351 cubic meters is projected for a total of 18,304 cubic meters. For the most part, DOE’s projection of waste to be generated is based on the transuranic waste that will be produced at a plutonium processing facility. The laboratory’s waste manager said that the plutonium facility is expected to generate about 500 drums of contact-handled waste in fiscal year 1996 and could generate as many as 1,000 drums per year in the future. By October 1996, according to the laboratory’s waste manager, 500 drums of waste will be certified as acceptable for shipment to and disposal at WIPP under DOE’s most current waste acceptance criteria. Also, the laboratory expects to have 3,000 drums certified and ready for shipment by the time WIPP opens. The manager said that the laboratory had certified about 3,000 drums of waste as meeting earlier waste acceptance criteria that have since been superseded. Additional characterization measures will have to be performed on 2,500 of these drums to determine if they meet the current acceptance criteria for transportation. The laboratory, however, does not have the equipment needed for some of the essential characterization work. The laboratory expects to obtain mobile equipment that will take certain gas samples from drums at the rate of almost 5,000 drums per year. If any drums fail this test, the laboratory will need to treat the waste by repackaging or other means. When WIPP opens, according to the waste manager, the site expects to be prepared to make two shipments per week to WIPP for 50 weeks per year. For each shipment, a tractor-trailer would haul three shipping containers loaded with a total of 35 drums. (The maximum capacity of three shipping containers is 42 drums.) This would amount to about 3,500 drums per year. He said the laboratory is studying whether to remove 16,000 drums of transuranic waste from storage under an earthen cover for characterization beginning in 1996.
If the laboratory is able to characterize those drums in the near future, the total amount of waste ready for shipment to WIPP could be as high as 10,000 drums. According to the laboratory’s manager for transuranic waste, no new facilities will be required to prepare transuranic waste for shipment and disposal if, as planned, DOE obtains from the Environmental Protection Agency a “no migration” variance in accordance with the agency’s regulations for implementing the Resource Conservation and Recovery Act. If, however, DOE is unsuccessful in obtaining the variance, he added, then new facilities would be required to treat mixed transuranic waste to make the waste suitable for disposal in WIPP. The Oak Ridge site in eastern Tennessee is comprised mainly of a national laboratory, a manufacturing and developmental engineering plant, and a retired plant for enriching uranium. The activities at the site include, among other things, nuclear weapons component disassembly and material storage, nonweapons research, environmental restoration, and waste management. According to DOE’s Baseline Environmental Management Report, the total cost of the environmental management activities over a 71-year period could be about $38 billion. This cost estimate includes about $2.6 billion over the next 51 years for managing transuranic waste. In April 1996, DOE’s contractor at the site said the first revision to the Baseline Environmental Management Report will reduce the estimate for transuranic waste to about $850 million. The site has 1,326 cubic meters of contact-handled waste, and an additional 256 cubic meters are projected for a total of 1,582 cubic meters. More importantly, Oak Ridge has most of DOE’s stored remote-handled transuranic waste. The site has 1,832 cubic meters of remote-handled waste, and another 344 cubic meters is projected for a total of 2,176 cubic meters. The remote-handled waste consists of about 800 cubic meters of sludge, stored in underground tanks, and solids such as paper, glass, plastic tubing, shoe covers, wipes, filters, and discarded equipment. The solid remote-handled waste is typically contained in cylindrical concrete casks. In September 1995, the state of Tennessee issued an order under the Federal Facility Compliance Act requiring DOE to comply with a plan for the treatment of mixed waste, including mixed transuranic waste. For transuranic waste, the order requires (1) initial treatment of the remote-handled sludge by June 30, 2002, and shipment of this waste to WIPP starting in September of that year; (2) initial shipment of solid remote-handled and contact-handled transuranic waste by March 2015; and (3) final shipment of all transuranic waste from the site by 2023. DOE does not expect to ship contact-handled transuranic waste for disposal in WIPP until after 2002. As of September 1995, the site had 822 drums of waste characterized to the WIPP waste acceptance criteria that were then in effect but which have been superseded. The site projects that by the time WIPP opens, 900 drums of contact-handled waste will have been characterized, but not all of this waste will meet the transportation requirements for shipment to WIPP. In any event, according to the manager of transuranic waste, the remote-handled sludge is the site’s first priority for treatment and disposal because this waste constitutes a greater risk than the contact-handled waste and the state has given remote-handled sludge priority in its compliance order. 
According to the Baseline Environmental Management Report and the original site treatment plan, DOE intended to build a waste processing facility for transuranic waste at an estimated cost exceeding $1 billion. However, the site’s manager of transuranic waste told us that budget cuts have eliminated plans for the facility. Furthermore, until the state issued its compliance order, DOE had anticipated building the facility much later than 2002. In addition, the manager said, the treatment plan relied on an unproven technology. In September 1995, DOE completed a study of more than 20 alternative treatment methods for remote- and contact-handled waste at the site. The study concluded that the most feasible alternative for the remote-handled sludge was solidifying the sludge with cement. The study also estimated that the necessary facilities and technologies would cost $226 million net present value ($693 million escalated) for processing the remotely-handled sludge by cementation and processing remotely handled and contact-handled solids by sorting and compaction. DOE expects to issue an invitation for bid in January 1997 for a private facility to process the remote-handled sludge. If funding for the site’s transuranic waste program is not reduced in the coming years, he said, the facility should be available in time to meet the deadline in the state’s compliance order for disposing of remote-handled waste. He added that the rate at which the new facility could prepare this waste for shipment to WIPP is unknown. The rate, in part, would depend on the capabilities of the containers that DOE will design and procure for transporting remote-handled waste. The manager pointed out that if the waste is solidified by adding concrete, the volume will increase and the radioactivity will be diluted to the point where the waste might not be classified as transuranic waste. He added, however, that officials in DOE’s Carlsbad office have assured the site that the waste would be accepted for disposal because it contains transuranic waste. Also, there are currently no firm plans for treating and processing the solid remote-handled waste and the contact-handled waste at the site. The original mission of the Hanford site—to produce plutonium for nuclear weapons—ended in 1989. The primary mission at the site now and for the foreseeable future is environmental management. According to DOE’s Baseline Environmental Management Report, the total cost of environmental management activities over the 66-year period from 1995 through 2060 could amount to $73 billion. Of this amount, about $42 billion would be spent for waste management activities, including over $3.2 billion for the management of existing and projected transuranic waste through 2050. DOE estimates that about 11,028 cubic meters of contact-handled transuranic waste is stored at the site and that it will generate another 34,909 cubic meters of this waste. The Department also estimates that it has 200 cubic meters of remote-handled transuranic waste in storage. This waste typically consists of debris such as metals, plastics, rubber, clothing, rags, and glass. Moreover, DOE projects that it will generate 21,521 cubic meters of remote-handled waste in the future, primarily consisting of contaminated equipment that is currently part of the network of underground tanks at the site in which high-level radioactive waste is stored. The high-level waste was produced as a by-product of reprocessing production reactor fuel to recover plutonium for weapons purposes. 
The amount of remote-handled waste that may actually be generated in the future is uncertain. Earlier projections by DOE have been as low as 4,000 cubic meters and as high as 45,000 cubic meters. The actual amount may depend, in part, on the selection of technologies for cleaning up the network of underground storage tanks. For example, site managers now believe that most of the equipment that they had projected would be remote-handled waste may eventually be decontaminated and disposed of at the Hanford site. For this reason, they have recently lowered their estimate of projected remote-handled waste from 21,521 to 3,470 cubic meters. DOE does not expect to prepare any contact-handled transuranic waste for shipment to and disposal in WIPP until 2002. The basic reason is that transuranic waste management is relatively low on the list of priorities for environmental management activities at the site. For example, over 300 other projects at the site have higher priority than processing contact-handled waste for shipment and disposal. Furthermore, DOE has no current plans for preparing remote-handled waste for shipment and disposal; however, according to officials of DOE’s Carlsbad Area Office, ongoing negotiations among DOE, EPA, and the state of Washington should lead to plans for managing all stored and projected transuranic waste at the site. The facilities and equipment planned for retrieving the contact-handled waste from earthen-covered storage have been designed, but construction is on hold due to a lack of funds. The latest estimate is that the construction of the facility, which DOE estimates will cost $35 million, may begin in 2002. The Department recently constructed a facility for characterizing, repackaging, and certifying low-level and contact-handled waste generated and stored at the site. For the next several years, however, DOE intends to use this facility to process mixed low-level waste and dispose of this waste at the site. Due to a lack of funds, DOE does not expect to begin processing contact-handled waste until at least March 2002, and then only if the funds for this purpose are obtained beginning in that year. Moreover, some contact-handled waste may require incineration to meet the standards for disposal in WIPP. To fulfill this potential requirement, DOE will have to either construct an incineration facility at the site, use an off-site vendor’s facility, or use another DOE facility. The plans that DOE once had to develop facilities and equipment that are needed to retrieve and process contact-handled waste for disposal have been placed on indefinite hold due to a lack of funds. Thus, it is uncertain at this time when DOE will be able to begin preparing contact-handled waste for shipment and disposal in significant quantities. As discussed earlier, transuranic waste is relatively low on the site’s list of environmental management priorities. Moreover, although DOE had once planned to construct facilities for processing remote-handled waste for shipment and disposal, these plans have been canceled due to a lack of funds. DOE now expects that its ongoing negotiations with EPA and the state of Washington will lead to plans for managing, within the next 20 years, the large quantity of remote-handled waste projected to be generated at the site. DOE’s Savannah River Site was developed in the 1950s to produce nuclear materials for national defense, medical uses, and the space program. The emphasis is shifting from producing nuclear materials to environmental management. 
According to DOE’s Baseline Environmental Management Report, the total cost of environmental management activities over the 61-year period from 1995 through 2055 could be about $68 billion. This amount includes over $800 million through 2050 to manage the transuranic waste now stored and expected to be generated at the site. DOE, in its most recent inventory of the transuranic waste stored at its sites, estimated that 6,551 cubic meters of contact-handled transuranic waste are stored at the site. The Department projects that the site will generate 8,946 cubic meters more of this type of waste, for a total of 15,497 cubic meters. DOE’s current estimates of the transuranic waste at the site include a very small amount of remote-handled waste in storage. The site intends to begin shipping transuranic waste to WIPP in 1999. All transuranic waste is expected to require detailed characterization, but the existing capability for this process is limited. To date, the site has emphasized the retrieval, repackaging, and temporary storage of these wastes pending detailed characterization. Also, treatment of some or all transuranic waste to make the waste acceptable for shipping and disposal will likely be required, but a treatment facility has not yet been included in the waste management plans. Finally, no facilities at the site are capable of loading transuranic waste into DOE’s existing fleet of shipping containers, and some of the waste is not suitable for shipment in these containers. According to the site’s manager of transuranic waste, DOE will need to develop extensive facilities at the site to retrieve, characterize, treat, package, and ship about 75 percent of the transuranic waste. In fact, mixed waste shipments may not begin until about 2012, according to the site’s proposed treatment plan. We performed our work at DOE’s headquarters in Washington, D.C.; its Carlsbad Area Office in Carlsbad, New Mexico; and at WIPP. We also performed work at the Department’s Sandia National Laboratories in Albuquerque, New Mexico; Idaho National Engineering Laboratory, Idaho Falls, Idaho; Oak Ridge National Laboratory, Oak Ridge, Tennessee; Rocky Flats Environmental Technology Site, Golden, Colorado; and Hanford Site, Richland, Washington. In addition, we obtained and reviewed information on management of transuranic waste from DOE officials at its Savannah River site, Aiken, South Carolina, and Los Alamos National Laboratory, near Santa Fe, New Mexico. To assess the prospects for opening WIPP on DOE’s schedule, we interviewed officials and examined the records and reports of the Department of Energy’s Office of Environmental Management, its Carlsbad Area Office, and its contractors on WIPP, particularly Sandia. We also interviewed officials and obtained documentation from EPA’s Office of Radiation and Indoor Air concerning the agency’s disposal regulations and its Office of Solid Waste concerning RCRA-related land disposal regulations. In addition, we met with officials of New Mexico’s Environmental Department in Santa Fe concerning the state’s procedures for issuing permits under RCRA and obtained documents related to DOE’s current permit application. In addition, we discussed WIPP scientific and regulatory issues with various parties in New Mexico, including the state’s Environmental Evaluation Group, the assistant attorney general, and other interested groups. 
We attended three meetings on WIPP between DOE and EPA and a meeting of the WIPP Committee of the National Academy of Sciences’ Board on Radioactive Waste Management. Finally, we discussed the status of the Committee’s ongoing study of DOE’s research program on WIPP with Committee staff. To assess whether DOE is positioned to begin filling WIPP in both its first few years of operation and over the longer term, we obtained information about the planned waste management operations at WIPP. We toured the repository and interviewed officials of the Carlsbad Area Office and its contractor for operating WIPP and the waste transportation system. We also reviewed the documents and reports that DOE had prepared on these subjects. To evaluate the readiness of DOE’s waste storage sites to prepare and ship transuranic waste to WIPP, we toured the waste storage and preparation facilities at Idaho, Hanford, Rocky Flats, and Oak Ridge and interviewed officials of DOE and its contractors at these sites. We also interviewed DOE officials at Savannah River and Los Alamos by telephone. In addition, we obtained and reviewed documents from all six sites pertaining to their waste inventories and plans for preparing and shipping waste to WIPP. We discussed the facts presented in this report with DOE headquarters officials and incorporated their comments where appropriate.
Nuclear Waste: Issues Affecting the Opening of DOE’s Waste Isolation Pilot Plant (GAO/T-RCED-95-254, July 21, 1995).
Nuclear Waste: Change in Test Strategy Sound, but DOE Overstated Savings (GAO/RCED-95-44, Dec. 27, 1994).
Nuclear Waste: DOE Assistance in Funding Route Improvements to Waste Isolation Plant (GAO/RCED-92-65FS, Jan. 14, 1992).
Nuclear Waste: Weak DOE Contract Management Invited TRUPACT-II Setbacks (GAO/RCED-92-26, Jan. 14, 1992).
Nuclear Waste: Delays in Addressing Environmental Requirements and New Safety Concerns Affect DOE’s Waste Isolation Pilot Plant (GAO/T-RCED-91-67, June 13, 1991).
Nuclear Waste: Issues Affecting Land Withdrawal of DOE’s Waste Isolation Pilot Project (GAO/T-RCED-91-38, Apr. 16, 1991).
Nuclear Waste: Storage Issues at DOE’s Waste Isolation Pilot Plant in New Mexico (GAO/RCED-90-1, Dec. 8, 1989).
Status of the Department of Energy’s Waste Isolation Pilot Plant (GAO/T-RCED-89-50, June 12, 1989).
Status of the Department of Energy’s Waste Isolation Pilot Plant (GAO/T-RCED-88-63, Sept. 13, 1988).
| Pursuant to a congressional request, GAO reviewed the proposed opening of the Department of Energy's (DOE) Waste Isolation Pilot Plant (WIPP) in 1998, focusing on how well DOE is positioned to begin filling the repository in its first few years of operation as well as over the long term. GAO found that: (1) it is uncertain whether DOE can accomplish all of the work needed to comply with the Environmental Protection Agency's (EPA) regulations for disposing of transuranic waste at WIPP by April 1998; (2) before DOE can submit an application for a certificate of compliance to EPA, it must resolve various scientific issues; (3) due to the lack of available transportation containers and equipment at the storage sites for preparing waste for shipment and disposal, DOE will have limited disposal capability for the first several years of WIPP operations; (4) DOE does not expect to start disposing of remote-handled waste until 2002; and (5) it will cost DOE an estimated $11 billion over the next several decades to increase the rate at which it emplaces transuranic waste in WIPP. |
The reaffirmation agreement process involves several parties and many steps. Creditors, debtors, debtor attorneys, and the courts each have reaffirmation agreement roles. The process begins when a debtor in bankruptcy decides to reaffirm a debt or the creditor proposes a reaffirmation agreement and forwards it to a debtor. Officials at four credit institutions that frequently engaged in the reaffirmation agreements we reviewed stated that once they receive notice of a debtor filing for bankruptcy, the creditors generally send the debtor or the debtor’s attorney their proposal for a reaffirmation agreement. Under the Reform Act, certain certifications may be required, such as an attorney certification that the agreement does not impose an undue hardship on the debtor. Also, under the Reform Act, the court is to review agreements where debtors are not represented by an attorney and those that have a presumed undue hardship. A presumption of undue hardship is triggered when a debtor’s monthly net income (including expected monthly payments on post-bankruptcy debt) is not sufficient to pay the proposed monthly reaffirmed payment. A debtor may try to counter this presumption in writing. After these reviews, the court may approve or disapprove the agreement. Figure 1 provides more detailed information about the reaffirmation agreement process. Described as representing the most comprehensive set of reforms in more than 25 years, the Reform Act addressed, among other things, the reaffirmation agreement process. The Reform Act added new disclosures and court review requirements with respect to reaffirmation agreements, designed to help ensure that reaffirmation agreements are consistent with the debtor’s best interests. The reaffirmation agreement requirements include, among other things, disclosures notifying debtors of reaffirmed terms. Agreements must also include the debtor’s monthly income and expenses to determine if a presumption of undue hardship exists, as well as certification by the debtor’s attorney that the agreement represents a fully informed and voluntary agreement by the debtor and that the agreement does not impose an undue hardship. Table 1 summarizes key reaffirmation agreement disclosure requirements under the Reform Act. Appendix II provides the full text of the disclosure statements required by the Reform Act. The Reform Act allows for some flexibility in the disclosures required in reaffirmation agreements and provides that the disclosure requirements shall be satisfied if the disclosures were provided to the debtor in good faith. Almost all of the disclosures can be made in a different order and with different terminology than what is set forth in the law. In addition, one creditor official we interviewed stated that his organization uses a standardized form created by the creditor to comply with the Reform Act. However, according to the official, his company has 32 versions of the form designed to comply with individual bankruptcy court requirements. The AOUSC, which provides administrative, legal, and other support to the federal judiciary, issued a reaffirmation agreement form that incorporates the required and recommended language in the Reform Act. The form was issued in October 2005 and was revised in August 2006 and in January 2007. Because the Reform Act allows flexibility in disclosure language, creditors and debtors are not required to use the AOUSC form and may use other forms.
According to AOUSC, one of the primary reasons the federal judiciary did not impose a specific reaffirmation form was that the law allows for different reaffirmation agreement forms. The federal judiciary also took other factors into consideration, according to AOUSC, including the many variations of statutory requirements that are dependent upon the debtors’ circumstances, and the fact that a creditor and debtor might want to include language in addition to what the law requires. The Reform Act also specifies in what circumstances reaffirmation agreements are to be reviewed by the courts during the bankruptcy process. In general, if the reaffirmation agreement is signed by a debtor attorney, the agreement is effective upon filing with the court, unless income and expense information on the debtor’s statement in support of the agreement reflects insufficient funds for making the reaffirmation payment, which triggers the presumption of an undue hardship. If a presumption of undue hardship exists, the court is required to review and approve or disapprove the agreement based on whether the presumption is countered by debtor explanations for how the debtor can afford the reaffirmation payment. The Reform Act makes certain provisions for reaffirmation agreements that do not include a debtor attorney signature. Reaffirmation agreements might not include a debtor attorney signature because the debtor has opted to undergo the bankruptcy process without an attorney, which is known as a pro se bankruptcy filing. Also, in some instances, debtor attorneys might not sign the agreement because they believe the agreement imposes an undue hardship on the debtor. All agreements that are not signed by debtor attorneys must be submitted to the court for review and approval or disapproval, with the exception of reaffirmed consumer debt secured by a lien on real property such as a home. Agreements cannot be disapproved without a hearing and notice of the hearing to the debtor and creditor. Under the Reform Act, reaffirmation agreements where the creditor is a credit union have different requirements than reaffirmation agreements with other types of creditors. While most requirements for reaffirmation agreements are uniform across all types of lenders, the Reform Act’s presumption of undue hardship provisions are not applicable to agreements with credit unions. This exemption is evident in several Reform Act provisions. For example, while reaffirmation agreements where the creditor is a credit union are to include a debtor statement in support of the reaffirmation agreement when the agreement is signed by a debtor attorney, the statement is not required to include the income and expense information otherwise used to calculate whether there is a presumption of undue hardship. Instead, such a debtor’s statement in support of the agreement is required to state that the reaffirmation is in the debtor’s financial interest, that the debtor can afford the reaffirmation payments, and that the debtor received a copy of the required disclosures and has completed and signed the agreement. In addition, when the debtor is represented by an attorney, the required debtor notification for credit union agreements is to explain the reaffirmation process and state that the reaffirmation agreement is effective upon filing with the court.
By contrast, the required debtor notification for agreements with all other types of creditors states that the agreement is effective upon filing with the court unless the agreement is presumed to be an undue hardship. We estimate that required disclosure statements were included in most reaffirmation agreements for each of the five districts. For example, we estimate that the statement “Amount Reaffirmed” and the amount were included in 87 percent (in WV-S) to 98 percent (in CA-C) of all reaffirmation agreements. Similarly, we estimate that the statement “Annual Percentage Rate” and the amount were included in 86 percent (in AL-N and WV-S) to 97 percent (in CA-C) of all reaffirmation agreements for the five districts. Debtor attorney certifications were frequently included in reaffirmation agreements signed by attorneys—from an estimated 95 percent (in WV-S) to 100 percent (in CA-C) of agreements. We also estimate that 67 percent (in AL-N and WV-S) to 88 percent (in IL-N) of non-credit union agreements included monthly income, expense, and net income information—conversely, an estimated 12 percent (in IL-N) to 33 percent (in AL-N and WV-S) were missing this required information (as mentioned previously, these data are not required of credit union agreements signed by a debtor attorney). This information helps to inform debtors, debtor attorneys, creditors, and court officials of the potential inability of the debtor to make payment on reaffirmed debt. While information about income, expenses, and available net income can be determined from other schedules in the bankruptcy filings or during hearings, having that information included in the agreement makes it easier for the courts to evaluate the debtor’s financial situation. In March 2007, the Judicial Conference’s Advisory Committee on Bankruptcy Rules proposed the use of a reaffirmation agreement coversheet that, if approved, would make it mandatory for debtors to provide income and expense information, among other things, on the coversheet to be used in the evaluation of undue hardship. If approved by the Judicial Conference, the mandatory coversheet would appear to address the issue of missing financial information. The Reform Act requires that reaffirmation agreements include financial data disclosure statements for the amount reaffirmed and the annual percentage rate for the amount reaffirmed. As shown in table 2, we estimate that these disclosure statements were included in a high percentage of all agreements within the five districts. For example, the statement “Amount Reaffirmed” and the amount were included in an estimated 87 percent (in WV-S) to 98 percent (in CA-C) of all reaffirmation agreements. Similarly, the statement “Annual Percentage Rate” and the amount were included in an estimated 86 percent (in AL-N and WV-S) to 97 percent (in CA-C) of all reaffirmation agreements for the five districts. The Reform Act requires several notification disclosure statements designed to help ensure that debtors make decisions about reaffirming debt that are in their best interests, such as informing them of the reaffirmation agreement process as well as the effect of agreeing to reaffirm debt. We estimate these notification disclosure statements were included in high percentages of reaffirmation agreements in all five districts. One notification disclosure explains the reaffirmation process and certain requirements.
For example, the disclosure instructs debtors on which sections of the agreement to read and sign and under what circumstances the agreement may be reviewed by the court before becoming effective. The notification also informs debtors, among other things, of their right to rescind the agreement and that reaffirmation agreements are not required. As shown in table 2, we estimate that non-credit union reaffirmations included the notification disclosure statement explaining the reaffirmation process and certain requirements in an estimated 87 percent (in AL-N) to 96 percent (in CA-C) of reaffirmation agreements for the five districts. This disclosure statement differs for credit unions. We provide estimates for inclusion of disclosure statements in credit union agreements later in this report. Other required debtor notification disclosure statements were also included in high percentages of reaffirmation agreements in the five districts. As shown in table 2, we estimate that 87 percent (in WV-S) to 97 percent (in CA-C) of all reaffirmation agreements within the five districts included a statement instructing debtors to review the required disclosures. Similarly, we estimate that 87 percent (in WV-S) to 97 percent (in CA-C) of all reaffirmation agreements in the five districts included a statement that summary information in the reaffirmation agreement was made pursuant to the requirements of the Bankruptcy Code. With respect to unsecured debts, other than requiring a brief description of the credit agreement, the Reform Act does not provide a specific disclosure statement for inclusion of asset information, the original purchase price, or the original amount of the loan. However, when a reaffirmed debt is secured by an asset, the Reform Act requires both the asset and either the original purchase price or original amount of the loan be listed in the agreement. As shown in table 2, we estimate that required information describing the asset securing the agreement was included in 87 percent (in WV-S) to 99 percent (in CA-C) of agreements in the five districts. The original purchase price or the original amount of the loan was also included in a high percentage of agreements—an estimated 81 percent (in WV-S) to 91 percent (in CA-C) for the five districts. The Reform Act requires a disclosure statement that includes a statement that the debtor agrees to reaffirm the debt, a field for a brief description of the credit agreement, a field for a description of any changes to the credit agreement made as part of the reaffirmation agreement, and fields for debtor and creditor signatures. As shown in table 3, we estimate that this statement by the debtor was included in 84 percent (in WV-S) to 95 percent (in CA-C) of all reaffirmation agreements within each of the five districts. We also estimate that creditors or debtors included required information describing the credit agreement in 76 percent (in WV-S) to 84 percent (in IL-N) of all reaffirmation agreements in the five districts, as shown in table 3. These descriptions varied in content. For example, when reviewing reaffirmation agreements, we observed agreements that included descriptive information about the type of contract involved in the transaction, such as a type of “retail installment contract” or “promissory note.” Other agreements included more detail about the terms of the credit agreement, such as the number of payments, monthly payment amounts, and original amount of the loan.
While not required in all reaffirmation agreements, a description of changes to the credit agreement made as a part of the reaffirmation was included by creditors or debtors in a few agreements in the five districts. We estimate that 12 percent (in AL-N) to 20 percent (in CA-C and WV-S) of all reaffirmations within the five districts included a description of such a change. For example, in one district we reviewed 36 agreements that had changes identified in the agreement. The interest rate was cited as a change in 17 of these 36 agreements. Other changes described in the remaining 19 agreements included reductions in the loan balance, changes to the payment date, and changes in the monthly payment amount. Because analysis of these 36 agreements was conducted in addition to our standardized file review, we are not able to generalize these figures to all reaffirmations in this district. Debtor attorneys signed reaffirmation agreements in an estimated 90 percent to 97 percent of all agreements in each of the five districts. When a debtor attorney signs a reaffirmation agreement, the Reform Act requires that a disclosure statement be included with the signature. The disclosure statement includes the debtor attorney’s certification that (1) the agreement represents a fully informed and voluntary agreement by the debtor, (2) the agreement does not impose an undue hardship on the debtor or any dependent of the debtor, and (3) the attorney has fully advised the debtor of the legal effect and consequences of the agreement and any default under the agreement. As shown in table 4, we estimate that this required attorney disclosure statement was included in 95 percent (in WV-S) to 100 percent (in CA-C) of reaffirmations signed by attorneys in the five districts. In addition to the disclosure statement, the Reform Act requires that when a presumption of undue hardship has been established with respect to the agreement, the debtor attorney certify that in his or her opinion, the debtor is able to make the reaffirmation payment. We estimate that 1 percent (in CA-C) to 11 percent (in WV-S) of all reaffirmation agreements with non-credit unions in the five districts included an attorney certification statement explicitly identifying that a presumed undue hardship was established with respect to the agreement, and that in the opinion of the attorney, the debtor could make the reaffirmation payment. Reaffirmation agreements are not required by the Reform Act to include an explicit indication of whether a presumed undue hardship has been established with respect to the agreement. While we did not formally track the extent to which reaffirmation agreement forms included an explicit way for presumed undue hardship to be identified, we observed that some reaffirmation agreement forms included a way to explicitly identify presumed undue hardship in the debtor attorney certification and some did not. For example, we observed reaffirmation agreement forms that included the following language for debtor attorneys to certify, which has no explicit indication of whether a presumption of undue hardship had been established: “If a presumption of undue hardship has been established with respect to this agreement, in my opinion the debtor is able to make the payment.” This statement is unclear because it does not explicitly identify whether a presumption of undue hardship has been established with respect to the reaffirmation agreement.
Instead, the statement only indicates that “if” a presumption of undue hardship is established, the debtor attorney certifies that the debtor can make the payment. AOUSC’s reaffirmation agreement form dated January 2007 addresses this ambiguity by placing a checkable box next to an explicit statement indicating that there is a presumption of undue hardship and that the debtor attorney certifies that the debtor is able to make the required payment. As mentioned previously, use of the AOUSC form is suggested but not required. Figures 2 and 3 illustrate clear and unclear debtor attorney certification of an undue hardship. In some reaffirmation agreements, the debtor attorney certification portion included additional language beyond the disclosure statement required by the Reform Act. We estimate that from 1 percent (in CA-C) to 11 percent (in TX-N) of reaffirmation agreements in the five districts included such additional language. We observed several instances where the additional language supplemented the debtor attorney’s certification by indicating that the attorney was not guaranteeing the debtor’s reaffirmation payment. One example of this supplemental information is shown in figure 4. The Reform Act requires a debtor statement in support of the reaffirmation agreement that is to include debtor-inserted data for determination of whether a presumption of undue hardship is established. The data to be inserted include monthly income, monthly expenses (which are to include payments on post-bankruptcy debt and other reaffirmation payments), and the monthly net income remaining to make the monthly payments for the reaffirmed debt. If the net income is not sufficient to pay the reaffirmation payments, a presumption of undue hardship is established that the debtor may overcome if the debtor explains, to the satisfaction of the court, how the debtor can afford to make the payments. Specifically, the court is required to review reaffirmation agreements where a presumption of undue hardship has been established when net income is insufficient for the reaffirmation payments. The court may approve or disapprove the agreement based on information presented to the court. The majority of reaffirmation agreements were with non-credit unions—from 80 percent to 94 percent of reaffirmations in the five districts. As shown in table 5, we estimate that 89 percent (in AL-N) to 97 percent (in CA-C) of reaffirmation agreements with non-credit unions for the five districts included the debtor statement that the reaffirmation agreement does not impose an undue hardship on the debtor. We also estimate that 67 percent (in AL-N and WV-S) to 88 percent (in IL-N) of non-credit union agreements included monthly income, expense, and net income information—conversely, an estimated 12 percent (in IL-N) to 33 percent (in AL-N and WV-S) were missing this required information. This information is a key component in the presumption of undue hardship determination. While information about income, expenses, and available net income can be determined from other schedules in the bankruptcy filings or during hearings, having that information included in the agreement makes it easier for courts to evaluate the debtor’s financial situation. For example, one case included reaffirmation agreements certified by a debtor attorney for two automobiles with monthly payments of $315 and $376.
The debtor’s statement in support of each reaffirmation agreement was signed by the debtor but did not include the required monthly income, expense, and net income data. According to monthly expenses and net income reported on other documents the debtor provided the court, which included only one of the reaffirmed car payment amounts as an expense, the debtor’s monthly net income was negative $131, reflecting the potential inability of the debtor to afford the reaffirmed payment amounts. In March 2007, the Judicial Conference’s Advisory Committee on Bankruptcy Rules proposed the use of a reaffirmation agreement coversheet form that, if approved, would make it mandatory for debtors to provide financial information on the coversheet, such as amount of debt reaffirmed, the annual percentage rate for reaffirmed debt, monthly reaffirmation payment, and monthly income and expense information at the time of petition and reaffirmation agreement filings, to facilitate the evaluation of undue hardship. The coversheet form also requires a supplemental debtor certification that any explanation of the difference between the income and expenses reported on the debtor’s bankruptcy petition documents and the income and expenses reported in the debtor’s statement in support of the reaffirmation agreement is true and correct. According to AOUSC, this new coversheet form could take effect by December 1, 2009, after undergoing a period of at least 6 months for public comment on the new coversheet as well as final review by the Judicial Conference. If approved, the coversheet would appear to address the issue of missing financial information. See appendix V for the proposed reaffirmation agreement coversheet. The Reform Act requires that a motion for court approval be included in reaffirmation agreements when the debtor is not represented by an attorney. We determined whether debtors were not represented by an attorney by noting whether reaffirmation agreements were or were not signed by a debtor attorney. Reaffirmation agreements were not signed by debtor attorneys in an estimated 3 percent to 10 percent of agreements for the five districts. In two districts we had sufficient data to estimate the extent to which a motion for court approval was included when a debtor attorney did not sign the agreement. In the two districts (IL-N and WV-S), a motion for court approval was included in 62 percent (in WV-S) and 80 percent (in IL-N) of agreements. In one of the two districts (WV-S), court officials told us that regardless of whether or not a motion for court approval is filed, their internal process calls for clerk staff to review each reaffirmation agreement and forward those with undue hardship or lack of a debtor attorney's signature for court review. In the remaining three districts, the number of agreements without a debtor attorney signature was not sufficient to generate reliable estimates. Our review of cases in these three districts showed that 13 of 26 agreements (in AL-N), 28 of 34 agreements (in TX-N), and 20 of 23 agreements (in CA-C) without debtor attorney signatures included the motion. Reaffirmation agreements with credit unions comprised an estimated 6 percent to 20 percent of all reaffirmations in each of the five districts. As mentioned previously, under the Reform Act credit union reaffirmation agreements have different requirements than reaffirmation agreements with other types of creditors.
The debtor notification statement explaining the reaffirmation agreement process is generally the same as the statement for other types of creditors. However, the debtor notification statement for credit union reaffirmation agreements indicates that the agreement is effective upon filing with the court (for non-credit unions, the debtor notification statement indicates that the agreement is effective upon filing with the court unless the reaffirmation is presumed to be an undue hardship). For the five districts, reaffirmation agreements with credit unions included the complete credit union debtor notification statement that explained the reaffirmation agreement process and stated that reaffirmation agreements with credit unions are effective upon filing with the court in an estimated 61 percent to 96 percent of agreements. Another difference for agreements with credit unions is that the Reform Act does not require an agreement with credit unions to include debtors’ monthly income, expense, and net income information in the debtor statement in support of the agreement (unless the agreement does not include a debtor attorney signature). Instead, the disclosure statement indicates that the debtor believes the agreement is in his or her financial interest and that he or she can afford to make the reaffirmation payments. In four districts we estimate that from 54 percent to 93 percent of agreements with credit unions included the credit union debtor statement in support of the agreement. However, for these same four districts, we also estimate that 45 percent to 96 percent of agreements with credit unions included the debtor statement whereby the debtor’s income, expense, and net income information could be recorded even though such data were not required (because the agreement was with a credit union and the debtor was represented by an attorney). Appendix II includes estimates for inclusion of other required disclosure statements in agreements with credit unions for the five districts. An estimated 90 percent (in AL-N) to 98 percent (in TX-N and WV-S) of reaffirmations in the five districts were for secured debts. Specifically, debtors more frequently reaffirmed debts for automobiles and homes in comparison to debts for other assets. We estimate that 54 percent (in AL-N) to 87 percent (in CA-C) of reaffirmations in the five districts were for automobiles. In addition to automobiles, in four districts, we estimate that between 15 percent (in WV-S) and 24 percent (in IL-N) of reaffirmation agreements were for homes. Reaffirmations for homes in the remaining district (CA-C) occurred in an estimated 2 percent of agreements. In addition to automobiles and homes, secured debts reaffirmed included, among other things, those for boats, electronics, and household goods. Appendix IV, table 14, provides more information on debts reaffirmed for each district. Unsecured debt was reaffirmed infrequently—occurring in an estimated 2 percent to 10 percent of all reaffirmation agreements in the five districts. In the one district where an estimated 10 percent of all reaffirmations were for unsecured debt, almost half of these agreements were for electricity service (agreements that reaffirmed delinquent payments on electricity service so that the service could continue).
Because of the small number of reaffirmation agreements that were unsecured, the following information about these agreements is based on our review of actual reaffirmation agreements and is not projected to the entire population of reaffirmation agreements in the five districts. Twenty of the 52 unsecured agreements reviewed were for some type of credit card debt such as bank charge cards, lines of credit, and merchant credit cards. In the remaining 32 cases, debtors reaffirmed either an unsecured personal loan from a bank, a debt to another individual, a tax debt, or electricity service. Similar to agreements with non-credit unions, most credit union reaffirmation agreements were for automobiles and houses. The types of secured and unsecured debts reaffirmed with credit unions generally did not differ from the types of secured and unsecured debts reaffirmed with non-credit unions. However, we are not able to make statistically reliable comparisons between the types of debt reaffirmed with credit unions and non-credit unions because of the small number of credit union reaffirmation agreements. One concern that consumer advocates and academics have had is that if the debtor reaffirms a high proportion of the total debts owed when filing for bankruptcy, the debtor may not obtain the financial “fresh start” that is one of the fundamental purposes of bankruptcy. To determine whether debtors were reaffirming a large proportion of their total debts, we examined reaffirmed debts in each case we reviewed as a percentage of the total debts the debtor reported he or she owed when he or she filed for bankruptcy. As shown in table 6, we estimate that in 58 percent (in AL-N) to 68 percent (in CA-C) of cases in the five districts, total reaffirmed debts were less than 25 percent of total debts. By contrast, in 0 percent (in TX-N) to 8 percent (in IL-N) of cases, reaffirmed debts comprised 75 percent or more of total debts. We reviewed 63 reaffirmation agreements in 26 cases in which reaffirmed debt was 75 percent or more of total debts. For these 26 cases, automobiles were reaffirmed in 20 cases, homes in 18, mobile homes in 2, and household goods or other assets in 5 of the cases. In addition, we estimate that the average total amount of debt reaffirmed per case for the five districts was from $15,000 to $47,000, while the average total debt per case was from $120,000 to $189,000. The debt burden for debtors after bankruptcy may include debts in addition to those reaffirmed. For example, debtors’ total debts may include student loans, child support obligations, or other financial obligations that cannot be discharged in bankruptcy. Our scope of work included gathering information about debts reaffirmed and total debts at the time of filing, but we did not gather data on total debt burdens following discharge of all nonreaffirmed debt. For the five districts, the average amount reaffirmed per reaffirmation agreement was an estimated $12,000 to $31,000 for non-credit union reaffirmation agreements. For each of the three districts for which we were able to estimate the average amount reaffirmed for credit unions, the average amount reaffirmed in credit union agreements was an estimated $13,000. We estimate that the vast majority of interest rates on reaffirmed debt were equal to or less than the interest rate on the original debt, as shown in figure 5.
As mentioned previously, the Reform Act requires that the rate on reaffirmed debt be disclosed as the “Annual Percentage Rate,” hereafter referred to as the interest rate. While the Reform Act requires a brief description of the underlying credit agreement, the interest rate for the original debt is not specifically required to be included in reaffirmation agreements. However, we found the original interest rate in reaffirmation agreements or attached original credit agreements in some case files. The original interest rate was available in five districts in an estimated 32 percent to 88 percent of reaffirmation agreements or their supporting documents. One district had a high percentage of agreements with the original interest rate because the district had a standard form that required disclosure of the original interest rate. As reflected in figure 5, we estimate that from 91 percent (in IL-N) to 100 percent (in WV-S) of agreements in each of the five districts had reaffirmed interest rates less than or equal to the original rate. In one of the five district bankruptcy courts (WV-S), the reaffirmed interest rate was less than the original interest rate in an estimated 44 percent of reaffirmation agreements. In the other four districts, interest rates were less than original interest rates in an estimated 10 percent (in IL-N) to 17 percent (in AL-N) of reaffirmation agreements. Average interest rates on reaffirmed debt were similar for the five districts. The same was true for original interest rates. For the five districts the average interest rate on the debt reaffirmed was an estimated 8 percent (in CA-C, IL-N, and TX-N) to 10 percent (in AL-N and WV-S), while the average interest rate on the original debt was an estimated 9 percent (in TX-N) to 14 percent (in AL-N). Moreover, we noted the following characteristics: For 9 (of 1,164) reaffirmation agreements we reviewed where the reaffirmed interest rate was greater than the original rate, the amount of the interest rate increase ranged from 0.10 percentage points to 4.25 percentage points. Four of the agreements were for houses, 3 for automobiles, 1 for a mobile home, and 1 for household goods. A variety of creditors were involved in the 9 agreements—ranging, for example, from two credit unions to a mortgage company and a department store. For 72 (of 1,164) reaffirmation agreements we reviewed where the reaffirmed interest rate was less than the original rate, the amount of the interest rate decrease ranged from 0.01 percentage points to 26.99 percentage points—which was for a reduction in rate to 0 percent. Over half of the 72 reaffirmation agreements were for automobiles and homes—44 were for automobiles and 7 were for homes. The remaining 21 agreements included 6 agreements for household goods, 5 for mobile homes, 8 agreements for a wide variety of debt—including jewelry and electronic equipment—and 2 agreements that did not disclose the type of debt. For the 61 (of 1,164) reaffirmation agreements we reviewed with credit unions (that disclosed both an original and a reaffirmed interest rate), in each of the five bankruptcy courts the interest rate on the reaffirmed debt was equal to or less than the interest rate on the original debt in all but 2 reaffirmation agreements—10 of 10 agreements in AL-N, 27 of 28 agreements in CA-C, 2 of 2 agreements in IL-N, 16 of 17 agreements in TX-N, and 4 of 4 agreements in WV-S. 
We interviewed four creditors that were among the firms most frequently engaged in the reaffirmations we reviewed in the five selected districts. Officials from three of these creditors stated that during the reaffirmation process their policy was to not adjust credit terms, such as interest rates, from the terms established in original debt contracts. The fourth creditor said that when reaffirming debt it had a policy to consider reducing amounts reaffirmed and interest rates based on certain criteria. As mentioned previously, reaffirmed unsecured debt occurred in a small number of reaffirmation agreements reviewed (52 of 1,164). Consequently, information about these agreements is based on our review of actual reaffirmation agreements and is not projected across the entire population of reaffirmation agreements in the five districts. Interest rates and amounts reaffirmed for unsecured debt in reaffirmations varied in the agreements we reviewed in the five districts. Interest rates ranged from 0 to 21 percent among 41 of 52 unsecured reaffirmations that disclosed a reaffirmed interest rate. Nineteen of the 41 agreements were reaffirmed with a 0 percent interest rate, while the remaining 22 agreements had interest rates ranging from 7.99 to 21 percent. Examples of unsecured debt reaffirmed at 0 percent interest included electricity service and credit card debt. Examples of unsecured debt reaffirmed in the 7.99 to 21 percent range included overdraft protection for a checking account reaffirmed at a 12 percent interest rate and a credit card reaffirmed at a 21 percent interest rate. Amounts reaffirmed for the 52 unsecured agreements also varied (all of the agreements disclosed the amount reaffirmed), ranging from $0 (for a line of credit on a credit card) to $25,000 (for a legal fine owed by a debtor). While we cannot generalize average amounts for unsecured reaffirmations, for the 52 agreements we reviewed, the average amount of unsecured debt reaffirmed was $2,300. During our review, court officials in one district and academics we spoke with who conduct bankruptcy research expressed concerns that debtors who are not represented by an attorney could be at a disadvantage. To test this premise, we analyzed interest rate data based on whether a debtor attorney had signed an agreement (our proxy for whether a debtor was represented by an attorney) and when an original interest rate could be determined (disclosure of the original interest rate is not required). When a reaffirmation agreement did not include a debtor attorney’s signature, the interest rate on the reaffirmed debt was equal to or less than the interest rate on the original debt in all reaffirmations reviewed in each of the five bankruptcy courts. In addition, with the exception of one district, we estimate that there was no significant difference between the average interest rate on reaffirmed debt when a reaffirmation agreement either included or did not include a debtor attorney’s signature. In the one district where the difference was statistically significant, the average reaffirmed interest rate was an estimated 8 percent when the agreement was signed by a debtor attorney, and an estimated 12 percent when the agreement was not signed by a debtor attorney. We requested comments on a draft of this report from the Director of AOUSC. AOUSC reviewed the draft and provided technical comments, which we incorporated as appropriate.
We are sending copies of this report to the Director of AOUSC and interested congressional committees and parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8757 or jenkinswo@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff contributing to this report are listed in appendix VI. Our objectives were to determine the following: To what extent have required Reform Act disclosure statements and other required information (such as the annual percentage rate for the reaffirmed debt) been incorporated into reaffirmation agreements? What types of debts were reaffirmed and to what extent did reaffirmed debt amounts comprise debtors' overall debt burden when they filed for bankruptcy? How did the reaffirmed and original interest rates compare? To address these objectives, in five selected bankruptcy court districts we reviewed a representative sample of bankruptcy cases that included at least one reaffirmation agreement. We selected this sample of bankruptcy cases from a universe of bankruptcy case data provided by bankruptcy court officials. We determined that these data were sufficiently reliable to develop the universe of cases for each of the five district bankruptcy courts. To determine the reliability of the bankruptcy data, we reviewed documentation about the system that produced them and interviewed agency officials knowledgeable about the data. Our case file review was performed using a data collection instrument that included uniform questions to ensure data were collected consistently. We relied on data presented in bankruptcy documents filed with the courts by debtors, creditors, and debtor attorneys. Bankruptcy courts and U.S. Trustees manage bankruptcy cases and take steps to verify data that help ensure the reliability of the information provided. For example, bankruptcy court officials have measures to ensure that data entered into information systems are accurate. Also, as a part of the bankruptcy process, the U.S. Trustee Program verifies selected debtor-reported financial data, such as income and assets. We also interviewed bankruptcy experts and creditor officials, as well as bankruptcy court, U.S. Trustee, and bankruptcy administrator officials. We used information gathered during these interviews to identify data contained in bankruptcy file documents, including reaffirmation agreements, that could assist in addressing our objectives. We considered reviewing reaffirmation agreements from a nationally representative sample of bankruptcy cases that included at least one reaffirmation agreement. However, because of the nature of the bankruptcy courts' data, the information needed to develop a sample frame of cases with reaffirmation agreements was available only at the district level. In addition, we interviewed four creditors that we identified during our document review as the creditors that most frequently signed reaffirmation agreements in the five selected districts.
When reviewing reaffirmation agreements, we determined both whether complete disclosure information was included in the reaffirmation agreement and whether the language in the disclosure, while complete, varied at all from what is required by the Bankruptcy Abuse Prevention and Consumer Protection Act of 2005 (referred to here as the Reform Act). Reaffirmation agreements had some variation in disclosure language, as is allowed by the Reform Act, but the variations were generally minor alterations to the disclosure that did not affect the disclosure content. For example, headings were often inserted in the Part A disclosure sections that were not prescribed in the Reform Act but that did not affect the disclosure content. In another example of variation, many agreements also included both credit union and non-credit union disclosure information. For purposes of our analysis, we combined the data categories indicating inclusion of a verbatim disclosure and a nonverbatim (but complete) disclosure. To obtain additional information about reaffirmation agreements and the bankruptcy process, we reviewed literature from bankruptcy journals and attended the American Bankruptcy Institute One-Year Anniversary Program on the Bankruptcy Abuse Prevention and Consumer Protection Act, October 16, 2006, in Washington, D.C. Our work was conducted from June 2006 through November 2007 in accordance with generally accepted government auditing standards. In selecting bankruptcy court districts in which to conduct reviews of cases with reaffirmation agreements, we selected courts based on the following criteria: a range of filing volume, proportion of Chapter 7 filings within the bankruptcy courts, whether cases were overseen by the U.S. Trustee Program or the Bankruptcy Administrator program, and the courts’ geographic location. During the January 1, 2001, to June 30, 2006, time period, the average quarterly filing volume for the nation’s 90 district bankruptcy courts was 385,424. The five districts we selected collectively represented about 12 percent of those quarterly filings. When determining which districts to include in our study, we selected the 2001 to 2006 time period to gather sufficient historical filing data to determine the average number of filings each district had over time. At each of the selected bankruptcy court districts, we obtained a universe of all cases with at least one reaffirmation agreement filed between October 17, 2005, and October 17, 2006. We chose this time period because the Reform Act’s effective date for reaffirmation agreement requirements was October 17, 2005. We selected stratified random probability samples of cases from each bankruptcy court. We stratified the sampling universe for each district by case status (open versus closed) and by pro se status (i.e., debtors who file for bankruptcy without attorney representation). For each selected case, we examined every reaffirmation agreement on record. We calculated estimates of percentages and means/medians at the reaffirmation agreement level using methods appropriate for stratified cluster samples. Table 8 provides a description of the sampling universe and samples for the five selected districts. With these probability samples, each case had a nonzero probability of being selected, and that probability could be computed for any case. Each selected case was subsequently weighted in the analysis to account statistically for all the members of the population, including those who were not selected. 
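To illustrate the weighting step described above, the following sketch shows how a design-based weight (the number of population cases in a stratum divided by the number of sampled cases in that stratum) can be applied to produce a weighted estimate. The stratum names, counts, and tallies are hypothetical rather than GAO's data, and the actual estimates in this report were calculated with methods appropriate for stratified cluster samples, so the sketch is a simplified illustration only.

```python
# Simplified illustration of design-based weighting for a stratified sample of
# bankruptcy cases. All numbers are hypothetical, not GAO's actual data.

strata = {
    # stratum: population cases (N), sampled cases (n), sampled agreements with a
    # characteristic of interest, and total sampled agreements in the stratum
    "open, attorney-represented":   {"N": 900, "n": 90, "with_char": 60, "agreements": 120},
    "open, pro se":                 {"N": 100, "n": 50, "with_char": 20, "agreements": 55},
    "closed, attorney-represented": {"N": 700, "n": 70, "with_char": 50, "agreements": 95},
    "closed, pro se":               {"N": 80,  "n": 40, "with_char": 15, "agreements": 45},
}

weighted_with_char = 0.0
weighted_agreements = 0.0
for stratum in strata.values():
    weight = stratum["N"] / stratum["n"]  # each sampled case stands in for N/n population cases
    weighted_with_char += weight * stratum["with_char"]
    weighted_agreements += weight * stratum["agreements"]

estimate = weighted_with_char / weighted_agreements
print(f"Weighted estimate of agreements with the characteristic: {estimate:.1%}")
```

Because agreements are clustered within sampled cases, the variance of such an estimate also needs to account for the clustering, which the discussion of margins of error that follows reflects.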
Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample's results as a 95 percent confidence interval, stated as the estimate plus or minus a specified number of percentage points. This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. As a result, we are 95 percent confident that each of the confidence intervals in this report will include the true values in the study population. Estimates from these samples are not generalizable to the population of all bankruptcy courts; however, they can be generalized to each of the selected bankruptcy court districts and are intended for illustrative purposes. All percentage estimates in this report have a margin of error of plus or minus 10 percent or less, unless otherwise noted. All mean and median estimates have a relative error of 20 percent or less, unless otherwise noted. Some percentage estimates we present have a margin of error greater than plus or minus 10 percent. This occurred for percentage estimates based on a small number of sampled cases with specific characteristics that were unknown to us prior to sampling. For instance, when we present percentage estimates for reaffirmation agreements made with credit unions, which we estimate comprise only 6 to 20 percent of all reaffirmation agreements across the districts, the small number of agreements made with credit unions in the samples results in larger margins of error. Some mean and median estimates we present in this report have a great deal of variance, resulting in large percentages of relative error. For instance, when we discuss the mean and median amount of debt reaffirmed per reaffirmation agreement, individual reaffirmation agreement debts range from very low amounts (such as a small outstanding balance for an automobile) to much higher amounts (such as the outstanding balance for the mortgage on a home); these large variations result in larger percentages of relative error. This appendix includes two tables. Table 9 presents Bankruptcy Abuse Prevention and Consumer Protection Act of 2005 (Reform Act) reaffirmation agreement disclosure statements and other required information. We included information on the inclusion of disclosures for all reaffirmation agreements in the letter portion of this report. Because some Reform Act provisions apply only to credit unions, we present data solely for credit unions in table 10. Table 10 shows estimated percentages for inclusion of disclosure statements and other required information for reaffirmation agreements with credit unions, for five bankruptcy court districts—Northern Alabama (AL-N), Central California (CA-C), Northern Illinois (IL-N), Northern Texas (TX-N), and Southern West Virginia (WV-S). As the tables indicate, the Reform Act places most information on reaffirmation agreements within five separately labeled sections—A through E—each with its own set of disclosure statements. Italicized areas in table 9 denote the disclosure statements that GAO assessed for inclusion in the reviewed reaffirmation agreements. Table 10 lists disclosure information for reaffirmation agreements with credit unions.
The percentage estimates can be generalized to all reaffirmation agreements with credit unions in cases filed in the five districts between October 17, 2005, and October 17, 2006. Unless otherwise noted, our percentage estimates in table 10 fall within a margin of error of plus or minus 20 percent or less. As part of our review, we collected information not reported in the body of this letter that provides context about the extent to which reaffirmations are included in Chapter 7 cases, the financial circumstances of debtors, and the content of reaffirmation agreements. For example, we collected information at the case level about the debtor's assets, liabilities, income, and expenses reported on the petition filed with the court and whether the court waived the requirement for the filer to pay filing fees. We also collected additional data about the reaffirmation agreements that included whether (1) the agreement was with a credit union, (2) a debtor attorney signed the agreement, (3) the court approved or disapproved the agreement, (4) the agreement was rescinded, and (5) the agreement was filed after a case was closed. The following figures and tables provide estimates for the additional information. Unless otherwise noted, our percentage estimates in this appendix fall within a margin of error of plus or minus 10 percent or less, and our mean and median estimates fall within a relative error of 20 percent or less. In addition, for some items we are able to provide estimates for means but not medians, or vice versa. This is because the sampling distributions of the estimated means and medians can differ, as can the variances of those estimates. Two of the factors that control the sampling error of an estimate are sample size and the estimated variation of the parameter we are estimating (mean or median). The difference between the sampling error of the means and medians is most likely due to the estimated variance of the parameters. Also, our samples were not designed to generate precise estimates of means or medians, so the results (in terms of sampling error) are sensitive to relatively small differences in distributions. For further information, contact William O. Jenkins Jr. at (202) 512-8777 or jenkinswo@gao.gov. In addition to the contact named above, Linda Watson, Assistant Director; Pille Anvelt; James Ashley; Ben Atwater; Amy Bernstein; Carlos Garcia; Daniel Garcia; Geoffrey Hamilton; Lemuel Jackson; Ronald La Due Lake; Brian Lipman; Grant Mallie; Jamilah Moon; and Johanna Wong made significant contributions to this report.

The Bankruptcy Abuse Prevention and Consumer Protection Act of 2005 (referred to hereafter as the Reform Act) included provisions to better inform individuals who file for personal bankruptcy about their options for reaffirming debt—whereby filers may voluntarily agree to pay certain creditors in an effort to retain assets, such as an automobile. Reaffirmation agreements between debtors and creditors are required, by law, to formally disclose to debtors the terms of the agreement, such as the amount of debt reaffirmed. Some requirements differ for credit unions, such as an exemption from reporting debtor financial information when the debtor's attorney signs the agreement. The Reform Act required GAO to study the bankruptcy reaffirmation process.
This report discusses (1) the extent to which required Reform Act disclosures and other information have been incorporated into reaffirmation agreements, (2) the types of debts reaffirmed and the percentage of debtors' overall debt burden this debt comprised, and (3) how reaffirmed and original interest rates compare. GAO reviewed a representative sample of bankruptcy files with agreements in five bankruptcy courts (in AL, CA, IL, TX, and WV) selected based on, among other things, filing volume and geographic dispersion. Estimates from our sample cannot be generalized to all bankruptcy courts, but they can be generalized to each of the selected bankruptcy courts. Most reaffirmation agreements across the five districts included Reform Act disclosure statements and other required information. For example, for the five districts, the required disclosure statement for the "Annual Percentage Rate" was included in an estimated 86 to 97 percent of agreements, and the disclosure statement for the "Amount Reaffirmed" and the amount itself were included in an estimated 87 to 98 percent of agreements. We also estimate that, for the five districts, 67 to 88 percent of non-credit union agreements included monthly income, expense, and net income information—conversely, 12 to 33 percent were missing this information. This information helps to inform debtors, debtor attorneys, creditors, and court officials of the potential inability of the debtor to make payments on reaffirmed debt. In May 2007, a federal judiciary advisory committee proposed the use of a reaffirmation agreement coversheet that would make it mandatory for debtors to provide the financial information required to determine an undue hardship. If approved, the coversheet would appear to address the issue of missing financial information. For the five districts, debts secured by assets, such as an automobile, were the most frequently reaffirmed type of debt—comprising an estimated 90 percent or more of all reaffirmations. Unsecured debt—such as credit card debt—was reaffirmed infrequently, occurring in an estimated 2 to 10 percent of all agreements in the five districts. For the five districts, we estimate that in approximately two-thirds of cases the reaffirmed debt burden comprised 25 percent or less of the debtors' total debts. In those cases where an original interest rate was provided, rates on reaffirmed debt were generally less than or equal to the original rate. Specifically, the interest rates were equal to the original rate in an estimated 56 to 84 percent of reaffirmed debts for the five districts, less than the original rate for 10 to 44 percent of debts, and greater than the original rate for 0 to 8 percent of debts. (The margin of error for these estimates is at most plus or minus 16 percent at the 95 percent confidence level.)
Radiological sources are used throughout the world for medical and industrial purposes. Until the 1950s, only naturally occurring radioactive materials, such as radium-226, were available for use in radiological sources. Since then, sources containing radioactive material produced artificially in nuclear reactors and accelerators have become widely available, including cesium-137, cobalt-60, and iridium-192. Sealed sources range in size from that of a grain of rice to rods up to several inches in length. Figure 1 provides an image of an americium-241 sealed radiological source. According to IAEA, the level of protection provided by users of radioactive materials should be commensurate with the safety and security risks that the material presents if improperly used. For example, radioactive materials used for certain diagnostic imaging may not present a significant safety or security risk due to their low levels of activity. However, high-risk sealed radiological sources that contain cobalt-60, cesium-137, or iridium-192 could pose a greater threat to the public and the environment and a potentially more significant security risk, particularly if acquired by terrorists to produce a dirty bomb. Industrial radiological sources are used in, among other things: (1) industrial radiography devices for testing the integrity of welds, (2) well logging devices in oil and gas production, (3) research irradiators in the aerospace sector, and (4) panoramic and underwater irradiators used to sterilize industrial products. NRC oversees licensees through three regional offices located in Pennsylvania, Illinois, and Texas. NRC has relinquished regulatory authority for licensing and regulating radiological sources to 37 Agreement States that have entered into an agreement with NRC. Figure 2 shows which states are overseen by NRC and which are Agreement States. Prior to 2003, NRC did not have specific orders intended to address security, but its safety regulations included general provisions requiring licensees to "secure from unauthorized removal or access" radiological sources in storage and to "control and maintain constant surveillance" over materials not in storage. Following the attacks of September 11, 2001, NRC determined that certain licensed material should be subject to specific security requirements. The security of radioactive materials, or sources, is a stated top priority for the agency to prevent the use of such sources by terrorists. NRC has issued multiple orders and guidance documents that direct licensees possessing high-risk radiological sources to implement security measures. For the purposes of this report, we refer to these NRC security orders and implementation guidance as "NRC security controls" or "security controls." NRC's security controls apply to all types of high-risk industrial radiological sources, including mobile and stationary sources. These security controls include the following: A 2003 security order (Order Imposing Compensatory Measures for All Panoramic and Underwater Irradiators Authorized to Possess Greater than 10,000 Curies of Byproduct Material in the Form of Sealed Sources, NRC Order EA-02-249) requiring increased security measures for licensees with panoramic and underwater irradiators. A 2005 security order directing all licensees possessing certain types of radiological materials, including those commonly used in industrial processes, to implement increased security measures, such as conducting employee background checks. Implementation guidance was provided with the security order.
A 2007 security order requiring criminal background checks and fingerprinting for individuals needing unescorted access to radiological material for their jobs. Fingerprints are required to be sent to NRC, which forwards them to the FBI for criminal background checks. Implementation guidance was also provided with this order. NRC officials told us that they have adopted a risk-based approach to security in which the level of security should be commensurate with the type and amount of sources that licensees are attempting to protect. According to NRC officials, the intent of the security controls is to develop a combination of people, procedures, and equipment that will delay and detect an intruder and initiate a response to the intrusion—not to provide absolute certainty that theft or unauthorized access will not be attempted, but to recognize and address such efforts should they occur. The security controls provide minimum requirements that must be met to ensure adequate security, and licensees may go beyond the minimum requirements. NRC has recently taken action to codify its security orders and guidance into federal regulation. In March 2012, NRC approved the publication of final regulations to, among other things, establish requirements for security measures for medical and industrial radiological sources into NRC regulations, replacing the existing security orders. The final regulations, found in 10 C.F.R. Part 37 (commonly known as Part 37), were published in the Federal Register in March 2013, and they went into effect 60 days later. NRC licensees were required to comply with the regulations by March 2014, while Agreement States are to promulgate compatible regulations by March 2016, with their licensees being required to comply at a subsequent date determined by each state. The current security orders remain in place until the new regulations are implemented. NRC has also developed and provided licensees with implementation guidance for Part 37. NRC officials said that a new round of security inspections would occur once the new regulations were in effect. In September 2012, we reported that, at the 26 selected hospitals and medical facilities we visited, NRC's requirements did not consistently ensure the security of high-risk radiological sources. See GAO, Nuclear Nonproliferation: Additional Actions Needed to Improve Security of Radiological Sources at U.S. Medical Facilities, GAO-12-925 (Washington, D.C.: Sept. 10, 2012). One reason for this is that the requirements, which are contained in NRC security controls, are broadly written and do not prescribe specific measures that licensees must take to secure their equipment containing high-risk radiological sources. We recommended, among other things, that NRC strengthen its security controls by providing medical facilities with specific measures they must take to develop and sustain a more effective security program, including specific direction on the use of cameras and alarms. NRC disagreed that its security controls needed strengthening through more prescriptive security measures, stating that its approach provides adequate protection and gives licensees flexibility to tailor effective security measures across a wide variety of licensed facilities. In contrast to NRC's flexible approach that allows licensees to determine which security measures to implement to meet the security controls, NNSA's voluntary program for radiological source security uses a prescriptive approach to upgrade the security of facilities—once a facility agrees to participate—to a level beyond NRC's minimum requirements.
According to NNSA's physical security guidelines, which were established in 2010, the curie amounts for devices using high-risk radioactive material such as iridium-192, americium-241, and cesium-137 determine the level of protection required. For example, NNSA recommends that facilities using devices containing at least 10 curies of these materials upgrade, at a minimum, the security of doors, locks, windows, walls, and ventilation ducts. By comparison, NRC does not require security controls for some devices containing only 10 curies of iridium-192, americium-241, and cesium-137. In addition, NNSA's guidelines for 10 curies and above also call for video cameras, bullet resistant glass, hardened doors, cages, and security grating, and if possible, armed on-site response. For high-risk material totaling at least 1,000 curies, or when multiple smaller sources are located in the same storage facility with a combined curie level of 1,000 curies or more, NNSA recommends biometric access control devices, critical alarm remote monitoring systems, and enhanced barriers to delay an adversary's pathway to the radiological sources. Challenges exist in reducing the security risks faced by licensees using high-risk industrial radiological sources, even when they follow NRC's security controls. Specifically, licensees face challenges in (1) securing mobile and stationary sources and (2) protecting against an insider threat. We identified two main types of industrial radiological sources during the course of our review: mobile sources used for testing pipeline welds in the oil and gas sector, and stationary sources used for, among other things, aerospace research, oil and gas production, and food safety. Some of the stationary sources pose unique security challenges due to either how they are stored or their large curie levels. According to NNSA data, there are approximately 1,400 industrial facilities in the United States that house either mobile or stationary high-risk radiological sources, containing a combined total of approximately 126 million curies of radioactive material. The portability of some industrial radiological sources makes them susceptible to theft or loss. According to NRC, as of December 2013, there were approximately 498 radiography licensees with 4,162 radiological sources in the United States. These sources have a cumulative total of about 214,000 curies of primarily iridium-192 and cobalt-60. In 2007, we reported that IAEA officials said that transportation of high-risk radiological sources is the most vulnerable part of the nuclear and radiological supply chain. Furthermore, according to IAEA documents, the small size of some of these mobile sources could make unauthorized removal easier, as a source can be small enough to be placed into the pocket of a garment. The most common mobile source, iridium-192, is contained inside a small device called a radiography camera. NRC officials said that the device is about the size of a gallon paint can and is transported in specially designed trucks to remote locations, where it can remain in the field for days or even months. Figure 3 shows an example of a radiography camera.
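To make the graded approach in NNSA's guidelines, as characterized at the beginning of this discussion, easier to follow, the sketch below maps a device's total curie content to the corresponding tier of recommended protection. The function name and the one-line tier summaries are our own simplifications for illustration; only the 10-curie and 1,000-curie thresholds and the general types of measures come from the guidelines as described in this report.

```python
# Rough illustration of the graded protection tiers in NNSA's physical security
# guidelines as characterized in this report. The tier summaries are simplified;
# they are not the guidelines themselves.

def recommended_protection(total_curies: float) -> str:
    if total_curies >= 1000:
        # Applies to material totaling at least 1,000 curies, including multiple
        # smaller sources stored together whose combined activity reaches that level.
        return ("biometric access control devices, remote monitoring of critical "
                "alarms, and enhanced barriers to delay an adversary")
    if total_curies >= 10:
        return ("upgraded doors, locks, windows, walls, and ventilation ducts, plus "
                "video cameras, hardened barriers, and, if possible, armed on-site response")
    return "baseline measures under existing NRC requirements"

# Example: a radiography camera containing 81 curies of iridium-192 falls in the
# 10-curie tier, while a storage facility holding 1,200 curies falls in the
# 1,000-curie tier.
print(recommended_protection(81))
print(recommended_protection(1200))
```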
NRC's security controls call for two independent physical measures—such as two separate chains or steel cables locked and separately attached to the vehicle—when securing a mobile device containing a high-risk source to a truck. The controls also call for licensees to maintain constant control and/or surveillance during transit, as well as to disable the truck containing such devices when it is not under the licensee's direct control and constant surveillance. While the controls call for certain security measures, they do not include specific requirements for trucks to have alarm systems or specify the strength or robustness of the locks that must be used to secure the source inside the trucks. The controls also do not include requirements for a Global Positioning System (GPS) on the trucks. According to the licensees and the accompanying NRC or Agreement State inspectors, all 15 of the facilities we visited had implemented NRC's security controls. Nevertheless, thefts of mobile radiological sources have occurred. For example: In August 2012, a radiography camera containing 81 curies of iridium-192 was stolen from a truck parked outside of a company's facility in one state. An individual broke into five trucks, taking various items, including one radiography camera that had been left in one of the trucks rather than being returned to the storage facility. A surveillance camera identified the truck used by the individual, and police recovered the radiography camera from the individual's residence. Agreement State officials initially proposed fining the licensee $10,000, but the licensee and the state office ultimately settled on a fine of $1,000 to address the administrative penalty. In July 2011, a radiography camera containing 33.7 curies of iridium-192 was stolen from a truck parked in a hotel parking lot in the same state. Although the door to the truck's darkroom was locked and the device was secured using cables and padlocks, the truck's alarm system was not activated. During the early morning hours, multiple individuals broke into the truck while it was parked at the motel and ripped the cables securing the container holding the radiography camera out of the wall of the darkroom enclosure. The radiological source was never recovered. The state initially proposed a $5,000 fine for the administrative penalty, but the penalty was reduced to $500 due to the efforts and expenses the company made to recover the device. In September 2006, a radiography camera with approximately 100 curies of iridium-192 was stolen along with a radiography truck, which was parked at a gas station in the same state. The truck was stolen when the radiographer went into the gas station to talk with his supervisor and left the keys to the truck and the darkroom in the cab of the truck. The truck was recovered 2 days later by the police at a nearby business park along with the radiological source. The state decided not to assess a fine against the company for the administrative penalty, noting that the device was recovered. In August 2006, a radiography camera with approximately 75 curies of iridium-192 was stolen along with a radiography truck from a hotel parking lot in another state. Although the truck was equipped with an alarm, the alarm was not activated. In addition, the radiographer left the vehicle's keys in the truck's door. The truck was abandoned and found the next day, along with the radiological source, by the police in a nearby strip mall parking lot.
According to an Agreement State official, the state does not have statutory authority to impose monetary fines for security violations, so no fine was assessed. Concerning individuals impersonating safety and security inspectors at remote worksites, according to incident reports we reviewed and state officials we spoke to, we found the following: In September 2010, a radiography crew was approached at a temporary worksite by an individual who identified himself as an inspector. The individual became confrontational with the crew and approached the worksite. When the radiographers prevented him from entering the worksite, he accused them of violating proper procedures in their operation. The radiographers asked the individual to provide identification, but he refused and later left the worksite. The individual, who was a licensed radiographer, was identified as having multiple convictions on his record, including assault, forgery, and terroristic threats. The individual no longer practices radiography in the state. In March 2010, radiographers working at a temporary worksite were approached by an individual wearing a jacket with the state logo who identified himself as a safety and security inspector. The individual opened and closed the radiographers' truck doors, went into the darkroom, and then observed the radiographers as they performed operations. He asked the radiographers questions regarding the curie content of the radiography camera. After the radiographers contacted their superior, the individual left with two accomplices and was never apprehended. Two radiography licensees, as well as an Agreement State inspector and several NRC inspectors, told us that the existing security controls were adequate and that the industrial sources they use or monitor were adequately protected. For example, one licensee told us that—given the small size of his company, the company's limited financial resources, and the marginal risks associated with the radiological sources—additional security requirements were not necessary. In contrast, another Agreement State inspector told us that the security controls should be more prescriptive, as more specific controls would make selecting security measures clearer for licensees and evaluating the adequacy of such measures clearer for inspectors. He said that nonprescriptive controls require additional evaluation to determine whether something is acceptable or not. In addition, a senior security official at a large radiography company told us that, prior to the July 2011 theft of the source that was never recovered, he believed that NRC's security controls were adequate. However, after the source was stolen, he concluded that NRC's controls needed to be more prescriptive. He told us that the controls are too general, which makes them largely ineffective. This official also said that the current playing field is not level and that some smaller radiography companies are doing a disservice to the radiography industry by installing security measures that meet NRC's security controls but are generally very weak. He cited several examples of security measures he has seen that he believes are substandard, including cheap locks, ineffective alarms, and darkroom doors that can be easily breached. This official recommended that industrial radiographers install common-sense security measures, such as high-security locks (which cost approximately $50 each), reinforced doors, and GPS.
In addition, in 2007, the governor of Washington State requested that GPS be required for licensees with highly radioactive mobile sources. Specifically, the 2006 theft of a radiography camera in her state prompted the governor to petition NRC to consider requiring GPS for vehicles carrying high-risk sources, such as radiography cameras, or to allow states the flexibility to implement more stringent security measures than those required by NRC. In the petition, the governor pointed to a separate incident in which a smaller radioactive source in a portable gauge was stolen but was quickly recovered due to a GPS tracking feature on the phone of the operator. In response to the petition, NRC informed Washington State that the issues raised in the petition would be considered in the ongoing Part 37 rulemaking. However, in March 2013, NRC denied the petitioner's request and did not require GPS tracking in the final Part 37 rule. NRC also stated in the Federal Register that, with respect to mobile radiological sources, existing security controls provide adequate protection for mobile devices and that GPS was "neither justified nor necessary." An official from Washington State's Department of Health stated in the state's response to NRC that his agency was very disappointed that the Part 37 rule did not follow through on the recommendation made by the governor and asserted that GPS tracking is inexpensive and an easy way to help with the rapid recovery of a stolen industrial radiological source should preventative measures fail. Notwithstanding NRC's decision, some licensees that we met with during the course of our audit have installed GPS on their trucks. Of the 15 industrial radiography companies we visited, 8 had installed GPS on their fleets of trucks. Of these 8 companies, 4 also provided their radiographers with vibrating key fobs to alert them when the vehicle alarm goes off. In the view of the radiographers from these 8 companies, GPS is an effective security control. A senior security official at a large radiography company told us that, after learning about a theft in 2011, his company installed GPS in all 120 of its trucks at a cost of approximately $50 to $100 per installation and from $29 to $39 per truck for monthly service. Securing stationary high-risk radiological sources also poses challenges for licensees. Facilities housing these sources include aerospace manufacturing and research plants, storage warehouses, and panoramic irradiators used to sterilize industrial products. Similar to the controls for mobile sources, NRC's security controls for stationary sources provide a general framework that is implemented by the licensee. However, as we reported in September 2012, the security controls are broadly written and do not provide specific direction on the use of cameras, alarms, and other relevant physical security measures. The challenge that licensees face as a result of the broadly written security controls is that they may select from a menu of security measures, which allows them to meet NRC's controls but not necessarily address all potential security vulnerabilities. According to the licensees and inspectors who accompanied us, all of the industrial facilities with Category 1 and Category 2 high-risk radiological sources we visited had measures in place to meet NRC's security controls, such as locks and motion detectors, and the sources themselves were located within the interior of the buildings.
While these facilities met NRC's security controls, we noted that some facilities appeared to continue to have certain vulnerabilities. For example, many of the facilities we visited did not have security measures of the type often recommended by NNSA as part of their voluntary security upgrades. Examples of facilities we visited that met NRC's security controls but still had potential security vulnerabilities include the following: At one facility, we observed that a warehouse storing 25 iridium-192 radiography cameras had an exterior rolltop door that was open and unattended (see fig. 6). Once inside the warehouse, we also observed that the wall acting as one of the barriers to the sources did not go from the floor to the ceiling. When we asked the NRC security inspector who accompanied us about the barrier, the inspector told us that the licensee was in compliance with NRC's security controls because the sources were secured through other measures—such as locks and a motion detector. The inspector told us that while the security measures in place were not optimal, there were no apparent security violations. At another facility, we observed a cesium-137 irradiator with approximately 800 curies that was on wheels and in close proximity to a loading dock rollup door that was secured with a simple padlock (see fig. 7). The irradiator was stored in a vault that had a reinforced sliding door and a motion detector that was activated after normal working hours. The licensee told us that the wheels on the irradiator were needed to move the device to different parts of the facility when conducting research. During our visit, we observed that the sliding door to the vault—which is one of the security measures used by the licensee—was left open for ease of access. In our September 2012 report, we identified a similar situation at a medical facility and concluded that although the facility met NRC's security controls, it could be vulnerable because of the limited security we observed and the mobility of the irradiator. We also observed unsecured exterior skylights at a number of warehouses that contained radiological sources ranging from iridium-192 radiography cameras to higher curie levels of cobalt-60 and cesium-137 used for industrial research and manufacturing. Of the 33 industrial facilities we visited, 9 had unsecured skylights. When we questioned an NRC safety and security inspector who accompanied us on the visit about the unsecured skylights, he noted that the licensees met NRC's security controls because the sources were secured in a locked container, and he said that these skylights did not pose a security vulnerability. Figure 8 shows examples of unsecured skylights. We also identified two types of stationary sources that pose unique security challenges due to (1) how americium-241 sources are currently being stored at some well logging facilities and (2) the large curie levels of cobalt-60 sources found in panoramic irradiators. Well logging is a process used to determine whether a well has the potential to produce oil. Some well logging storage facilities with large amounts of americium-241, including two facilities that we visited, are potentially more vulnerable to theft because they have not implemented NRC's security controls. Under NRC's security controls, increased security measures are triggered by the type and quantity, in curies, of the radiological sources.
For example, licensees with americium-241 are required to implement NRC's security controls when the radiological sources in their possession total 16 curies or more. Under the security controls, multiple sources of the same type are added together for regulatory purposes only if they are "collocated." NRC considers these sources to be collocated if someone could gain access to them by breaching a single physical barrier. However, some well logging licensees do not come under NRC's security controls because they separate their americium-241 into quantities that are not considered collocated. For example, these licensees may store quantities of this source in multiple separately locked containers, which function as barriers, so they do not meet the definition of being collocated. Figure 9 shows an example of how licensees could store americium-241 in separate containers that would not be considered collocated and, therefore, would not fall under NRC's controls. As a result, a segment of facilities with large quantities of radiological sources falls outside of NRC's increased security controls, including security inspections for the increased controls. NRC has identified the security of radioactive sources as a top agency priority to prevent the use of such sources by terrorists. Thus, NRC's definition of collocation may have the unintended consequence of placing a segment of these sources at greater risk of theft or loss. Licensees of mobile and stationary radiological sources face challenges in determining which of their employees are suitable for trustworthiness and reliability (T&R) certification, as required by NRC's security controls. Such certification allows for unescorted access to high-risk radiological sources. Officials at almost half of the facilities we visited told us that they face challenges in making T&R determinations. These challenges include limited security experience and training and incomplete information to determine an employee's suitability for unescorted access. Before a licensee can grant an employee unescorted access to high-risk radiological sources, NRC security controls require the licensee, among other things, to (1) conduct employment and education background checks; (2) perform an identification and criminal history check that includes taking the employee's fingerprints and sending them to NRC, which forwards the fingerprints to the FBI; and (3) determine that the individual is trustworthy and reliable. These controls are intended to mitigate the risk of an insider threat—an employee or someone else with authorized access who might be trying to steal, tamper with, or sabotage radiological sources. NNSA officials told us that they consider an insider threat to be the primary threat to facilities with radiological sources. According to an NNSA Fact Sheet, almost all known cases of theft of nuclear and radiological material involved an insider. The document states that the skills, knowledge, access, and authority held by some insiders make the threat difficult to mitigate. As a result, great care must be taken in determining the T&R of employees who are granted unescorted access in facilities with high-risk radiological sources. Under NRC's security controls, the criminal history check is performed by the FBI, submitted to NRC, and forwarded to the licensee. NRC's controls place the responsibility on the licensee to evaluate all the information and determine whether the employee is trustworthy and reliable.
In its Part 37 regulations, NRC generally codified the process for criminal history checks and review as established in the orders. In response to its proposal for these regulations, NRC received comments stating that it should provide specific criteria—such as disqualifying convictions—for use by licensees with respect to the T&R determination. However, NRC declined to provide specific criteria, stating that it is the licensee's responsibility to consider all information and make a determination. An NRC official told us that this was a policy choice by the Commission. The official said that NRC's role in the T&R determinations is limited, but NRC inspectors may review a licensee's records during a site inspection. However, the official told us that such a review is limited to whether the licensee obtained the required types of information, not the merits of the licensee's determination to grant unescorted access to an individual. NRC has provided licensees with a number of indicators to consider when evaluating an individual's T&R. Some of these include the following: conduct that warrants referral for criminal investigation or results in arrest or conviction; uncontrolled anger, violation of safety or security procedures, or attempted or threatened destruction of property or life; and the frequency and recency of the conduct. NRC implementation guidance states that these indicators are not meant to be all-inclusive or to serve as disqualifying factors. Moreover, NRC's guidance states that it is a licensee's business decision as to what criteria it uses as the basis for the T&R determination. NRC guidance—as well as its new regulations—does not specify how a licensee should evaluate an individual's T&R. For example, neither NRC's current nor its former implementation guidance includes indicators that would disqualify an employee from receiving unescorted access. Instead, each case must be judged on its own merits, and the final determination remains the responsibility of the licensee. NRC's implementation guidance also states that the requirements are not intended to stop determined adversaries intent on malevolent action from gaining access to the radioactive sources. Rather, this implementation guidance is designed to provide reasonable assurance that individuals with unescorted access to the radiological sources are trustworthy and reliable and that facilities have a reliable means to monitor events that are potentially malevolent and have a process for prompt police response. Under NRC's security controls, it is left to the licensee to decide whether to grant unescorted access, even if an individual has been indicted or convicted for a violent crime or terrorism, and the licensee is not required to consult with NRC before granting T&R access. Officials at 7 of the 33 licensees we reviewed said that they have granted unescorted access to high-risk radiological sources to individuals with criminal histories. We found two cases where employees of industrial radiographers in two different states were granted unescorted access despite having serious criminal records. Case 1: Individual with numerous criminal convictions. In one case, a T&R official told us that in 2008 she granted unescorted access to an individual with an extensive criminal history, some of which was included on the FBI report the company received from NRC and some of which was not.
This criminal history included two convictions for terroristic threat that occurred in 1996, which were not included in the background information provided to the T&R official by NRC. While NRC's security orders do not preclude granting unescorted access to radiological sources to persons with convictions for terroristic activity (or other serious crimes), the T&R official said that had she been aware of the individual's convictions for terroristic threat, she would not have granted him unescorted access. Based on available documents, we identified that the individual had been arrested and convicted multiple times between 1996 and 2008. These convictions included the following: terroristic threat (twice), assault, forgery, failure to appear in court, driving while intoxicated, and driving with a suspended license. According to that state's statute, a terroristic threat includes any offense involving violence to any person or property with intent to, among other things: place the public or a substantial group of the public in fear of serious bodily injury; place any person in fear of imminent serious bodily injury; or prevent or interrupt the occupation or use of a building, place of assembly, place of employment or occupation, aircraft, automobile, or other form of conveyance. According to NRC officials, identification of a criminal history through the FBI or a discretionary local criminal history check does not automatically indicate that an individual is unreliable or untrustworthy. The licensee may grant individuals unescorted access to radioactive materials notwithstanding their criminal histories. In 2010, the individual was declared to be a substantial threat by the Agreement State's licensing agency after he impersonated a radiography inspector and was hostile toward radiographers in the field, as previously discussed. An investigation performed by the state health department concluded that the individual was a threat to public health and safety, and he subsequently surrendered his state radiography license. It was not clear from available information why the terroristic threats and other convictions did not appear on the FBI criminal background check or why the official deemed the individual trustworthy and reliable. We brought this case to NRC's attention after learning about it in February 2014. In response, NRC officials said that they contacted the Agreement State office to gather relevant information and independently evaluate whether the situation represented an isolated incident or was indicative of a programmatic issue. Based on their initial review, the officials said that they believed the event was an isolated incident. However, without an assessment of the T&R process, NRC will not be able to determine the extent to which this case may represent a larger problem or whether corrective actions are needed. Case 2: Individual caught stealing from company. In another case, an industrial radiographer in charge of making T&R determinations told us that an individual with an extensive criminal record was granted unescorted access to radiological sources. The T&R official told us that he considered the individual a risk and objected to granting him unescorted access, but he was overruled by his supervisor. The employee who had been granted access was subsequently arrested for stealing from the company.
Without more complete information and specific guidance on how to evaluate an individual's T&R, licensees could continue to face challenges in making decisions about the suitability of personnel who are granted unescorted access to high-risk radiological sources, elevating the risk of an insider threat; according to NNSA, almost all known cases of theft of nuclear and radiological material have involved an insider. As noted above, NRC's approach to providing reasonable assurance against an insider threat is to require licensees to collect and consider various types of information, including an FBI criminal history, and to make a determination based on the licensee's judgment, without any NRC-identified disqualifying criteria. Therefore, nothing in the NRC controls or guidance precluded the licensees in these two examples from approving access. Moreover, according to an NRC official, NRC's role is limited to providing guidance and verifying through inspections that the licensee has accumulated all appropriate information when making T&R determinations—not reviewing the merits of any particular decision. Federal agencies are taking steps to better secure industrial radiological sources. Specifically, NRC is developing a Best Practices Guide for licensees of high-risk radiological sources and planning to provide additional training to NRC inspectors. In addition, NNSA has two initiatives under way to improve industrial radiological source security. However, NRC, NNSA, and DHS—agencies that play a role in nuclear and radiological security—are not effectively collaborating to achieve the common mission of securing mobile industrial sources. NRC plans to develop the Best Practices Guide for licensees of high-risk radiological sources in response to a recommendation in our September 2012 report. According to NRC officials, the guide is expected to be issued in spring 2014 and will include information for licensees on physical barriers; locks; monitoring systems, such as cameras and alarms; and examples of how to secure mobile sources and sources in transit. NRC officials told us that the guide will serve as a layperson's source of practical information about security and have minimal technical language. However, the Best Practices Guide remains in draft form, and it is not clear that it will provide specific direction on cameras, alarms, and other relevant physical security measures. For example, the officials said that the guide will not be a catalogue of specific makes and models of security devices such as cameras and locks. During development of the Best Practices Guide, an NRC official told us that the agency is relying on a working group that includes, among others, representatives from NNSA, four inspectors from NRC's regional offices, one Agreement State inspector, and one Agreement State manager to provide insight into challenges licensees face in complying with NRC's security controls. However, the official also told us that NRC has not directly reached out to licensees during the development of the Best Practices Guide. NRC data show that there are almost 800 industrial licensees in the United States. As we reported in 2013, active engagement with program stakeholders is a critical factor for success. Furthermore, in 2012, we reported that programs are most likely to succeed when they involve stakeholders in establishing shared expectations for the outcome of the process.
According to professional practices, project managers should identify and prioritize stakeholders to include those who will be directly affected (positively and negatively) by the project. Once the stakeholders are identified, continuous communication is needed to ensure that their needs are understood, issues are addressed as they come up, and they are engaged in the project decisions and activities. Although developing the guide is a step in the right direction, without including the views of licensees, NRC cannot be certain that the guide will be as useful as it could be for those who will be directly affected by the process. NRC also plans to provide additional security training for NRC and Agreement State inspectors to improve security awareness and reinforce a security culture. For example, NRC began revising the inspector training course in May 2013 and moved the training facility from Sandia National Laboratories to the NRC Technical Training Center in Chattanooga, Tennessee. NRC officials told us that the course will provide information on physical protection systems and NRC security controls, including the identification of threats, an introduction to physical protection systems, and the identification of critical components of physical security, such as detection and access control. NRC officials also said that they have built a mock security laboratory at the Technical Training Center, which includes examples of security equipment such as security sensors, alarms, locks, and cameras. In addition, NRC plans to take inspectors on facility tours to introduce them to security practices at an irradiator site that has installed the voluntary NNSA security upgrades, a small mobile radiography company, and a local emergency response center. NNSA has two initiatives under way to address security risks posed by industrial radiological sources: (1) testing and developing tracking technology for mobile sources, and (2) upgrading the physical security of industrial facilities. Testing and developing technology for tracking mobile sources. In 2013, NNSA officials reported spending approximately $800,000 for a project to develop tracking systems for mobile devices containing radiological sources. NNSA officials told us that, under cost-sharing arrangements, they are collaborating with industry partners from both the industrial radiography and well logging industries who have agreed to provide support for development, design reviews, and field testing of prototype systems. According to the officials, this technology, if successful, would allow for (1) real-time tracking and monitoring of the source in storage, during transport, and during temporary storage within the transport vehicle, (2) immediate notification of a potential loss or theft situation to a central monitoring location, and (3) assistance in recovering a source that is lost or stolen. NNSA officials said that they plan to complete the development of the tracking systems and transfer the technology to one or more vendors for commercial manufacture and sale by summer 2015. Individual industrial radiography and well logging companies would be able to purchase the systems directly from the commercial manufacturer. To encourage use of the technology, NNSA is also evaluating whether the government should subsidize all or a portion of the cost of the systems and, if so, whether to do so for all potential users or for a particular group of users meeting certain criteria.
NNSA officials told us that they expect the systems to cost in the range of $300 to $500 for each radiography device, and $500 to $750 for each well logging truck. Security upgrades at facilities. As of June 2013, NNSA had completed security upgrades at 20 industrial facilities at a cost of $5.5 million. Included in the 20 industrial facilities with completed upgrades are 7 USDA sites with irradiators containing cobalt-60 and cesium-137 that are used for research and pest irradiation. Upgrade of these 7 facilities cost $3.8 million. NNSA has also completed security upgrades at one mobile radiography facility but, according to NNSA officials, the agency decided not to upgrade any additional facilities because higher priority facilities were scheduled for completion first. In addition, NNSA officials said that their current plans are to complete the development of the electronic mobile source tracking system prior to implementing security upgrades at additional radiography storage facilities. They told us that security at storage facilities for mobile sources would only address half the risk, as the sources also travel into the field. NNSA’s activities include working with federal, state, and local agencies, as well as private industry to install sustainable security enhancements for high-priority nuclear and radiological materials located at civilian sites in the United States. However, an NNSA official told us that, in light of their available funds for these efforts, many of these civilian sites with industrial radiological sources have not received security upgrades, and it is uncertain when or if such upgrades will be made. To date, NNSA has focused most of its attention and planning—and expended the majority of available funds for making such upgrades—on U.S. medical facilities. As of June 2013, NNSA had completed security upgrades at approximately one-quarter of all U.S. hospitals and medical facilities with high-risk radiological sources at a total cost of $135 million. NNSA officials said that the agency’s focus on medical facilities is due primarily to the large number of facilities that, in their view, pose a more immediate risk because they are located in and around urban areas, contain large quantities of high-risk sources, and include buildings that are generally more accessible to the general public. However, these officials said that, as the number of medical facilities left to upgrade decreases, the program has begun to focus on industrial facilities and is finding that these facilities (particularly in the panoramic irradiation, industrial radiography, and well logging industries) may require unique security solutions and an updated budget estimate. Although DHS, NNSA, and NRC have an interagency mechanism for collaborating on, among other things, radiological security, they were not always doing so effectively. By not having effective ways to ensure consistent collaboration, the agencies may be missing opportunities to achieve the common mission of securing radiological sources. Our previous work has identified that when responsibilities cut across more than one federal agency—as they do for securing industrial radiological sources—it is important for agencies to work collaboratively. Taking into account the nation’s long-range fiscal challenges, we noted that the federal government must identify ways to deliver results more efficiently and in a way that is consistent with its multiple demands and limited resources. 
In addition, we have previously reported on the need for collaboration in securing radiological sources. For example, we reported in 2007 that, while DOE had improved coordination with the Department of State and NRC to secure radiological sources worldwide, it had not always integrated its efforts efficiently, and coordinated efforts among the agencies had been inconsistent. During this review, we found that the agencies involved in securing radiological sources—DHS, NNSA, and NRC—meet quarterly, along with the FBI, for "trilateral" meetings that include, among other things, discussions of radiological security. However, these meetings did not help DHS, NNSA, and NRC collaborate and draw on each agency's expertise during research, development, and testing of new technology for a mobile source tracking device. Specifically, we found that DHS contracted with Sandia National Laboratories in October 2011 to study commercially available technologies for tracking mobile radiological sources. The cost of the study was $271,000. The study concluded that it is physically possible to tag some radiography and oil well logging devices. However, existing technology, such as GPS—as opposed to newly developed technology—has limitations that would prevent reliable or effective tracking. DHS collaborated with NRC and several DOE national laboratories to develop the study but did not share the results with key NNSA officials who are directly involved in radiological source security. According to DHS officials, they made NNSA aware of the report through their quarterly meetings of senior officials, but NNSA officials with responsibility for securing radiological sources told us that they were not aware of the report until we brought it to their attention during the course of our review. NNSA officials told us that it would have been helpful to have the report earlier. As a result, the officials had to quickly evaluate the report's findings to ensure there were no "show stoppers" that would negatively impact their current activities in the same area of technology development. NNSA is also developing a tracking system for devices containing mobile radiological sources, such as industrial radiography cameras. However, we found that NNSA has not been collaborating with DHS and NRC on the project. For example, NNSA did not reach out to DHS for input regarding tracking technologies, even though DHS had completed a related study in 2011 concerning tracking mobile radiological sources (see above). Regarding NRC, NNSA officials told us that they have no plans to coordinate with the NRC division in charge of regulating and licensing radiological sources—the division that has regulatory authority for radiological security. NNSA officials stated that they would reach out to the NRC technical division that approves and certifies changes in the design of the packaging and transportation of the device. However, the officials noted that coordination would only occur if NNSA determined that recertification of the device is required, which they believed was not likely.
As we have previously found, collaborating agencies should identify the human, information technology, physical, and financial resources needed to initiate or sustain their collaborative effort. The current collaboration mechanism employed by DHS, NNSA, and NRC appears not always to be effective, and it may contribute to missed opportunities to leverage resources, including expertise, in developing new technology to address vulnerabilities associated with radiological sources while in transit. Federal agencies are taking steps to better secure industrial radiological sources in the United States. Nevertheless, we found that licensees still face challenges in securing these sources. NRC is developing a Best Practices Guide to reduce the risks posed by the sources and thus help inform and educate licensees and other stakeholders about measures that could be taken to raise the level of security awareness and improve security. While this is a positive step, NRC has not directly reached out to licensees to obtain their views. Active engagement with key stakeholders is a leading practice on which we and others have reported. Without including the views of licensees, NRC cannot be certain that the guide will be as useful as it could be for those who will be directly affected by the process. NRC requires security controls for radiological sources commensurate with the type and amount of sources that licensees are attempting to protect. However, some well logging licensees do not come under NRC's increased security controls, because they separate their americium-241 into quantities that do not meet NRC's definition of collocation. Because these facilities fall outside of NRC's increased security controls, they do not receive security inspections for the increased controls. As a result, a segment of these sources is potentially at greater risk of theft or loss. In addition, licensees are required to make T&R determinations regarding employee suitability to have unescorted access to high-risk radiological sources. Under NRC's security controls, even if an individual has been indicted or convicted for a violent crime, the licensee is not required to consult with NRC before granting unescorted access to high-risk sources. It is unclear whether two cases where employees were granted unescorted access, even though each had extensive criminal histories—including, in one of the cases, convictions for terroristic threats—represent isolated incidents or a systemic weakness in the T&R process. Without an assessment by NRC, the agency may not have "reasonable assurance" that the process in place to make access decisions is as robust as it needs to be to prevent the theft or diversion of high-risk radiological sources by a determined insider. NRC's security controls are also silent on what, if any, indicators would disqualify an employee from being granted unescorted access. Without more complete information and specific guidance on how to evaluate T&R, licensees could continue to face challenges in making decisions about the suitability of personnel who are granted unescorted access to high-risk radiological sources, potentially increasing the risk of an insider security threat, which NNSA has identified as being responsible for almost all known cases of theft of nuclear and radiological material. As we have reported in the past, it is important for agencies to work collaboratively to achieve greater efficiency.
An interagency mechanism exists to promote collaboration among the agencies responsible for securing radiological sources. However, DHS, NRC, and NNSA have missed the opportunity to leverage resources, including expertise, in developing a new technology to track radiological sources, which could aid in the timely recovery of a lost or stolen radiological source and support the agencies' common mission. We are making four recommendations in this report. To provide reasonable assurance of the security of radiological sources at industrial facilities, we recommend that the Chairman of the Nuclear Regulatory Commission take the following three actions: Obtain the views of key stakeholders, such as licensees, during the development of the Best Practices Guide to ensure that the guide contains the most relevant and useful information on securing the highest-risk radiological sources. Reconsider whether the definition of collocation should be revised for well logging facilities that routinely keep radiological sources in a single storage area but secured in separate storage containers. Conduct an assessment of the T&R process—by which licensees approve employees for unescorted access—to determine if it provides reasonable assurance against insider threats, including determining why criminal history information concerning convictions for terroristic threats was not provided to a licensee during the T&R process to establish if this represents an isolated case or a systemic weakness in the T&R process; and revising, to the extent permitted by law, the T&R process to provide specific guidance to licensees on how to review an employee's background. NRC should also consider whether certain criminal convictions or other indicators should disqualify an employee from T&R or trigger a greater role for NRC. To better leverage resources, including expertise, to address vulnerabilities associated with radiological sources while in transit, we recommend that the Administrator of NNSA, the Chairman of NRC, and the Secretary of DHS review their existing collaboration mechanism for opportunities to enhance collaboration, especially in the development and implementation of new technologies. We provided a draft of this report to the Chairman of the NRC, the Administrator of NNSA, and the Secretary of Homeland Security for review and comment. NNSA and NRC provided written comments on the draft report, which are presented in appendices II and III, respectively. DHS did not provide comments. NRC generally agreed with our four recommendations, and NNSA agreed with the one recommendation directed to it to enhance collaboration with other federal agencies on the development and implementation of new technologies. In its written comments, NNSA also said that it is ready to support NRC efforts with technical expertise and other assistance as required in relation to the recommendations directed toward NRC. NRC and NNSA also provided technical comments that we incorporated as appropriate. In addition, the Organization of Agreement States, which represents the 37 Agreement States responsible for overseeing regulatory compliance for radiological sources, provided technical comments. In its written comments, NRC stated that the security and control of radioactive sources is a top priority and that its regulations provide a framework that requires licensees to develop security programs with measures specifically tailored to their facilities.
NRC also noted that its inspectors have already investigated and taken action on some of our concerns identified in the report regarding the use of industrial sources, and if additional measures are needed, it will consider appropriate enhancements. NRC agreed with our recommendations to (1) obtain the views of stakeholders during development of its Best Practices Guide and (2) enhance collaboration with other federal agencies on the development and implementation of new technologies. NRC also acknowledged the merits of the two other recommendations to reconsider the definition of collocation for well logging facilities and conduct an assessment of the Trustworthiness and Reliability (T&R) process and discussed the actions it plans to take to address them. Regarding these two recommendations, NRC plans to reevaluate these issues as part of its review of the effectiveness of the recently issued security regulations under 10 C.F.R. Part 37. This review is expected to occur 1 to 2 years after the regulations are implemented. According to NRC's comment letter, this review will serve as the basis for determining whether any additional security measures, guidance documents, rulemaking changes, or licensee outreach are appropriate. To that end, NRC stated in its technical comments that it independently evaluated the case we identified of an individual granted unescorted access, even though he had an extensive criminal history and had been convicted for terroristic threats. Based on its initial review, NRC noted that the event was an isolated incident and not a programmatic issue. However, without an assessment of the T&R process, which it has agreed to consider, NRC will not be able to determine the extent to which this case may represent a larger problem or if corrective actions are needed. We recognize that a review of the effectiveness of the recently issued regulations will take time to complete. However, due to the serious nature of the security problems identified in our report, this reevaluation of the definition of collocation and the T&R process should be conducted by NRC with a greater sense of urgency. If NRC follows its current plan to address these recommendations in the time frame outlined in its comment letter, the review will not occur until 1 to 2 years after implementation of 10 C.F.R. Part 37. In the case of the 37 Agreement States, the earliest the review would occur is 1 to 2 years after they issue their own compatible regulations—required by March 2016. The longer it takes for licensees to implement the security upgrades, the greater the risk that potentially dangerous radiological sources remain vulnerable and could be used as terrorist weapons. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Administrator of NNSA, the Chairman of the Nuclear Regulatory Commission, the Secretary of Homeland Security, the appropriate congressional committees, and other interested parties. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions concerning this report, please contact me at (202) 512-3841 or trimbled@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to this report are listed in appendix IV.
We focused our review primarily on the Nuclear Regulatory Commission (NRC) and the Department of Energy's (DOE) National Nuclear Security Administration (NNSA) because they are the principal federal agencies with responsibility for securing radiological material at industrial facilities in the United States. We also performed work at the Department of Homeland Security (DHS) because it is also involved in securing radiological sources, and we interviewed officials with responsibility for radiological security at the Department of Transportation (DOT) and United States Department of Agriculture (USDA). In addition, we interviewed an expert in the field of nuclear security, representatives from state government, and safety and security personnel at U.S. industrial facilities to discuss their views on how radiological sources are secured. We visited 33 industrial facilities in California, Colorado, Hawaii, Pennsylvania, Texas, and Wyoming. These facilities included 15 industrial radiography companies, 6 commercial or sterilization companies, 5 academic research facilities, 3 well logging companies, 2 manufacturing and distribution companies, and 2 USDA facilities. The 33 facilities we visited are a nongeneralizable sample, selected on the basis of whether they were located in NRC states or Agreement States, the amount of curies contained in the devices using radiological sources, and the types of radiological devices. In addition, we considered whether the site had undergone security upgrades funded by NNSA and whether the site was located in a large urban area. At each location, we interviewed facility staff responsible for implementing procedures to secure the radiological sources, including questions about the use of security measures and whether the licensee had made contact with NNSA. We also met with security personnel at sites, when available, and spoke to officials at some local law enforcement agencies responsible for responding to security breaches. We used NNSA's G-2 database, which aggregates data from NRC's National Source Tracking System (NSTS), to identify the location of industrial radiological sources, determine the different types of industrial devices that use radiological sources, and quantify curie amounts for different types of radiological sources. The G-2 data are based on information extracted from the NRC's 2011 NSTS database, the NRC's 2008 Sealed Source Inventory, and NNSA project team visits. G-2 contains all buildings in the United States that have risk-significant radiological sources (> 10 curies). To determine the reliability of these data, we conducted electronic testing and interviewed staff at NNSA and NRC about the data. We tested these data to ensure their completeness and accuracy, and we determined that these data were sufficiently reliable to use in selecting locations to visit and summarizing the total number of facilities and the total number of curies. To evaluate the challenges licensees with industrial radiological sources face in securing these sources, we reviewed laws, regulations, and guidance related to the security of industrial radiological sources. We interviewed agency officials at NRC, NNSA, DHS, DOT, and USDA. We also interviewed state government officials in three states, and safety and security personnel at the 33 industrial facilities we visited in six states, to obtain their views on how radiological sources are secured and what challenges they face in securing them.
To identify thefts and incidents involving radiological sources, we reviewed relevant documentation and spoke to federal and state officials. We also spoke to officials at the 33 industrial facilities we visited in California, Colorado, Hawaii, Pennsylvania, Texas, and Wyoming. At the facilities, we observed the security measures in place and spoke to officials in charge of implementing NRC and Agreement State security controls and overseeing the security measures. To learn what steps federal agencies are taking to ensure radiological sources are secured at industrial facilities, we obtained information from and interviewed agency officials at NRC, NNSA, DOT, DHS, and USDA who are involved in securing sources and undertaking studies evaluating technologies related to source security. We also obtained information from Agreement States and NRC regions by reviewing documentation and interviewing officials at four Agreement States (California, Colorado, Texas, and Washington State) and one NRC regional office (Region IV) with responsibility for overseeing high-risk radiological sources. We selected these states and the NRC region based on the amount of curies and number of devices in the state containing radiological sources and the types of devices used. We also interviewed officials at DOE's Pacific Northwest National Laboratory about the status of efforts made to strengthen remote tracking of mobile devices containing radiological sources. We visited industrial facilities that received NNSA-funded upgrades and security assessments in California, Hawaii, and Pennsylvania. To determine the costs of NNSA's security upgrades for industrial facilities, we obtained cost data from NNSA and interviewed the agency official who manages NNSA's Global Threat Reduction Initiative program. These data were used to determine the number of U.S. industrial facilities that have received NNSA security upgrades, as well as the total cost for completing these upgrades. We discussed the reliability of these data with knowledgeable NNSA officials and questioned them about the system's controls to verify the accuracy and completeness of the data. We also analyzed these data for missing information and obvious outliers. We found the data sufficiently reliable for our reporting purposes. We conducted this performance audit from November 2012 to June 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, Glen Levis (Assistant Director); Jeffrey Barron; Elizabeth Beardsley; Randy Cole; John Delicath; James Espinoza; Karen Keegan; Rebecca Shea; and Kiki Theodoropoulos made key contributions to this report.
In 2012, GAO identified security weaknesses at U.S. medical facilities that use high-risk radiological sources, such as cesium-137. This report addresses potential security risks with such sources in the industrial sector. Radioactive material is typically sealed in a metal capsule called a sealed source. In the hands of a terrorist, this radioactive material could be used to construct a "dirty bomb." NRC is responsible for licensing and regulating the commercial use of radiological sources.
NNSA provides voluntary security upgrades to facilities with such sources. GAO was asked to review the security of sources at industrial facilities. This report examines (1) the challenges in reducing security risks posed by industrial radiological sources and (2) the steps federal agencies are taking to improve security of the sources. GAO reviewed relevant laws, regulations, and guidance; interviewed federal agency and state officials; and visited 33 of about 1,400 U.S. industrial facilities selected based on, among other things, geographic location and type of device using the radiological source. GAO found that challenges exist in reducing the security risks faced by licensees using high-risk industrial radiological sources. Specifically, licensees face challenges in (1) securing mobile and stationary sources and (2) protecting against an insider threat. Regarding mobile sources, their portability makes them susceptible to theft or loss, as the size of some of these sources is small enough for them to be easily concealed. The most common mobile source is contained in a device called a radiography camera. GAO identified four incidents from 2006 to 2012 where such cameras that use high-risk sources to test pipeline welds were stolen. These thefts occurred even though the Nuclear Regulatory Commission (NRC) has established increased security controls. Licensees also face challenges in determining which employees are suitable for trustworthiness and reliability (T&R) certification to have unescorted access to high-risk radiological sources. GAO found two cases where employees were granted unescorted access, even though each had extensive criminal histories, and one had been convicted for terroristic threats, which include a range of violent threats. In this case, NRC said that the person was convicted not of a threat against the United States, but of making violent verbal threats against two individuals. It is unclear whether these cases represent isolated incidents or a systemic weakness in the T&R process established by NRC. Without an assessment of the process, NRC may not have reasonable assurance that access decisions made by licensees can prevent threats to high-risk radiological sources, particularly by a determined insider. Federal agencies responsible for securing radiological sources—including NRC, the National Nuclear Security Administration (NNSA), and the Department of Homeland Security (DHS)—have taken steps to improve the security of industrial radiological sources. For example, NRC is developing a best practices guide that is expected to provide licensees with practical information about how to secure their sources. Also, NNSA is developing new technology that would, if successful, improve tracking of radiological sources while in transit. However, GAO found that although the agencies have been meeting quarterly to discuss, among other things, radiological security, this mechanism did not always help them collaborate and draw on each agency's expertise during research, development, and testing of a new technology for a mobile source tracking device. By not collaborating consistently, the agencies have missed opportunities to leverage resources and expertise in developing this new technology to track radiological sources. This technology could aid in the timely recovery of a lost or stolen radiological source and support the agencies' common mission. 
As GAO has previously reported, when responsibilities cut across more than one federal agency—as they do for securing industrial radiological sources—it is important for agencies to work collaboratively to deliver results more efficiently and in a way that is consistent with the federal government's multiple demands and limited resources. GAO recommends, among other things, that NRC assess the T&R process to determine if it provides reasonable assurance against insider threats. In addition, GAO recommends that NNSA, NRC, and DHS review their collaboration mechanism for opportunities to enhance it, especially in the development of new technologies. NRC generally agreed with GAO's recommendations, and NNSA agreed with the one recommendation directed to it. DHS did not comment on the report. |
The intent of the Goldwater-Nichols Reorganization Act of 1986 was, in part, to reorganize DOD into a more unified military structure. Within that act, Congress included several provisions that specifically address the education of officers in joint matters, their assignment to joint organizations, and the promotion of officers serving in joint positions. The act also established a joint specialty officer designation for officers who are specifically trained in and oriented toward joint matters. Although the act contains a number of specific requirements, Congress also provided DOD with flexibility in meeting the requirements by granting it waiver authority when it can demonstrate justification. DOD approves waivers on a case-specific basis. These waivers apply to a number of the provisions, including (1) the methods for designating joint specialty officers, (2) the posteducation assignments for joint specialty officers, (3) the assignment of joint specialty officers to critical joint duty positions, and (4) the promotions of officers to the general and flag officer pay grades. Moreover, Congress has issued follow-on reports and made changes to the law in subsequent legislation. For example, a congressional panel on military education issued a report in April 1989 that contained numerous recommendations regarding joint professional military education. Among other things, this panel recommended that the services’ professional military education schools teach both service and joint matters and that the student body and faculty at each of the service schools include officers from the other services. DOD has implemented these recommendations. Most recently, Congress amended the law regarding the promotion criteria for officers being considered for promotion to the general and flag officer pay grades. The Goldwater-Nichols Act established a requirement that officers must have served in a joint position prior to being selected for these promotions. The amendment, contained in the National Defense Authorization Act for Fiscal Year 2002, will require most officers being considered for appointment to this grade after September 30, 2007, to complete the joint education program as well. DOD uses a number of multiservice and multinational commands and organizations to plan and support joint matters. Since passage of the Goldwater-Nichols Act, officers serving in these commands and organizations have overseen a number of joint and multinational military operations that range from humanitarian assistance and peacekeeping to major operations such as Operation Desert Storm and ongoing operations in Afghanistan. The number of joint positions in these organizations has ranged from a low of 8,217 positions in fiscal year 1988 to a high of 9,371 positions in fiscal year 1998. Changing missions and reorganizations have contributed to this variation. In fiscal year 2001, DOD had a total of 9,146 joint positions. Of these positions, 3,400 positions were allocated to the Air Force; 3,170 positions were allocated to the Army; 2,004 positions were allocated to the Navy; and 572 positions were allocated to the Marine Corps. Figure 1 shows that the Air Force had the largest percentage, followed by the Army, the Navy, and the Marine Corps. 
Officers in pay grades O-4 (majors in the Air Force, Army, and Marine Corps and lieutenant commanders in the Navy) and above can receive credit for joint experience when they serve in the Joint Staff, joint geographic and functional commands, combined forces commands, and defense agencies. In addition, the Secretary of Defense has authority to award joint credit to officers for serving in certain joint task force headquarters staffs. DOD has developed a joint duty assignment list that includes all of the active duty positions in pay grades O-4 and above in the multiservice organizations that are involved in or support the integrated employment of the armed forces. DOD’s policy places limits on the number of positions in the defense agencies and other jointly staffed activities that can be included on the list. The list of joint organizations and demographic descriptions of the officers serving in those organizations are provided in appendix II. The Assistant Secretary of Defense for Force Management Policy, under the Office of the Under Secretary of Defense for Personnel and Readiness, has overall responsibility for the policies and procedures governing DOD’s joint officer management program. Among other things, the Assistant Secretary is responsible for reviewing joint professional military education initiatives, approving the list of joint duty assignments, reviewing the promotion and appointment of joint specialty officers and other officers who are serving or have served in joint duty positions, and acting on requests to waive DOD joint officer management requirements. The Chairman of the Joint Chiefs of Staff has responsibility, among other things, for implementing DOD’s policies governing joint officer management and for making recommendations to the Assistant Secretary. The service secretaries are responsible for, among other things, supporting DOD policy and for ensuring the qualifications of officers assigned to joint duty positions. These responsibilities are delineated in DOD’s Joint Officer Management Program Directive 1300.19, issued on September 9, 1997. DOD has taken positive steps to implement the provisions of the Goldwater-Nichols Act that address the education of officers in joint matters, officers’ assignments to joint organizations, and the promotion of officers who are serving or who have served in joint positions. In certain cases, DOD has met or surpassed the act’s objectives. However, DOD has also relied on waivers allowable under the law to comply with the provisions. In addition, DOD has experienced difficulties in implementing some of its programs and policies that address joint officer development. Because of these difficulties, DOD cannot be assured that it is preparing officers in the most effective manner to serve in joint organizations and leadership positions. One of the provisions in the Goldwater-Nichols Act requires DOD to develop officers, in part, through education in joint matters. Accordingly, DOD defined joint education requirements in terms of a two-phased program in joint matters. It incorporated the first phase of the program into the curricula of the services’ intermediate- and senior-level professional military education schools. DOD offers the second phase of the program at the National Defense University’s Joint Forces Staff College in Norfolk, Virginia. This phase is designed to provide officers with the opportunity to study in a truly joint environment and to apply the knowledge they gained during the first phase of their joint education. 
DOD also offers a combined program that includes both phases at the National Defense University's National War College and Industrial College of the Armed Forces in Washington, D.C. The Secretary of Defense is required to educate sufficient numbers of officers so that approximately one-half of the joint positions are filled at any time by officers who have either successfully completed the joint professional education program or received an allowable waiver to complete the education after their assignment. The act, however, did not identify a specific numerical requirement and, similarly, DOD has not established numerical goals concerning the number of officers who should complete joint professional military education. In the most effective model, officers would complete the first phase of joint education in an in-resident or nonresident program through one of the services' professional military education schools. The in-resident programs are a full academic year in length; officers completing the curricula in nonresident programs will often do this over several years, given that they are completing their education on a part-time basis in addition to their normal duties. Upon completion of the first phase, officers would attend the second phase of the program at the Joint Forces Staff College. The Joint Forces Staff College offers the second phase three times during the year and, by law, this phase may not be less than 3 months. Upon graduation from the second phase, officers would be assigned to a joint position. According to DOD data, only one-third of the officers serving in joint positions in fiscal year 2001 had received both phases of the joint education program. This is due, in large part, to space and facility limitations at the National Defense University schools that provide the second phase. Although DOD assigns approximately 3,000 active duty officers to joint positions each year, the three schools, collectively, have about 1,200 seats available for active duty officers. Furthermore, the Joint Forces Staff College, from which most officers receive the second phase, is currently operating at 83 percent of its 906-seat capacity. Moreover, the number of unfilled seats at the Joint Forces Staff College has risen significantly in recent years, from a low of 12 empty seats in fiscal year 1998 to a high of 154 empty seats in fiscal year 2001. DOD officials cited pressing needs to assign officers to the increasing number of military operations as a major reason for these vacancies. A Joint Staff officer responsible for joint education expressed concern about the services' ability to fill seats in the future due to the ongoing war on terrorism. Logistics, timing, and budget issues are also making it difficult for officers to attend the second phase of the joint education program. The Joint Forces Staff College can only accommodate approximately 300 students in each 3-month term and does not have the space to receive all of the service professional military education school graduates at the same time. Given that, officers can report to their joint position after completing the first phase and subsequently attend the second phase on a temporary duty basis at some point during their assignment. However, officers and senior leaders at the sites we visited told us that their joint commands cannot afford a 3-month gap in a position due to pressing schedules and workload demands. Officers at the U.S. Forces in Korea posed a slightly different problem.
Given its remote location, officers typically serve in Korea for only 1-2 years. That command cannot afford to send someone serving in a 1-year billet away for 3 months. In addition to logistics and timing issues, related budget issues exist. When an officer attends the second phase en route to a joint command, the officer’s service pays the expenses associated with sending the officer to the Joint Forces Staff College. When the officer attends the program midtour, the joint organization pays the expenses. Officers serving on the Joint Staff told us that a former Chairman of the Joint Chiefs of Staff had instituted a policy that the Joint Staff would not send officers to the Joint Forces Staff College—or to any other training lasting more than 30 days—after they reported to the Joint Staff for duty. DOD officials confirmed this and explained that the former chairman understood the budget implications and, believing in the importance of joint education, instituted his policy with the expectation that the services would send their officers to the second phase of the education before sending them to their Joint Staff assignments. DOD officials acknowledged, however, that unintended consequences resulted from this policy. The services still are not sending their officers to the second phase before they assign them to the Joint Staff. Officers we interviewed suggested that alternatives should be considered for delivering the second phase of DOD’s joint education program. For example, some officers believed that the course should be shortened while others thought that it should be integrated into the first phase of the program that is offered in the services’ professional military education schools. However, to shorten the principal course of instruction at the Joint Forces Staff College, which delivers the second phase, would require a change in the law. In addition, considerable variation exists among the services in terms of the number of officers each service sends to the Joint Forces Staff College. The Chairman of the Joint Chiefs of Staff has directed that the seats at the Joint Forces Staff College be allocated among the services in accordance with the distribution of service positions on the joint duty assignment list. The percentage of seats reserved for each service at the school does, in fact, reflect the distribution on the list. However, while the Air Force filled almost 98 percent of its allocated seats in academic year 2001, the Navy filled only 67 percent of its seats. Moreover, vacancy rates for the Army and the Navy have, for the most part, increased between academic years 1996 and 2001. Table 1 shows seats filled and vacancy rates, by service, at the school for academic years 1996 through 2001. Table 1 also shows that the allocation of seats has been constant for the last 3 years. The officers we spoke with told us that they see the importance of completing the first phase of the joint professional military education program perhaps because, in most services, there is a clear correlation between completion of the first phase and promotion potential. In the Army and the Air Force, completion of the first phase has become a prerequisite for promotion to lieutenant colonel, if not by directive, then at least in practice. In all services, completion of the first phase, whether or not it is an absolute requirement, is looked upon favorably, at the very least, for promotion purposes. 
The officers we surveyed provided mixed responses when we asked them about their observations of the second phase of the program at the Joint Forces Staff College. Of the 184 officers in our survey who had completed the second phase of the program, 11 percent responded that attending the second phase was important to a very great extent, 33 percent responded that attending the second phase was important to a great extent, and 33 percent responded that attending the second phase was important to a moderate extent. About 24 percent of the officers who had completed the second phase responded that attending the second phase was important to a little or no extent. In focus group discussions, these officers said that the program is too long, redundant with the first phase of joint education, and of little added value. Some of these officers also said that the second phase of the program only had value for officers who were interested in being appointed to the general and flag officer grades in their future. Officers from all the services and pay grades in our focus groups agreed that, if an officer were to attend the second phase at all, the officer should attend it en route, before reporting to a joint position. Overall, officers at the commands we visited reported that they were adequately prepared for their joint position but often cited a steep learning curve involved with working in their particular joint organization. Officers in over one-half of the focus groups we conducted said that they were most prepared for their joint positions because (1) they were serving in joint positions that drew upon their tactical-level primary military occupation skills; (2) their military occupation, by nature, was oriented toward joint matters (e.g., communications, intelligence, special operations, foreign affairs); (3) they had previously served in a joint or staff position; or (4) they had attended both phases of the joint education program. Officers who responded that they were least prepared said that they were serving in joint positions unrelated to their military occupations or that they lacked familiarity with joint structures or organization, systems, and processes. General and flag officers with whom we spoke also provided mixed responses. While the senior officers talked about the strengths and importance of the joint education, some senior officers told us that they did not check the records of the officers serving under them to see whether the officers had attended the second phase of the joint professional military education program and that they did not view this lack of education as an issue. The act contains a number of provisions affecting the assignment of officers to joint positions. These provisions include (1) the percentage of graduates of the National Defense University schools who must be assigned to joint duty, (2) the number of joint critical positions that must be filled by designated joint specialty officers, and (3) the percentage of positions on the joint duty assignment list that must be filled by joint specialty officers or joint specialty officer nominees. The Goldwater-Nichols Act established specific requirements for DOD to assign officers who attended a joint professional military education school to joint positions after graduation. Placement of these graduates in joint positions was intended to help DOD realize the full benefit of education provided by all three joint colleges.
First, DOD must send more than 50 percent of the officers who are not joint specialists to a joint position upon graduation from a joint professional military education school. Table 2 shows that DOD has exceeded this requirement since fiscal year 1996. Second, DOD must assign all joint specialty officers who graduate from joint professional military education schools, including the Industrial College of the Armed Forces and the National War College, to joint positions upon graduation unless a waiver is granted. Table 3 shows that 140 joint specialty officers graduated from one of these schools in the past 6 years and that DOD did not place 35 officers, or 25 percent, into joint positions. DOD officials explained that the primary reason that these officers were given allowable waivers was that they had received orders to command assignments within their own service. The Goldwater-Nichols Act, as amended, further requires DOD to designate at least 800 joint positions as critical joint duty positions—positions where the duties and responsibilities are such that it is highly important that officers assigned to the positions are particularly trained in, and oriented toward, joint matters. DOD has met this requirement and has designated 808 positions as critical joint duty positions. However, DOD is also required to place only joint specialty officers in these positions unless the Secretary exercises his waiver authority. DOD has increasingly used its waiver authority to meet this requirement. The percentage of critical joint duty positions that were filled by officers other than joint specialty officers has steadily increased from 9 percent in fiscal year 1996 to 38 percent in fiscal year 2001. In fiscal year 2001, DOD was not able to fill 311 of its critical joint duty positions with joint specialty officers. In addition, DOD has left other critical joint duty positions vacant. The percentage of unfilled critical joint duty positions has steadily increased from 8 percent in fiscal year 1989 to 22 percent in fiscal year 2001. Therefore, only 331 positions, or 41 percent, of the 808 critical joint duty positions were filled by joint specialty officers in fiscal year 2001. Figure 2 shows the distribution of vacant and filled critical joint duty positions by joint specialty officers and non-joint specialty officers during fiscal years 1989 through 2001. The services fill these critical joint positions with officers who have the joint specialty designation as well as the appropriate primary military skill, any additional required skills, and pay grade. However, when (1) no joint specialty officer with the other requisite skills is available for assignment (e.g., pay grade and military occupation) or (2) the best-qualified candidate is not a joint specialty officer, a waiver must be approved to fill the position with an otherwise qualified officer. Service and Joint Staff officials explained that DOD's inability to fill a critical position with a joint specialty officer may be due to the fact that the critical joint duty position description may not reflect the commander's needs at the time the position is filled. These officials told us that the most frequently cited reason for requesting an allowable waiver was that the commander believed that the best-qualified officer for the position was not a joint specialty officer. In addition, DOD's population of joint specialty officers may not be sufficient to meet this requirement.
By fiscal year 1990, DOD had designated just over 12,000 officers, who already had the joint education and experience, as joint specialty officers. However, DOD experienced a 56 percent decrease in its joint specialty officers between fiscal years 1990 and 1997 and has experienced moderate decreases in fiscal years 2000 and 2001. Officials on the Joint Staff attributed the decreases in the early years to the fact that the attrition of officers who received the designation in fiscal year 1990 has exceeded the number of new designations of joint specialty officers. DOD officials also projected that they would need to designate approximately 800 new joint specialty officers each year to maintain its current population. Since fiscal year 1990, however, DOD has only met this projection in 3 of the last 4 fiscal years. Figure 3 shows the number of new designations of joint specialty officers each year and the total number of joint specialty officers for fiscal years 1990 through 2001. Officials told us that DOD has been selective in nominating and designating officers for the joint specialty because of the promotion objectives specified in the law. Officials noted that as a result, the population of joint specialty officers has been small. The act requires the services to promote joint specialty officers, as a group, at a rate not less than the rate of officers being promoted who are serving on, or have served on, the headquarters staff of their service. This higher promotion standard is applied to joint specialty officers from the time they receive the joint specialty designation until they are considered for or promoted to pay grade O-6. DOD sought relief from this provision and, in December 2001, Congress reduced the standard for 3 years. During this 3-year period, the services are to promote joint specialty officers at a rate not less than the promotion rates of all other officers being promoted from the same military service, pay grade, and competitive category. Currently, about 2,700 officers meet the joint specialty officer qualifications but have not been designated, and DOD, given this change in the law, is in the process of designating these officers. Once they are designated, DOD will have a population of about 7,600 joint specialty officers. The act also requires DOD to fill approximately 50 percent of all of the joint positions on the joint duty assignment list either with fully qualified joint specialty officers or with officers who have been nominated for that designation. Although the act does not establish specific numerical requirements, it does require that the number should be large enough so that approximately one-half of the joint positions in pay grades O-4 and above will be filled by officers who are joint specialty officers or nominees who meet certain requirements. Because the act does not require DOD to report these data to Congress and DOD has not maintained historical data on the percentage of joint positions filled by either fully qualified joint specialty officers or joint specialty officer nominees, we were not able to measure progress. Nevertheless, we did ask DOD to provide us with data for a point in time. Table 4 shows that more than 70 percent of the officers who served in joint positions in July 2002 were joint specialty officers or nominees. 
We note, however, that DOD met this requirement by relying heavily on joint specialty officer nominees who filled more than 80 percent of the positions being filled by joint specialty officers or joint specialty officer nominees. This ranged from 79 percent in the Army to 87 percent in the Marine Corps. Comparable figures for the Air Force and the Navy are 83 percent and 84 percent, respectively. The Goldwater-Nichols Act established promotion requirements and objectives for officers being selected for appointment to the general or flag officer pay grade and for mid-grade officers who are serving or have served in joint positions. The Goldwater-Nichols Act set a requirement that officers must complete a full tour of duty in a joint duty assignment, or receive a waiver, prior to being selected for appointment to the general or flag officer pay grade. The Secretary of Defense may waive the requirement for (1) officers when the selection is necessary for the good of the service; (2) officers with scientific and technical qualifications for which joint requirements do not exist; (3) medical officers, dental officers, veterinary officers, medical service officers, nurses, biomedical science officers, chaplains, or judge advocates; (4) officers who had served at least 180 days in a joint assignment at the time the selection board convened and the officer’s total consecutive service in joint duty positions within that immediate organization is not less than 2 years; and (5) officers who served in a joint assignment prior to 1987 that involved significant duration of not less than 12 months. As of fiscal year 2001, DOD has been promoting more officers who had the requisite joint experience to the general and flag officer pay grades than it did in fiscal year 1995. In fiscal year 2001, however, DOD still relied on allowable waivers in lieu of joint experience to promote one in four officers to these senior pay grades. Figure 4 shows that the percentage of officers who were selected for promotion to the general and flag officer pay grades, and who had previous joint experience, rose from 51 percent in fiscal year 1995 to 80 percent in fiscal year 1999. Conversely, DOD’s reliance on waivers decreased from 49 percent in fiscal year 1995 to 20 percent in fiscal year 1999. Figure 4 also shows, however, that DOD experienced slight increases in its use of promotion waivers in fiscal years 2000 and 2001. DOD’s reliance on good-of-the-service waivers, in particular, to promote officers who had not previously served in joint positions is one indicator of how DOD is promoting its senior leadership. The service secretaries request use of this waiver authority when they believe they have sound justification for promoting an officer who (1) has not completed a full tour of duty in a joint position and (2) does not qualify for promotion through one of the other four specific waivers. We analyzed the extent to which DOD has relied on this waiver category to promote its senior officers because these waivers apply most directly to the population of general and flag officers who are likely to be assigned to senior leadership positions in the joint organizations. The Secretary of Defense has also paid particular attention to this waiver category and, in 2000, established a policy that restricts the use of good-of-the-service waivers to 10 percent of total promotions to the general and flag officer pay grades each year. 
DOD approved 185 good-of-the-service waivers, representing 11 percent of the 1,658 promotions to the general and flag officer pay grades, between fiscal years 1989 and 2001. Specifically, DOD approved 10 or more good-of-the-service waivers each year between fiscal years 1989 and 1998 and only 3 to 7 waivers in fiscal years 1999 through 2001. DOD relied most heavily on good-of-the-service waivers in fiscal year 1995, when it approved 25 waivers, and used them on a decreasing basis between fiscal years 1995 and 1999. In fiscal year 1999, DOD approved just 3 good-of-the-service waivers. In the 2 years since the Secretary of Defense issued limitations on the use of these waivers, DOD has used them in about 5 percent of its promotions. Figure 5 shows the extent to which DOD has used good-of-the-service waivers between fiscal years 1989 and 2001. For most appointments to the general and flag level made after September 30, 2007, officers will have to meet the requirements expected of a joint specialty officer. This means that most officers, in addition to completing a full tour of duty in a joint position, will also have to complete DOD's joint education program. Our analysis of the 124 officers promoted in fiscal year 2001 showed that 58 officers, or 47 percent, had not fulfilled the joint specialty officer requirements. These 58 officers included 18 of 43 officers promoted in the Air Force, 18 of 40 officers promoted in the Army, 19 of 33 officers promoted in the Navy, and 3 of the 8 officers promoted in the Marine Corps. The Goldwater-Nichols Act also established promotion policy objectives for officers serving in pay grades O-4 and above who (1) are serving on or have served on the Joint Staff, (2) are designated as joint specialty officers, and (3) are serving in or have served in other joint positions. DOD has been most successful in achieving its promotion objectives for officers assigned to the Joint Staff, but it has made less significant progress in achieving the promotion objectives for officers in the other two categories. (Appendix III provides detailed promotion data.) DOD has been most successful in meeting the promotion objective set for officers assigned to the Joint Staff. The act established an expectation that officers who are serving or have served on the Joint Staff be promoted, as a group, at a rate not less than the rate of officers who are serving or have served in their service headquarters. Between fiscal years 1988 and 1994, DOD met its promotion objectives for officers assigned to the Joint Staff in 43 out of 68 promotion groups, or 63 percent of the time. Between fiscal years 1995 and 2001, DOD met this objective in 55 out of 60 promotion groups, or 92 percent of the time. DOD has also made improvements in meeting its promotion objective for joint specialty officers. The act established an expectation that joint specialty officers, as a group, be promoted at a rate not less than the rate of officers who are serving or have served in their service headquarters. Between fiscal years 1988 and 1994, DOD met this promotion objective in 26 of 52 promotion groups, or 50 percent of the time. Between fiscal years 1995 and 2001, DOD met the promotion objective in 37 out of 50 promotion groups, or 74 percent of the time. The instances in which DOD did not meet its promotion objective were somewhat random, and we were not able to attribute problem areas to specific pay grades or services.
As we noted earlier, this standard has been temporarily reduced, and, through December 2004, DOD is required to promote joint specialty officers, as a group, at a rate not less than the rate for other officers in the same service, pay grade, and competitive category. We also compared the promotion rates of joint specialty officers against this lower standard and found that, with few exceptions, DOD would have met this standard between fiscal years 1988 and 2001. DOD has made less significant improvement in meeting its promotion objective for officers assigned to other joint organizations. The act established an expectation that officers who are serving or have served in joint positions be promoted, as a group, at a rate not less than the rate for all officers in their service. Between fiscal years 1988 and 1994, DOD met its promotion objective in 41 out of 82 promotion groups, or 50 percent of the time. Between fiscal years 1995 and 2001, DOD met this objective in 60 out of 84 promotion groups, or 71 percent of the time. With few exceptions during the last 7 years, all services are meeting the promotion objective for their officers being promoted to the O-5 pay grade who are assigned to the other joint organizations. However, the services have had significant difficulty meeting the promotion objectives for their officers being promoted to the O-6 pay grade. For example, the Navy has failed to meet this objective for its O-6 officers since fiscal year 1988, and the Army has only met this promotion objective twice—in fiscal years 1995 and 2001—since fiscal year 1988. The Air Force has generally met this objective for its officers at the O-6 pay grade, but it has not met this objective in the past 4 years. Conversely, the Marine Corps had difficulty in meeting this promotion objective for its officers at the O-6 pay grade between fiscal years 1988 and 1994, but it met this objective in every year until fiscal year 2001. A significant impediment to DOD's ability to fully realize the cultural change envisioned by the act is that DOD has not taken a strategic approach to developing officers in joint matters. For example, DOD has not identified how many joint specialty officers it needs, and the four services have emphasized joint officer development to varying degrees. In addition, DOD has not yet, within a total force concept, fully addressed how it will provide joint development to reserve officers who are serving in joint organizations—despite the fact that it is increasingly relying on reservists to carry out its mission. Moreover, DOD has not been tracking certain data in a consistent manner that would help it measure its progress toward meeting the act's overall objectives and its own goals. DOD has issued a number of publications, directives, and policy papers regarding joint officer development. However, it has not developed a strategic plan that establishes clear goals for officer development in joint matters and links those goals to DOD's overall mission and goals. This lack of an overarching vision or strategy will continue to hamper DOD's progress in this area. A well-developed human capital strategy would provide a means for aligning all elements of DOD's human capital management, including joint officer development, with its broader organizational objectives.
Professional military education and joint assignments are tools that an organization can use to shape its officer workforce, fill gaps, and meet future requirements. In prior reports and testimony, we identified strategic human capital management planning as a governmentwide high-risk area and a key area of challenge. We stated that agencies, including DOD, need to develop integrated human capital strategies that support the organizations' strategic and programmatic goals. In March 2002, we issued an exposure draft of our model for strategic human capital management to help federal agency leaders effectively lead and manage their people. We also testified on how strategic human capital management can contribute to transforming the cultures of federal agencies. Several DOD studies have also identified the need for a more strategic approach to human capital planning within DOD. The 8th Quadrennial Review of Military Compensation, completed in 1997, strongly advocated that DOD adopt a strategic human capital planning approach. The review found that DOD lacked an institutionwide process for systematically examining human capital needs or translating needs into a coherent strategy. Subsequent DOD and service studies, including the Defense Science Board Task Force on Human Resources Strategy and the Naval Personnel Task Force, endorsed the concept of human capital strategic planning. DOD's Joint Vision 2020 portrays a future in which the armed forces are "fully joint: intellectually, operationally, organizationally, doctrinally, and technically." Exploiting emerging technologies and responding to diverse threats and new enemy capabilities require increasingly agile, flexible, and responsive organizations. The vision requires the services to reexamine traditional criteria governing span of control and organizational layers; to develop organizational climates that reward critical thinking, encourage competition of ideas, and reduce barriers to innovation; to develop empowered individual warfighters; and to generate and reinforce specific behaviors such as judgment, creativity, adaptability, initiative, teamwork, commitment, and innovative strategic and operational thinking. The Goldwater-Nichols Act not only defined new duty positions and educational requirements but also envisioned a new culture that is truly oriented toward joint matters. The key question today is how DOD can best seize the opportunity and build on current momentum. In April 2002, the Office of the Secretary of Defense issued the Military Personnel Human Resource Strategic Plan to establish the military priorities for the next several years. The new military personnel strategy captures the DOD leadership's guidance regarding aspects of managing human capital, but the strategy's linkage to the overall mission and programmatic goals is not stated. DOD's human capital strategy does not address the vision cited in Joint Vision 2020. DOD's human capital approach to joint officer development—if it were linked to its overall mission—would emphasize individuals with the knowledge, skills, and abilities needed to function in the joint environment. DOD has not fully assessed how many joint specialty officers it actually needs. As we have previously shown, the number of joint specialty officers has decreased by almost 60 percent over the years, and DOD has a significant backlog of officers who, although otherwise qualified, have not been designated as joint specialty officers.
Moreover, without knowing how many joint specialty officers it needs, DOD cannot be sure that its joint professional military education system is structured or targeted properly. For example, without first defining how many officers should be joint specialty officers—all officers, most officers, or only those needed to fill joint positions—DOD has not been able to determine the number of joint professional military education graduates it needs. Although we have already noted that there are many vacant seats at the Joint Forces Staff College, DOD does not know if the total number of available seats is sufficient to meet its needs or if it will need to explore alternatives for providing joint education to greater numbers of officers. Furthermore, comments from officers we surveyed at various commands demonstrate that they place different values on the importance of the joint specialty designation. Overall, officers told us that they viewed their assignment to a joint position as a positive experience and that their services also saw joint assignments as valuable career moves. Moreover, 51 percent of the officers surveyed responded that an assignment to a joint position is a defined aspect of their career path. Responses ranged from 57 percent in the Air Force to 52 percent in the Army, 47 percent in the Navy, and 29 percent in the Marine Corps. However, many officers also told us that they were reluctant to seek the joint specialty designation. Their concern was that they would be flagged as joint specialty officers and, accordingly, be reassigned to subsequent tours of duty within joint organizations. They were concerned about the need to balance the requirements of already crowded service career paths and the expectation to serve in joint organizations. Their ultimate concern was that multiple joint assignments would take them away from service assignments for too great a period and that this time away could adversely affect their career progression and promotion potential. The officers responded that the joint specialty officer designation was not really important for the rank and file, but only for those who were going to be admirals and generals. In other words, these officers believed that the need to meet service expectations seemed to override any advantages that the joint specialty officer designation might provide. Our survey and more detailed responses to that survey are presented in appendix IV. Each of the four services has been assigning officers in pay grades O-4 through O-6 to joint organizations and, as of fiscal year 2002, about 50 percent of the services' mid-level officers had served in at least one joint assignment. The percentage of officers who served in a joint position ranged from 46 percent in the Navy and the Marine Corps to 52 percent and 57 percent in the Air Force and the Army, respectively. However, data—including some that we have already presented—suggest that the four services continue to struggle to balance joint requirements against their own service needs and vary in the degree of importance that they place on joint education, assignments, and promotions. The Air Force, for example, filled 16 seats more than its 1,983 allocated seats at the Joint Forces Staff College between fiscal years 1996 and 2001. During that 6-year period, the Air Force actually surpassed its collective allocation by 1 percent. The Marine Corps left 13 of its 316 allocated seats, or 4 percent, unfilled during those same fiscal years.
Also during that time period, the Army left 192 of 1,760 seats, or 11 percent, unfilled, and the Navy left 193 of 1,288 allocated seats, or 15 percent, unfilled. Accordingly, the Air Force has been able to send a higher percentage of its officers to a joint position after the officers attend a joint professional military education school. In fiscal year 2001, for example, 44 percent of Air Force officers serving in joint positions had previously attended a joint professional military education school. In contrast, 38 percent of Army officers and 33 percent of Navy and Marine Corps officers serving in joint positions had attended a joint professional military education school prior to their joint assignments. This difference can be largely attributed to the fact that the Air Force sends a higher percentage of its officers at the O-4 pay grade to the Joint Forces Staff College. Promotion statistics also suggest differences among the services. As we noted earlier, the Navy did not meet the pay grade O-6 promotion objective for officers serving in joint organizations other than the Joint Staff who are not joint specialty officers between fiscal years 1988 and 2001. The Army met this objective 2 times, the Marine Corps met it 6 times, and the Air Force met it 10 times in the 14-year period. Our analysis of general and flag officer promotions showed that, between fiscal years 1995 and 2000, the Marine Corps used good-of-the-service waivers to promote 19 percent of its officers to brigadier general. The Army used this waiver authority for 17 percent of its promotions, and the Navy used the authority for 13 percent of its promotions. In contrast, the Air Force approved only one good-of-the-service waiver during that time period. The Goldwater-Nichols Act states that the Secretary of Defense should establish personnel policies for reserve officers that emphasize education and experience in joint matters. A recent congressionally sponsored study concluded, however, that DOD has not yet met this requirement and that DOD's reserve components lack procedures to identify and track positions that will provide reserve officers with the knowledge and experience that come from working with other services and from joint operations. Providing education in joint matters to reservists has become increasingly important since 1986, given that DOD has increasingly relied on reservists in the conduct of its mission. When the act was enacted, reservists were viewed primarily as an expansion force that would supplement active forces during a major war. Since then, the Cold War has ended and a shift has occurred in the way DOD uses the reserve forces. Today, no significant military operation can be conducted without reserve involvement. In addition, the current mobilization for the war on terrorism is adding to this increased use and is expected to last a long time. A few of the officers who attended our focus groups were, in fact, reservists serving on active duty in joint commands. We excluded their responses, however, since the educational and experience requirements for joint officers do not directly apply to reserve officers and, as indicated above, the Secretary of Defense has not yet issued personnel policies emphasizing education and experience in joint matters for reserve officers as required by the Goldwater-Nichols Act. Nevertheless, many of the active duty officers we spoke with raised the issue of providing education to reservists.
We interviewed officers at several joint organizations and found that reservists are serving in positions at all levels, from the Chief of Staff at one command down to mid-grade officer positions. Moreover, DOD has identified 2,904 additional positions that it will fill with reservists when it operates under mobilized conditions. All of this suggests that reservists can be assigned to joint positions without the benefit of joint education. In 1995, the Office of the DOD Inspector General recommended that DOD develop policy guidance that provides for the necessary training and education of reserve component officers assigned to joint organizations. The Under Secretary of Defense for Personnel and Readiness concurred with this recommendation. In 1997, we reported that DOD officials noted that many details needed to be resolved. For example, they said that, since reservists typically perform duties on an intermittent or part-time basis, it is difficult for reservists to find the time to attend the 3-month second phase of the joint education program. Reservists also cannot be readily assigned to locations outside of their reserve unit area, thus limiting their availability for joint education. Another concern raised by a DOD official was that if the education and experience requirements for reservists are too stringent, the available pool of reservists who can meet them will be limited, thereby denying joint duty assignments to many highly qualified personnel. During our review, officials on the Joint Staff told us that DOD recently completed a pilot program that considered alternatives for providing joint education to reservists. DOD officials anticipate that they will be able to deliver joint education to reservists through distance learning beginning in fiscal year 2004. DOD has a wealth of information to support its implementation of provisions in the Goldwater-Nichols Act, and it has been collecting data and submitting annual reports to Congress in accordance with the act's reporting requirements. However, in cases where the act does not require DOD to report data, DOD has not tracked meaningful information that it needs in order to fully assess its progress. For example, DOD has not kept historical data on the number of positions in joint organizations that are filled with joint specialty officers and joint specialty officer nominees. Without trend data, DOD and others cannot assess the degree to which DOD is properly targeting its joint education program or foresee problematic trends as they arise. Also, when we attempted to identify the number of officers who have completed both phases of the joint education program, DOD officials told us that they did not have fully reliable data because the services do not consistently maintain and enter such information into their databases. Furthermore, DOD does not track the degree to which reservists are filling joint positions. Given that DOD plans to offer joint education to reservists and that reservists are serving in joint positions, tracking this type of data would help DOD identify reservists who have joint education and experience during mobilizations. Effective organizations link human capital approaches to their overall mission and programmatic goals. An organization's human capital approaches should be designed, implemented, and assessed by the standard of how well they help an organization pursue its mission and achieve desired results or outcomes.
High-performing organizations use data to determine key performance objectives and goals that enable them to evaluate the success of their human capital approaches. Collecting and analyzing data are fundamental building blocks for measuring the effectiveness of human capital approaches in support of the mission and goals of the agency. DOD has taken positive steps to implement the major provisions of the Goldwater-Nichols Act that address joint officer development. However, DOD has not taken a strategic approach toward joint officer development and, without a strategic plan that will address the development of the total force in joint matters, DOD will likely continue to experience difficulties in meeting the provisions of the Goldwater-Nichols Act. While DOD has made progress in implementing provisions of the law, it has not identified how many joint specialty officers it needs. Moreover, the fact that the four services have emphasized the development of their officers in joint matters to varying degrees suggests that DOD has not taken a fully unified approach and that service parochialisms still prevail. Addressing these points will provide DOD with data it needs to determine whether it has the resources or capacity to deliver its two-phased joint education program to all of the active duty officers who need it. Furthermore, although DOD is increasingly relying upon its reserve forces, including using reserves in some of its key joint positions, it has not fully assessed how it will develop its reserve officers in joint matters. Finally, DOD has not been consistent in tracking key indicators since enactment of the act in 1986. A strategic plan that is designed appropriately will help DOD assess progress made toward meeting the act's specific objectives and overall intent regarding joint officer development. Because the services lack the guidance they need to undertake a unified approach that will address the development of the total force in joint matters, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness to develop a strategic plan that links joint officer development to DOD's overall mission and goals. At a minimum, this plan should (1) identify the number of joint specialty officers needed, (2) include provisions for the education and assignment of reservists who are serving in joint organizations, and (3) be developed in a manner to provide DOD with more meaningful data to track progress made against the plan. We requested written comments from the Department of Defense, but none were provided. However, the Office of the Vice Director, Joint Staff, did provide us with DOD's oral comments in which DOD partially concurred with our recommendation that it develop a strategic plan that links joint officer development to DOD's overall mission and goals. DOD stated that its ability to develop a strategic plan that would improve its capability to conduct successful joint operations is limited by the current legislation, which specifies (1) quotas that artificially drive the production of joint specialty officers, (2) requirements that limit the availability of the second phase of DOD's joint education program, and (3) post-education requirements that make advance planning for joint education difficult. DOD added that it views provisions in the act as impediments that must be removed before it can develop an effective strategic plan.
Our report recognizes that DOD is required to comply with numerous provisions in the act that address the education, assignment, and promotion of officers in joint matters. While we recognize that DOD must be mindful of these provisions as it attempts to develop a strategic plan, we do not believe that the act’s provisions prohibit DOD from developing a strategic plan to achieve its goals. We believe that DOD will not be able to demonstrate that changes to the law are needed unless it first develops a strategic plan that identifies the department’s goals and objectives for joint officer development and produces empirical data to support needed changes. In response to our recommendation that DOD develop a strategic plan that identifies the number of joint specialty officers needed, DOD asserted that numerical quotas prevent it from pursuing a strategic approach to joint officer development that is based on true joint specialty requirements. Instead, DOD stated that it will produce about 1,000 joint specialty officers each year in order to satisfy the law. However, the statute does, in fact, provide some flexibility and permits the Secretary of Defense to determine the number of joint specialty officers. The act only requires that approximately one-half of the joint positions be filled at any time by officers who have either successfully completed the joint education program or received an allowable waiver to complete the education after their joint assignment. DOD also asserted that officers today are more experienced in joint matters and therefore believes that the difference between a joint educated officer and a joint specialty officer has diminished. During our review, officers who participated in our focus groups told us they believe that today’s senior leaders should have joint experience and education. We continue to believe that, in the absence of a strategic plan that is requirements based, DOD is not in a position to determine whether it is producing too many or too few joint specialty officers. In response to our recommendation that a strategic plan should include provisions for the education and assignment of reservists who are serving in joint organizations, DOD stated that it has recently finalized guidance for their development and management and is developing a joint education program for reserve officers. However, this guidance was not available at the time of our review. The act states that the Secretary of Defense should establish personnel policies for reserve officers that emphasize education and experience in joint matters. Our report acknowledges the steps DOD is taking. Given that reservists play an integral role within the total force, we view these recent actions that DOD is taking to integrate reserve officers in joint matters as positive steps. In response to our recommendation that a strategic plan should be developed in a manner to provide DOD with more meaningful data to track progress made against the plan, DOD reported that it is revamping the data system it uses to evaluate joint officer management. When complete, DOD stated that it will have current and historical data and that this information will be used to identify and correct inconsistencies. We believe that a strategic plan would help DOD identify its goals and track progress made in its joint officer program. We view DOD’s effort in this area as a positive step, provided that the revamped data system gives DOD the information it needs to better manage its joint officer program. 
DOD also commented on our findings that address critical joint duty positions, joint education, and general and flag officer promotions. Concerning critical joint duty positions, DOD stated that it is further inhibited from achieving its joint vision by a legislative requirement to identify 800 critical joint duty positions and fill them with joint specialty officers. Moreover, DOD questioned whether there is a valid requirement for critical billets within joint organizations. DOD believes that the essential factors that should be considered to identify those officers who best meet the needs of a joint organization are service competencies and expertise in a military occupational skill. It stated that joint qualifications should be viewed as one of many attributes that can be used. Although we did not validate the numerical requirements for critical joint positions, we do discuss difficulties DOD has experienced in filling these positions with joint specialty officers. In the absence of a strategic plan that is requirements based, we continue to believe that DOD is not in a position to determine whether it is filling its critical billets appropriately. Regarding joint education, DOD stated that it realizes the value of joint education and the importance of acculturating its officers in joint matters. However, DOD also stated that it does not have the flexibility it needs to educate top-quality officers in joint matters. DOD viewed the existing requirements that it must follow as inhibitors to good personnel management and further stated that these requirements cause some officers to miss joint education due to timing limitations. DOD believes that, in order to develop an effective strategic plan, it needs greater flexibility and that leveraging new educational technologies would facilitate its ability to prepare officers for the joint environment. Specifically, DOD asserted that, while it has the flexibility to offer the first phase of its joint education program in both resident and nonresident settings, it can only provide the second phase of its joint education program in an in-resident setting, and then must assign 50 percent of the graduates to a joint assignment. Our report acknowledges the progress DOD has made in providing joint education to its officers and the difficulties DOD has experienced in providing the second phase of its joint education program. We believe, however, that while legislative provisions address the education needed to qualify an officer for the joint specialty, DOD is not precluded from using new technologies and alternative venues to provide joint education. While officers educated under alternative approaches may not be awarded the joint specialty officer designation, these officers, nonetheless, would be better educated in joint matters and prepared for joint positions. We continue to believe that a strategic approach will help DOD better identify its joint education needs. Concerning general and flag officer promotions to pay grade O-7, DOD acknowledged that our findings regarding waiver usage are correct. However, DOD believed that, without further analysis, our finding that DOD still relies heavily on allowable waivers to promote one in four officers to this level without joint experience is misleading. DOD pointed out that a closer examination of the types of waivers used might be a better indicator of how well it is doing. In our report, we identify the five categories of allowable waivers.
We discuss the progress DOD has made in promoting officers with joint experience as well as its progress in limiting its use of good-of-the-service waivers in particular. During our review, we attempted to obtain data on the other categories of waivers. However, DOD does not capture and report waiver usage by the various categories in its annual reports, and DOD was not able to provide these data to us at the time of our review. We are sending copies of this report to appropriate congressional committees. We are also sending copies of this report to the Secretary of Defense; the Secretaries of the Air Force, Army, and Navy; the Commandant of the Marine Corps; and the Chairman of the Joint Chiefs of Staff. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please call me at (202) 512-5140. Major contributors to this report are listed in appendix V. To examine the steps the Department of Defense (DOD) has taken to address the education and assignment of officers in joint matters, we initially reviewed the legislative history of the act, reviewed joint directives and publications, and analyzed data contained in the Goldwater-Nichols Act Implementation Reports that are presented as an appendix to DOD's Annual Reports to the Congress for fiscal years 1988 through 2001. We also analyzed data contained in DOD's joint duty assignment list database and interviewed and gathered data from officials serving in the Manpower and Personnel Directorate within the Joint Chiefs of Staff, the Office of the Secretary of Defense, and the four military services' headquarters. In certain cases, we analyzed data dating back to fiscal year 1989. In other cases, we could only analyze data going back to fiscal year 1996 due to changes in DOD's reporting methods that made comparisons difficult. We used fiscal year 2001 as our end point because that year represents the last year for which complete annual data were available. To assess the services' compliance with provisions that pertain to the promotion of officers to the flag and general officer pay grades, we measured the extent to which the services promoted officers with the requisite joint experience or used allowable waivers. In addition, we obtained and analyzed individual biographies and service histories for each officer promoted to these senior pay grades in fiscal year 2001. To analyze the extent to which DOD has complied with provisions that address the promotions of mid-grade officers who are serving or have served in joint positions, we obtained and analyzed data from the Manpower and Personnel Directorate within the Joint Staff for fiscal years 1988 through 2001. To evaluate impediments affecting DOD's ability to fully respond to the act's intent, we reviewed previously issued Department of Defense vision statements and human resource strategic plans. We also analyzed existing data to measure trends over time and identify the key reasons why DOD is having difficulty in responding to the act. We interviewed agency officials and obtained data at the following locations: Manpower and Personnel Directorate, Joint Chiefs of Staff; Operational Plans and Interoperability Directorate, Joint Chiefs of Staff, Washington, D.C.; Office of the Secretary of Defense, Force Management and Policy; Air Force Education Branch, Headquarters, U.S. Air Force; Joint Officer Management Branch, Air Force Personnel Center, Randolph Air Force Base, Texas; Joint Management Branch, Army Personnel Command, Alexandria, Virginia; Joint Officer Management Policy Office, Naval Bureau of Personnel, Arlington, Virginia; Marine Corps Training and Education Command, Quantico, Virginia; Personnel Management Division, Headquarters, U.S. Marine Corps; National Defense University, Washington, D.C.; and Joint Forces Staff College, Norfolk, Virginia. To obtain the perspectives of officers serving in joint positions on joint officer development, we surveyed 557 officers and conducted focus group discussions with 513 officers serving in 11 different locations. We did not conduct a random sample due to the dispersion of officers serving in joint positions, and, therefore, cannot project from the information the officers provided us. However, we did attempt to include the different types of organizations in which officers serve in joint positions by selecting the Joint Staff, three geographic commands, two functional commands, three combined forces commands, and two defense agencies. While the results cannot be projected, the population of officers surveyed reflects the overall composition of the joint duty assignment list. At each location, we administered a survey (shown in appendix IV) and conducted focus group interviews with active duty officers in pay grades O-4, O-5, and O-6. To gain firsthand information from officers serving in joint duty positions, we asked them about their joint education and assignments. We also asked them about the value they place on (1) serving in a joint position and (2) attaining the joint specialty officer designation. In addition, we conducted individual interviews with senior officers and personnel officers at the commands we visited. We surveyed officers and conducted focus group discussions at the following offices, commands, and agencies: the Joint Chiefs of Staff, Washington, D.C.; three combined commands (Supreme Headquarters Allied Powers Europe, Mons, Belgium; Allied Forces South, Naples, Italy; and U.S. Forces Korea, Seoul, Korea); two functional commands (Special Operations Command, Tampa, Florida, and Strategic Command, Omaha, Nebraska); three geographic commands (Joint Forces Command, Norfolk, Virginia; European Command, Stuttgart, Germany; and Pacific Command, Honolulu, Hawaii); and two defense agencies (Defense Information Systems Agency and Defense Intelligence Agency, both in Arlington, Virginia). We administered surveys, but did not conduct site visits, to officers serving in joint positions at locations within the U.S. Central Command's area of responsibility, including Joint Task Force—Southwest Asia and the Office of Military Cooperation—Egypt. We conducted our review from January 2002 through October 2002 in accordance with generally accepted government auditing standards. This appendix presents information about the distribution of joint positions in DOD's joint duty assignment list by organization, pay grade, and occupational category. Table 5 identifies the major commands and activities where joint positions are located and the number of joint positions that were in each command or activity in fiscal year 2001. In fiscal year 2001, DOD's joint duty assignment list contained 9,146 joint positions for active duty officers in pay grades O-4 and above. Figure 6 shows that 80 percent of the positions were equally divided between the O-4 and O-5 pay grades. Joint positions include a wide range of occupational categories.
Figure 7 shows that, in fiscal year 2001, the single largest percentage of joint positions fell within the category of tactics and operations. Officers with military occupation skills such as aviation and navigation, armor and infantry, and surface and submarine warfare serve in this category of positions. The second largest percentage of joint positions fell within the intelligence category. This category includes strategic intelligence, politico-military affairs, and information operations. The Goldwater-Nichols Act established promotion policy objectives for three categories of mid-level officers who are serving in or have served in joint positions. The act set expectations that these officers be promoted at a rate not less than the promotion rate of their peers. The services are expected to promote officers who are in or have been assigned to the Joint Staff, as a group, at a rate equal to or better than the promotion rate of officers who are or have been assigned to their service headquarters; promote joint specialty officers, as a group, at a rate equal to or better than the promotion rate of officers who are or have been assigned to their service headquarters; and promote officers who are serving in or have served in other joint assignments, as a group, that are not included in the previous two categories, at a rate equal to or better than their service average promotion rates. For our analysis, we compared progress DOD made between fiscal years 1988 and 1994 with progress DOD made between fiscal years 1995 and 2001. For each of the three promotion categories (Joint Staff, joint specialty officers, and officers serving in other joint positions), we multiplied the three pay grades by the four services by the 7 years and identified 84 potential promotion groups. We then eliminated those groups in which no promotions occurred to identify the actual promotion groups. We then counted the number of groups in which DOD met or exceeded the applicable standard. Table 6 shows that DOD met its promotion objectives for mid-level officers assigned to the Joint Staff in 43 out of 68 promotion groups between fiscal years 1988 and 1994, or 63 percent of the time. Between fiscal years 1995 and 2001, DOD met this objective in 55 out of 60 promotion groups, or 92 percent of the time. Table 7 shows that DOD met its promotion objectives for mid-level joint specialty officers in 26 out of 52 promotion groups between fiscal years 1988 and 1994, or 50 percent of the time. Between fiscal years 1995 and 2001, DOD met this objective in 37 out of 50 promotion groups, or 74 percent of the time. Table 8 shows that DOD met its promotion objectives for mid-level officers assigned to joint organizations other than the Joint Staff in 41 out of 82 promotion groups between fiscal years 1988 and 1994, or 50 percent of the time. Between fiscal years 1995 and 2001, DOD met this objective in 60 out of 84 promotion groups, or 71 percent of the time. We administered a survey to 557 officers serving in joint positions regarding their current joint duty assignment, their thoughts and opinions on joint duty assignments in general, joint professional military education, and other opinions regarding joint officer management. A copy of the survey appears at the end of this summary. Although the survey findings cannot be generalized to all officers serving in joint positions, the composition of the officers in our survey generally reflected the service and pay grade distribution in DOD’s joint duty assignment list. 
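Before turning to the survey results, the promotion-group counting described above (three pay grades by four services by 7 fiscal years, with groups in which no promotions occurred dropped) can be illustrated with a short sketch. The sketch below is a minimal Python illustration of that counting approach; the sample rates are hypothetical placeholders, not figures drawn from DOD's annual reports.

```python
from itertools import product

PAY_GRADES = ["O-4", "O-5", "O-6"]
SERVICES = ["Army", "Navy", "Air Force", "Marine Corps"]
FISCAL_YEARS = range(1995, 2002)  # fiscal years 1995 through 2001

# Hypothetical promotion rates keyed by (pay grade, service, fiscal year).
# Each value pairs the rate for the joint group with the benchmark rate
# (service headquarters or service average) it is measured against;
# None marks a group in which no promotions occurred.
rates = {
    ("O-5", "Army", 1995): (0.72, 0.70),
    ("O-6", "Navy", 1996): (0.41, 0.48),
    ("O-4", "Marine Corps", 1997): None,
    # ...remaining groups would be populated from the promotion data
}

potential_groups = list(product(PAY_GRADES, SERVICES, FISCAL_YEARS))
print(f"Potential promotion groups: {len(potential_groups)}")  # 3 x 4 x 7 = 84

actual = {key: value for key, value in rates.items() if value is not None}
met = sum(1 for group_rate, benchmark in actual.values() if group_rate >= benchmark)
print(f"Met the objective in {met} of {len(actual)} promotion groups "
      f"({met / len(actual):.0%} of the time)")
```

With the full set of promotion data filled in, the same comparison reproduces the "met X out of Y promotion groups" percentages reported in tables 6 through 8.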
Thirty-seven percent of the officers were in the Air Force, 33 percent were in the Army, 24 percent were in the Navy, and about 6 percent were in the Marine Corps. Forty-seven percent of the officers were in pay grade O-4, 35 percent were in pay grade O-5, and 18 percent were in pay grade O-6. On average, the officers we surveyed had 16 years of commissioned service. We asked the officers in our survey to identify their current joint duty position in the context of broad functional areas and types of duties performed. Twenty-seven percent of the officers responded that their joint positions fell within the functional area of strategic, tactical, or contingency operations. Their duties involved command and control of combat operations or combat support forces; military operations; or the planning, development, staffing, assessment, or implementation of plans or requirements for forces and materiel. Twenty-eight percent of the officers surveyed responded that their joint positions fell within the functional area of direct or general support or the development, staffing, or assessment of military doctrine or policy. Forty-five percent of the officers responded that they were engaged in the functional areas of education and training or administration. They performed duties that included (1) directing, commanding, and controlling noncombat units, organizations, or activities or (2) providing general, administrative, or technical support services to military operations. Seventy-one percent of the officers we surveyed were serving in their first joint duty position in the joint duty assignment list. Twenty-one percent of the officers were in their second joint duty position, and the remaining 8 percent were serving in their third joint duty position. Most officers (85 percent) responded that their service had clearly defined the career path for their military occupation. On the other hand, just over half (51 percent) of the officers responded that a joint assignment was a clearly defined component of their career path and about 35 percent of the officers responded that a joint duty assignment was not a well-defined aspect of their career path. (Fourteen percent of the officers responded that they were unsure.) Most officers (70 percent) responded that a joint duty assignment was beneficial to their careers to a moderate or very great extent, while about 19 percent responded that a joint duty assignment was beneficial only to a little extent. The remaining 7 percent of the officers responded that a joint duty assignment was not beneficial to their careers. We asked the officers to identify the greatest incentive for serving in a joint position. The most common response offered by Army, Air Force, and Marine Corps officers was that joint duty assignments broadened their experience, perspective, and knowledge of the multiservice and multinational environment. The most common response offered by officers in the Navy was that joint duty assignments enhanced their promotion potential and professional development. Conversely, when we asked officers to provide their opinion regarding the greatest disincentive to serving in a joint duty position, officers in all of the services cited the time that a joint position took them away from their service. Seventy-seven percent of the officers we surveyed had attended the first phase of DOD's joint professional military education program.
Among those who had attended the first phase, 56 percent completed it at one of the professional military education schools and 44 percent completed Phase I through a nonresident program. Most officers (59 percent) responded that the first phase of the joint education program was beneficial to their careers to a great or moderate extent. Sixty-three percent of the officers responded that it was important to a great or moderate extent to complete the first phase of the joint education program prior to serving in a joint position. Sixty-six percent of the officers believed that the first phase of the joint education program increased their effectiveness in their joint position. Officers in all services responded that the first phase of the joint education program provided a foundation of joint knowledge—a first exposure to joint doctrine, other services' methods, and the operational and strategic levels of warfighting. Thirty-six percent of the officers we surveyed said that they had attended the second phase of DOD's joint professional military education program. The majority of these officers had attended the Joint Forces Staff College in Norfolk, Virginia (92 percent), while significantly smaller percentages had attended the Industrial College of the Armed Forces (5 percent) and the National War College (3 percent). Sixty-four percent of the officers had not completed the second phase of the joint professional military education program, and the overwhelming majority (86 percent) of these officers reported that they would not likely attend the second phase before the end of their current joint duty assignment. Officers in all services cited timing, budget, and logistics issues as reasons for not attending the second phase after reporting to a joint assignment. They added that, in their view, neither the losing nor the gaining command wanted to be responsible for funding the education. About 60 percent of the officers responded that it was important to complete the second phase of the joint professional military education program prior to serving in a joint assignment and that this education would increase an officer's effectiveness in a joint position. Slightly fewer officers (56 percent) responded that the second phase of the joint education program was beneficial to their careers. In addition, Ann M. Asleson, James R. Bancroft, Larry J. Bridges, Jocelyn O. Cortese, Herbert I. Dunn, Jack E. Edwards, Alicia E. Johnson, David E. Moser, Krislin M. Nalwalk, Madelon B. Savaides, and Susan K. Woodward contributed to this report. | DOD has increasingly engaged in multiservice and multinational operations. Congress enacted the Goldwater-Nichols Department of Defense Reorganization Act of 1986, in part, so that DOD's military leaders would be better prepared to plan, support, and conduct joint operations. GAO assessed DOD actions to implement provisions in the law that address the development of officers in joint matters and evaluated impediments affecting DOD's ability to fully respond to the provisions in the act. DOD has taken positive steps to implement the Goldwater-Nichols Act provisions that address the education, assignment, and promotion of officers serving in joint positions. However, DOD has relied on waivers allowable under the law to comply with the provisions and has experienced difficulties implementing some of its programs. Because of these difficulties, DOD cannot be assured that it is preparing officers in the most effective manner to serve in joint organizations and leadership positions. (1) Education.
DOD has met provisions in the act to develop officers through education by establishing a two-phased joint education program, but has not determined how many officers should complete both phases. In fiscal year 2001, only one-third of the officers serving in joint positions had completed both phases of the program. (2) Assignment. DOD has increasingly fallen short of filling its critical joint duty positions with joint specialty officers, who are required to have both prior education and experience in joint matters. In fiscal year 2001, DOD did not fill 311, or more than one-third, of its critical joint duty positions with joint specialty officers. (3) Promotion. DOD has promoted more officers with prior joint experience to the general and flag officer pay grades. However, in fiscal year 2001, DOD still relied on allowable waivers in lieu of joint experience to promote one in four officers to these senior levels. Beginning in fiscal year 2008, most officers promoted to these senior levels will also have to complete DOD's joint education program or otherwise meet the requirements to be a joint specialty officer. Our analysis of officers promoted in fiscal year 2001 showed that 58 out of 124 officers promoted to the general and flag level did not meet these requirements. DOD has promoted mid-grade officers who serve in joint organizations at rates equal to or better than the promotion rates of their peers. However, DOD has had difficulty meeting this objective for colonels and Navy captains. DOD's ability to respond fully to these provisions has been hindered by the absence of a strategic plan that (1) establishes clear goals for officer development in joint matters and (2) links those goals to DOD's overall mission and goals. DOD has not identified how many joint specialty officers it needs and, without this information, cannot determine if its joint education programs are properly structured. The services vary in the emphasis they place on joint officer development and continue to struggle to balance joint requirements against their own service needs. DOD has also not fully addressed how it will develop reserve officers in joint matters--despite the fact that it is increasingly relying on reservists to carry out its mission. Finally, DOD has not tracked meaningful data consistently to measure progress in meeting the act's provisions. |
Coast Guard operators and commanding officers told us that the National Security Cutter, Fast Response Cutter, and HC-144 are performing well during missions and are an improvement over the vessels and aircraft they are replacing. Operators primarily attribute the performance improvements to better endurance and communications capabilities, which help to position and keep these assets in high-threat areas. Specifically, these new assets have greater fuel capacity and efficiency, engine room and boat launch automation, handling/seakeeping, and food capacity, all of which increase endurance and effectiveness. To date, the improved capabilities of the four newly fielded assets have led to mission-related successes, according to Coast Guard asset commanders. In addition to performance in the field, each major acquisition is required to undergo operational testing by an independent test agency—in this case, the Navy's Commander of Operational Test and Evaluation Force. Operational testing is important, as it characterizes the performance of the asset in realistic conditions. During operational testing, the test agency determines whether the asset is operationally effective (whether or not an asset can meet its missions) and operationally suitable (whether or not the agency can support the asset to an acceptable standard). The Fast Response Cutter and the HC-144 completed initial operational testing in September 2013 and October 2012, respectively. Based on the results, neither asset met all key requirements during this testing. The Fast Response Cutter partially met one of six key requirements, while the HC-144 met or partially met four of seven key requirements. The Fast Response Cutter was found to be operationally effective (with the exception of its cutter boat) though not operationally suitable, and the HC-144 was found to be operationally effective and suitable. It is important to recognize that this was the initial operational testing and that the Coast Guard has plans in place to address most of the major issues identified. For example, in order to address issues with the seaworthiness of the Fast Response Cutter's small boat, the Coast Guard will supply the Fast Response Cutter with a small boat developed for the National Security Cutter. However, DHS officials approved both assets to move into full-rate production, and we found that guidance is not clear regarding when the minimum performance standards should be met—or what triggers the need for a program manager to submit a performance breach memorandum indicating that certain performance parameters were not demonstrated. The Coast Guard did not report that a breach had occurred for the HC-144 or the Fast Response Cutter, even though neither of these programs met certain key performance parameters during operational testing. We recommended that DHS and the Coast Guard revise their acquisition guidance to specify when minimum performance standards should be met and clarify the performance data that should be used to determine whether a performance breach has occurred. DHS concurred with these recommendations and stated that it plans to make changes to its acquisition guidance by June 30, 2015.
By not fully validating the capabilities of the National Security Cutter until late in production, the Coast Guard may have to spend more to ensure the ship meets requirements and is logistically supportable. The Coast Guard recently evaluated the National Security Cutter through operational testing, even though 7 of 8 National Security Cutters are under contract, but results are not expected until early fiscal year 2015. Coast Guard program officials stated that, prior to the operational test, the National Security Cutter had demonstrated most of its key performance parameters through non-operational tests and assessments, but we found that a few performance requirements, such as those relating to the endurance of the vessel and its self-defense systems, have yet to be assessed. Further, several issues occurred prior to the start of operational testing that required retrofits or design changes to meet mission needs. The total cost to conduct some of these retrofits and design changes has not yet been determined, but the cost of major changes for all eight hulls identified to date has totaled approximately $140 million, which is about one-third of the production cost of a single National Security Cutter. The Coast Guard continues to carry significant risk by not fully validating the capabilities of the National Security Cutter until late in production, which could result in the Coast Guard having to spend even more money in the future, beyond the changes that have already been identified. The Coast Guard has not yet evaluated the C4ISR system through operational testing even though the system has been fielded on nearly all new assets. Instead of evaluating that system's key performance parameters, Coast Guard officials decided to test the system in conjunction with other assets—such as the HC-144 and the Fast Response Cutter—to save money and avoid duplication. However, the C4ISR system was not specifically evaluated during the HC-144 and Fast Response Cutter tests because those assets' test plans did not fully incorporate testing the effectiveness and suitability of the C4ISR system. The Coast Guard now plans to test the key performance parameters for the next-generation C4ISR system when follow-on testing is conducted on the National Security Cutter; this testing has yet to be scheduled. By not testing the system, the Coast Guard has no assurance that it is purchasing a system that meets its operational needs. To address this issue, we recommended that the Coast Guard assess the C4ISR system by fully integrating this assessment into other assets' operational test plans or by testing the C4ISR program on its own. In response, the Coast Guard stated that it now plans to test the C4ISR system's key performance parameters during follow-on testing for the National Security Cutter. As the Coast Guard continues to refine cost estimates for its major acquisitions, the expected cost of its acquisition portfolio has grown. There has been $11.3 billion in cost increases since 2007 across the eight programs that have consistently been part of the portfolio—the National Security Cutter, the Offshore Patrol Cutter, the Fast Response Cutter, the HC-144, the HC-130H/J, HH-65, C4ISR, and Unmanned Aircraft System. These cost increases are consuming a large portion of funding. Consequently, the Coast Guard is farther from fielding its planned fleet today than it was in 2009, in terms of money needed to finish these programs.
Senior Coast Guard acquisition officials told us that many of the cost increases are due to changes from preliminary estimates and that they expect to meet their current cost estimates. However, the Coast Guard has yet to construct the largest asset in the portfolio—the Offshore Patrol Cutter—and if the planned costs for this program increase, difficulties in executing the portfolio as planned will be further exacerbated. Figure 1 shows the total cost of the portfolio and cost to complete the major programs included in the Coast Guard's 2007 baseline in 2009 and 2014. Coast Guard, DHS, and OMB officials have acknowledged that the Coast Guard cannot afford to recapitalize and modernize its assets in accordance with the current plan at current funding levels. According to budget documents, Coast Guard acquisition funding levels have been about $1.5 billion for each of the past 5 years, and the President's budget requests $1.1 billion for fiscal year 2015. To date, efforts to address this affordability imbalance have yet to result in the significant trade-off decisions that would be needed to address it. We have previously recommended that DHS and the Coast Guard establish a process to make the trade-off decisions needed to balance the Coast Guard's resources and needs. While they agreed with the recommendation, they have yet to implement it. In the meantime, the extent of expected costs—and how the Coast Guard plans to address them through budget trade-off decisions—is not being clearly communicated to Congress. The mechanism in place for reporting to certain congressional committees, the Capital Investment Plan, does not reflect the full effects of these trade-off decisions on the total cost and schedule of its acquisition programs. This information is not currently required by statute, but without it, decision makers do not have the information to understand the full extent of funding that will be required to complete the Coast Guard's planned acquisition programs. For example, in the Fiscal Years 2014 through 2018 Capital Investment Plan, cost and schedule totals did not match the funding levels presented for many programs. The plan proposed lowering the Fast Response Cutter procurement to two per year but still showed the total cost and schedule estimates for purchasing three or six per year—suggesting that this reduced quantity would have no effect on the program's total cost and schedule. Given that decreasing the quantity purchased per year would increase the unit and total acquisition cost, the Coast Guard estimated that the decision to order fewer ships will likely add $600 million to $800 million in cost and 5 years to the cutter's final delivery date, but this was absent from the plan (the illustrative sketch following this passage shows the arithmetic at work). Reporting total cost and delivery dates that do not reflect funding levels could lead to improper conclusions about the effect of these decisions on the program's total cost and schedule and the overall affordability of the Coast Guard's acquisition portfolio. In our report, we suggest that Congress consider amending the law that governs the 5-year Capital Investment Plan to require the Coast Guard to submit cost and schedule information that reflects the impact of the President's annual budget request on each acquisition across the portfolio. To address budget constraints, the Coast Guard is repeatedly delaying and reducing its capability through its annual budget process.
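To make the Fast Response Cutter example above concrete, the sketch below shows how stretching a fixed remaining quantity over fewer purchases per year pushes out the final delivery date and accumulates additional years of fixed program overhead. The remaining quantity and the annual overhead figure are assumptions for illustration only, not Coast Guard estimates.

```python
import math

def years_to_complete(remaining_units: int, units_per_year: int) -> int:
    """Years of procurement remaining at a given annual buy rate."""
    return math.ceil(remaining_units / units_per_year)

# Assumed values for illustration only: the number of cutters left to buy
# and a notional fixed annual program overhead cost (in millions).
remaining_cutters = 34
annual_overhead_millions = 120.0

for rate in (3, 2):
    years = years_to_complete(remaining_cutters, rate)
    overhead = years * annual_overhead_millions
    print(f"{rate} per year: {years} more years of procurement, "
          f"~${overhead:,.0f} million in fixed program overhead")

# With these assumed numbers, slowing from three to two cutters per year
# adds 5 years of procurement and roughly $600 million in overhead, which
# is how a reduced annual buy can both delay delivery and raise the total
# acquisition cost.
```

The same logic explains why a capital investment plan that lowers annual quantities but leaves total cost and schedule unchanged understates the budget consequences of that decision.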
However, the Coast Guard does not know the extent to which its mission needs can be tailored through the annual budget process and still achieve desired results. In addition, this approach puts pressure on future budgets and delays fielding capability, which is reducing performance. Thus, the Coast Guard’s ability to meet future needs is uncertain and gaps are materializing in its current fleet. In fact, the Coast Guard has already experienced a gap in heavy icebreaking capability and is falling short of meeting operational hour goals for its major cutter fleet—comprised of the National Security Cutter and the in-service high and medium endurance cutters. These capability gaps may persist, as funding replacement assets will remain difficult at current funding levels. Without a long-term plan that considers service levels in relation to expected acquisition funding, the Coast Guard does not have a mechanism to aid in matching its requirements and resources. For example, the Coast Guard does not know if it can meet its other acquisition needs while the Offshore Patrol Cutter is being built. According to the current program of record, acquisition of the Offshore Patrol Cutter will conclude in about 20 years and will account for about two-thirds of the Coast Guard’s overall acquisition budget during this time frame. In addition, as we have previously found, the Coast Guard is deferring costs—such as purchasing unmanned systems or replacing its Buoy Tender fleet—that could lead to an impending spike in the requirement for additional funds. The Coast Guard has no method in place to capture the effects of deferring such costs on the future of the acquisition portfolio. The Coast Guard is not currently required to develop a long-term fleet modernization plan that considers its current service levels for the next 20 years in relation to its expected acquisition funding. However, the Coast Guard’s acquisition guidance supports using a long range capital planning framework. According to OMB capital planning guidance referenced by the Coast Guard’s Major Systems Acquisition Manual, each agency is encouraged to have a plan that defines its long-term capital asset decisions. This plan should include, among other things, (1) an analysis of the portfolio of assets already owned by the agency and in procurement, (2) the performance gap and capability necessary to bridge the old and new assets, and (3) justification for new acquisitions proposed for funding. OMB officials stated that they support DHS and the Coast Guard conducting a long term review of the Coast Guard’s acquisitions to assess the capabilities it can afford. A long-term plan can enable trade-offs to be seen and addressed in advance, leading to better informed choices and making debate possible before irreversible commitments are made to individual programs. Without this type of plan, decision makers do not have the information they need to better understand the Coast Guard’s long-term outlook. When we discussed such an approach with the Coast Guard, the response was mixed. Some Coast Guard budget officials stated that such a plan is not worthwhile because the Coast Guard cannot predict the level of funding it will receive in the future. However, other Coast Guard officials support the development of such a plan, noting that it would help to better understand the effects of funding decisions. 
Without such a plan, we believe it will remain difficult for the Coast Guard to fully understand the extent to which future needs match the current level of resources and its expected performance levels—and capability gaps—if funding levels remain constant. Consequently, we recommended that the Coast Guard develop a 20-year fleet modernization plan that identifies all acquisitions needed to maintain the current level of service and the fiscal resources necessary to build the identified assets. While DHS concurred with our recommendation, the response does not fully address our concerns or set forth an estimated date for completion, as the response did for the other recommendations. We continue to believe that a properly constructed 20-year fleet modernization plan is necessary to illuminate what is feasible in the long term and will also provide a basis for informed decisions that align the Coast Guard's needs and resources. Chairman Hunter, Ranking Member Garamendi, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions that you may have at this time. If you or your staff have any questions about this statement, please contact Michele Mackin at (202) 512-4841 or mackinm@gao.gov. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony are Katherine Trimble, Assistant Director; Laurier R. Fish; Peter W. Anderson; William Carrigg; John Crawford; Sylvia Schatz; and Lindsay Taylor. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

This testimony summarizes the information contained in GAO's June 2014 report, entitled Coast Guard Acquisitions: Better Information on Performance and Funding Needed to Address Shortfalls, GAO-14-450. The selected Coast Guard assets that GAO reviewed are generally demonstrating improved performance--according to Coast Guard operators--but GAO found that they have yet to meet all key requirements. Specifically, two assets, the HC-144 patrol aircraft and Fast Response Cutter, did not meet all key requirements during operational testing before being approved for full-rate production, and Department of Homeland Security (DHS) and Coast Guard guidance do not clearly specify when this level of performance should be achieved. Additionally, the Coast Guard changed its testing strategy for the Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance (C4ISR) system and, as a result, is no longer planning to test the system's key requirements. Completing operational testing for the C4ISR system would provide the Coast Guard with the knowledge of whether this asset meets requirements. As acquisition program costs increase across the portfolio, consuming significant amounts of funding, the Coast Guard is farther from fielding its planned fleet today than it was in 2009, in terms of the money needed to finish these programs. In 2009, GAO found that the Coast Guard needed $18.2 billion to finish its 2007 baseline, but now needs $20.7 billion to finish these assets.
To inform Congress of its budget plans, the Coast Guard uses a statutorily required 5-year Capital Investment Plan, but the law does not require the Coast Guard to report the effects of actual funding levels on individual projects and, thus, it has not done so. For example, the Coast Guard has received less funding than planned in its annual budgets, but has not reflected the effects of this reduced funding in terms of increased cost or schedule for certain projects. Without complete information, Congress cannot know the full cost of the portfolio. The Coast Guard has repeatedly delayed and reduced its capability through its annual budget process and, therefore, it does not know the extent to which it will meet mission needs and achieve desired results. This is because the Coast Guard does not have a long-term fleet modernization plan that identifies all acquisitions needed to meet mission needs over the next two decades within available resources. Without such a plan, the Coast Guard cannot know the extent to which its assets are affordable and whether it can maintain service levels and meet mission needs. Congress should consider requiring the Coast Guard to include additional information in its Capital Investment Plan. In addition, the Secretary of DHS should clarify when minimum performance standards should be achieved, conduct C4ISR testing, and develop a long-term modernization plan. DHS concurred with the recommendations, but its position on developing a long-term plan does not fully address GAO's concerns as discussed in the report.
Building on a decade of expanding trade and investment ties and increasing economic integration in the region, the leaders of 34 democratic countries in the Western Hemisphere pledged in December 1994 to establish an FTAA no later than 2005. The agreement would progressively eliminate barriers to trade and investment. The 34 FTAA participants include a diverse set of countries, from some of the wealthiest (the United States and Canada) to some of the poorest (Haiti) and from some of the largest (Brazil) to some of the smallest in the world (St. Kitts and Nevis). The large disparities in size and economic development in the hemisphere mean that countries come to the negotiating table with different defensive and offensive interests that in some instances coincide and in other cases diverge. In addition, smaller economies lack technical capacity and seek assurances that the FTAA will include provisions to assist them in managing the adjustment to more open markets. Many nations are participating in the negotiations as subregional groupings such as the Caribbean Community (CARICOM) and the Common Market of the South (Mercosur) to facilitate their participation in the FTAA talks. Given the size of its economy, Brazil plays a leading role in Mercosur. Between December 1994 and the negotiations' formal launch in April 1998, FTAA negotiators agreed on several principles to guide them, notably that all decisions would be reached by consensus and that the eventual FTAA agreement would be implemented as a single undertaking. A single undertaking implies that the FTAA is a package deal to be accepted in its entirety by each of the 34 prospective signatory countries in order to benefit from the agreement's provisions. Additionally, the negotiators agreed to the overall structure, scope, and organization of the negotiations, including the establishment of a Vice-ministerial-level Trade Negotiations Committee (TNC) to oversee negotiations between ministerial meetings and of nine negotiating groups on particular issues, along with mandated objectives for these groups. (See fig. 1.) They also agreed that a completed FTAA agreement would include trade rules, which each of the nine negotiating groups is to establish, market access schedules in five of these nine areas, and a general text to cover overarching and institutional issues. In April 2001, the first draft FTAA agreement was made public and more precise deadlines were set for the conclusion and entry into force of the FTAA agreement (January and December 2005, respectively). The 435-page text contained a compilation and consolidation of proposals tabled by FTAA participants. Producing the text marked important progress, but also highlighted the considerable work remaining before the FTAA could be finalized. Notably, much of the text remained in brackets, denoting lack of agreement among participants. Subsequent revisions narrowed but did not eliminate these substantive disagreements. Our prior GAO reports have noted that resolving these disagreements would require considerable hard bargaining. In November 2001, in Doha, Qatar, members of the WTO agreed to launch a new round of multilateral trade negotiations called the Doha Development Agenda (commonly referred to as the Doha Round), which was also to conclude by January 1, 2005. The WTO negotiating agenda includes negotiations on issues of great importance to FTAA countries, including some of the same issues as the FTAA, such as agriculture and trade remedies like antidumping.
As we noted in our April 2003 report, the inclusion of agriculture in the Doha Round was especially important for the FTAA negotiations because resolution of issues such as domestic support (subsidies) and export subsidies for agricultural goods has been linked to the ongoing WTO Doha Round. Specifically, the United States has consistently argued that the WTO, rather than the FTAA, is the appropriate forum to negotiate domestic support because two primary users of domestic support in agriculture, the European Union (EU) and Japan, are not FTAA participants. Thus, the United States says, domestic support reform must take place in the WTO, where the EU and Japan are present, to avoid putting it and other FTAA countries that subsidize farmers at a disadvantage in world markets. The United States has taken a similar stance on trade remedies. Several events that are significant to the FTAA occurred in 2002. In August 2002, Congress passed the Bipartisan Trade Promotion Authority Act of 2002 (TPA). The United States Trade Representative (USTR) characterized the passage of the TPA as instrumental to completing the FTAA negotiations on the same aggressive time frame as the WTO talks (both negotiations were to be completed by January 2005). TPA sets a number of U.S. trade negotiating objectives relevant to the FTAA, and outlines procedural requirements for the executive branch to fulfill as conditions for expedited congressional consideration of legislation to implement trade agreements. In November 2002, FTAA ministers launched a Hemispheric Cooperation Program (HCP), a special trade capacity building program intended to provide technical assistance to smaller economies for negotiating, implementing, and benefiting from the FTAA. The HCP gives interested countries and donors a mechanism to work together and with other partners to integrate trade into development strategies. Past GAO reports have highlighted the importance of strengthening smaller nations’ trade capacity to FTAA’s ultimate success. Also in November 2002, Brazil and the United States assumed the co-chairmanship of the FTAA process and are expected to remain in that role until the FTAA negotiations conclude. From the November 2002 Quito ministerial to the November 2003 Miami ministerial, negotiators made progress on the technical aspects of the FTAA, including the exchange of market access offers and some requests for improvement of these offers. However, growing differences between the United States, Brazil, and many other countries over the scope and depth of obligations in the FTAA slowed down progress. Leading up to the Miami ministerial, FTAA ministers recognized the need for flexibility and for political guidance to avoid a breakdown in the negotiations. At Miami, countries agreed on a new negotiating structure, but subsequent talks failed to define the new structure. Formal FTAA talks have yet to resume since an inconclusive February 2004 meeting. As a result, the scheduled conclusion of the FTAA in January 2005 passed without an agreement. From the November 2002 Quito ministerial to the November 2003 Miami ministerial, FTAA negotiators made technical progress. For example, the TNC held the three meetings called for in the Quito ministerial declaration. Participating governments also made progress on civil society issues by holding two open public meetings in 2003 on particular issues under discussion. Moreover, each negotiating group submitted revised versions of the FTAA text chapters by the September 2003 deadline. 
The chapters were substantially reorganized from those presented to ministers at the Quito Ministerial in 2002. The chapters also included proposals the United States tabled during the first half of 2003 that reflected the negotiating objectives set forth in Trade Promotion Authority. On investment, the U.S. proposals were designed to improve the efficiency and transparency of investor-state arbitration and provide guidance to the tribunals that arbitrate such claims. The United States also tabled text on environmental and labor obligations reflecting TPA guidance in the FTAA Technical Committee on Institutional Issues. In addition, all 34 countries exchanged tariff offers, and many countries exchanged services, investment, and government procurement offers by the agreed deadline of February 15, 2003. Fourteen countries prepared and submitted national or subregional trade capacity building strategies as part of the Hemispheric Cooperation Program. These and other key milestones for the FTAA during 2003 are depicted in figure 2. However, during this time—November 2002 to November 2003—mounting differences between the United States and Brazil and their respective allies over the scope and depth of obligations in the proposed agreement slowed substantive progress in the FTAA. In our last report, we noted that Brazilian officials had admitted that Brazil was holding back in FTAA negotiations because they believed the United States was not ready to negotiate on issues of greatest interest to Brazil, such as high tariffs on key Brazilian exports and trade remedies. With the November 2002 election as President of Brazil of Luiz Inacio Lula da Silva, Brazilian participation in the FTAA process further slowed down. Within the FTAA talks, Brazil and Argentina were among the few countries that failed to submit initial market access offers by the established February 2003 deadline for three topics on which they were hesitant to assume obligations--services, investment, and government procurement. Moreover, although the 1998 San Jose ministerial declaration explicitly named the nine issue areas to be negotiated in the FTAA, questions over the substance of the final agreement continued to surface. For example, the United States came under continued pressure to change its long-standing insistence that negotiations on certain agricultural subsidies and trade remedies be conducted within the WTO, not the FTAA. Among other things, passage of the 2002 Farm Bill and the WTO’s failure to meet scheduled milestones heightened concerns by some FTAA nations about prospects for addressing these two key issues. The February 2003 exchange of initial market access offers also highlighted U.S.-Brazil differences in approach to the FTAA. The United States made four different goods market access offers that were calculated to give smaller, less developed economies faster duty-free access to the United States. The United States said that its differentiated offer allowed it to accord smaller economies better treatment, a principle agreed to by other FTAA nations, as well as provided greater leverage to negotiate market- opening concessions in large, lucrative markets. However, Brazil complained that the U.S. market access offer provided Brazil and its Mercosur partners with the least favorable market liberalization for consumer and industrial goods and agricultural products, as well as placing its most competitive products in the category with the longest phase-out period for tariff elimination. However, U.S. 
officials believe the initial U.S. offer to Brazil and its Mercosur partners was forthcoming because it provided for immediate duty-free treatment to 58 percent of Mercosur’s industrial goods and 50 percent of its agricultural goods. In response to a slowing of progress within FTAA negotiating groups, Ambassador Zoellick visited Brazil’s Foreign Minister Amorim in May 2003 and convened an informal ministerial meeting at Wye, Maryland, in June 2003, to discuss possible ways to move the talks forward. Nevertheless, in July 2003, Mercosur, led by Brazil, formalized its vision of a scaled-back and “rebalanced” FTAA by formally tabling its “Three Track” proposal in FTAA talks. According to press and other accounts, the proposal called for (1) bilateral FTAA negotiations to focus primarily on market access for goods and services; (2) regional FTAA negotiations on rules for several issues not covered by the WTO, including competition policy and dispute settlement, and (3) leaving six of the original nine issues out of the FTAA altogether and moving them to the WTO Doha Round negotiations (i.e., Brazil’s defensive interests of services, investment, government procurement, and IPR, along with the United States’ defensive interests of agricultural subsidies and trade remedies). Figure 3 shows the key issues Mercosur proposed moving to the WTO versus those it wanted to keep in the FTAA. In public remarks the United States rejected the proposal, which some have labeled “FTAA-lite.” The lead U.S. negotiator explained that a broader agenda, including services, investment, government procurement, and intellectual property, is extremely important to fostering real integration in the hemisphere. He stressed that a market access-only agreement would be insufficient to promote economic growth and development, and expressed reservations about providing a high level of access to the U.S. market in the absence of broader commitments on rules and disciplines of interest to the U.S. and others in the region. As we noted in our September 2001 report, the United States is the world’s leading exporter of services ($253 billion in 1999), holds significant investments in FTAA countries ($661 billion in portfolio and direct U.S. investment in 1999), is interested in government procurement opportunities in the Western Hemisphere valued at approximately $250 billion, and enjoys a decisive competitive advantage in terms of high-tech, knowledge-based industries that depend on strong IPR protection. In addition, unlike agriculture and antidumping, the mandate for the WTO Doha Round does not include negotiations on investment or government procurement, nor a major update of IPR protections. As a result, those issues—which are of significant commercial interest to the United States—might not have been addressed in either the FTAA or WTO. The failure of the September 2003 WTO ministerial at Cancun further complicated FTAA talks. As we detail in a separate report, trade ministers at the WTO Cancun ministerial in September 2003 failed to adopt decisions on any of the key issues before them, including a framework for subsequent work on agriculture. Because both the FTAA and the WTO agreements are to be concluded as single undertakings, and their deadlines for conclusion were the same, failure of the WTO to progress at Cancun imperiled timely completion of both the WTO Doha Round and FTAA talks. Moreover, the Cancun failure spawned recriminations among FTAA participants. 
For example, Latin American nations such as Brazil, Argentina, Chile, Ecuador, and Mexico were prominent in the Group of 20 developing nations that pressed vigorously at the WTO for cuts in developed country agriculture subsidies. The United States complained at the time that the group was engaged in confrontational tactics that were more directed at making a point than making a deal. After Cancun, USTR Zoellick traveled to the Caribbean to discuss the FTAA and other matters. At the first FTAA meeting after the Cancun failure, an October 2003 TNC meeting, a group of 13 FTAA countries—supported by the United States—called for the original, comprehensive vision of the FTAA to be retained. These countries, along with the United States, further urged that the FTAA's market liberalization commitments be highly ambitious in a number of areas, including intellectual property, investment, services, and government procurement. Although nearly all other FTAA countries expressed willingness to continue negotiating in all nine issue areas and continued commitment to meet the January 2005 deadline for concluding the FTAA, Brazil indicated a limited willingness to undertake new rules in these areas, citing a need to maintain its negotiating leverage in the WTO Doha Round and to preserve flexibility in these issues. Certain other countries also had reservations. Participants in FTAA negotiations thus effectively broke into two "camps," articulating their competing visions of an FTAA agreement under the separate banners of U.S. and Brazilian leadership. In view of the sharp differences in vision for the FTAA, trade ministers recognized the need to provide political guidance for negotiators. FTAA countries wanted to avoid an outcome similar to the failed September 2003 WTO ministerial in Cancun, Mexico. Participants recognized that keeping all 34 FTAA countries engaged in the negotiations was critical and that flexibility would be required to do so. In particular, a number of participants feared that failure to accommodate Brazil's demands would prompt it to abandon the negotiations, dashing their hopes of improved trade terms with South America's largest market. As host of the Miami ministerial, the United States was particularly invested in a successful outcome. USTR and certain other U.S. officials had been working hard all year to bring about a successful ministerial by working closely with officials from the state of Florida and with representatives of Broward County and the city of Miami, which organized the event. In early November, USTR Zoellick hosted a mini-ministerial meeting among key FTAA nations in Lansdowne, in preparation for the Miami ministerial later that month. At the Miami ministerial, after obtaining informal input from some members at the early November mini-ministerial meeting organized by the United States, the co-chairs, the United States and Brazil, proposed a new framework for the FTAA agreement as a means to move forward. Ministers in Miami discussed and approved the proposed new structure, which gives each country the flexibility to decide, according to its needs, sensitivities, objectives, and capabilities, whether to assume commitments beyond the common set, which will be applicable to all 34 countries.
Specifically, ministers instructed the TNC to: (1) develop a "Common and Balanced Set of Rights and Obligations" applicable to all 34 countries that would include provisions in the nine areas under negotiation since 1998 and (2) establish procedures for negotiations, possibly on a plurilateral basis, for countries interested in negotiating additional disciplines and benefits. FTAA participants, trade experts, and other analysts have commonly referred to these two components using a variety of terms (e.g., tiers, tracks, etc.). For the purposes of this report, we will use lower tier when discussing the baseline or "Common Set of Rights and Obligations" that will apply to all countries, and upper tier when referring to the plurilateral component of additional obligations that will be entered into by individual countries on a voluntary basis. The Miami instructions represented a substantive shift from the previous vision of the FTAA as a single undertaking, applying equally to all 34 nations, to that of a two-tiered or two-track agreement with varying degrees of national commitments to cut trade barriers and abide by trade rules. The two tiers combined would constitute the FTAA. Table 1 provides a brief description of the two-tiered structure. For the common set, or lower tier, ministers agreed that all nine areas previously under negotiation would be covered. They also agreed to the principle that the same rules would apply to all 34 participants. However, the specific obligations under each issue were not determined and were left to the TNC to negotiate in the future. For the upper tier, country participation, issue coverage, and specific obligations were to be worked out by the participating countries. However, the TNC was to develop procedures governing these negotiations as a component of the overall FTAA. Thus, the Miami ministerial declaration left unanswered questions of how ambitious the FTAA as a whole would be and what members could expect to gain in key issues and markets of interest. However, ministers stated that they expect that this new framework would "result in an appropriate balance of rights and obligations where countries reap the benefits of their respective commitments." U.S. officials stress this means countries will "get what they pay for" in the negotiations. Some experts have said that the Miami compromise was a pragmatic political decision to avoid a collapse of the Miami ministerial meeting and a breakdown in the FTAA talks, even if it lacked details on how the new structure should be instituted by the TNC. Although ultimately accepted as a way to salvage the talks, the new two-tier structure disappointed some member countries. At the ministerial, several countries expressed disappointment that this new structure for the FTAA would reduce their potential gains through the agreement and urged that any two-tier arrangement be temporary in character.
For example, at the closing press conference for the Miami ministerial, Mexico’s Foreign Minister noted that Mexico had “had the expectation of achieving greater progress, greater integration, and greater definition of what we want in the hemisphere for free trade.” Chile’s trade minister, while acknowledging the need to make headway in the face of economic and political sensitivities, noted that when it committed to pursuing an FTAA, Chile had been “looking for a comprehensive and ambitious agreement that would cover all the disciplines.” In general, such countries felt the new structure cast doubt on whether the FTAA agreement would ever attain the promise of trade liberalization and hemispheric-wide integration that had been collectively envisioned for nearly a decade. As a result, they urged intensive efforts to find common ground in the months ahead. Ministers at Miami set goals for concluding market access negotiations by September 2004 and the entire FTAA by January 2005 (see fig. 4). However, FTAA countries made little progress to institute the new two-tier structure in 2004 and thus did not meet these negotiation deadlines. The February 2004 TNC meeting was recessed after failing to complete the two tasks given them by ministers at Miami: (1) to define the lower tier of rights and obligations that would apply to all 34 nations and (2) to develop procedures for plurilateral negotiations, resulting in the indefinite suspension of formal talks among all FTAA members. At the close of the February 2004 TNC, the U.S.-Brazil co-chairs cited the complexity of the task and shortness of time as being their primary consideration in recessing the meeting without agreement. Hopes for reconvening the TNC later faded as ongoing efforts by the U.S. and Brazilian co-chairs to bridge outstanding differences reached a halt in mid-2004. Sharply different visions for the FTAA’s common rights and obligations were articulated at the February meeting. Ahead of the February meeting, the United States worked with four other countries (Canada, Chile, Costa Rica, and Mexico) to develop a common strategy. The United States was unsuccessful in reaching agreement with Brazil on the format and participants for a more inclusive preparatory meeting, and thus it was never held. At the February TNC meeting, the United States joined with a group of 13 nations (including the 4 it worked with ahead of the meeting) in making a proposal for the common set. Brazil and its Mercosur partners also presented a proposal. The U.S.-coalition’s proposal went beyond Mercosur’s in certain respects, whereas the Mercosur proposal went beyond the U.S. coalition’s proposal in others. The two main camps that emerged at the February TNC were roughly similar to the two main camps that emerged in the pre-Miami debate over the FTAA’s scope and depth. After the meeting, both the United States and Brazil complained that their partners were denying them benefits that they deemed were essential to attaining an acceptable balance of rights and obligations in the FTAA. Specifically, a U.S. trade official was quoted as saying that the proposal it presented in concert with 13 other countries reflected a scaling back of its objectives in areas of importance to it, namely, services, IPR, investment, and procurement, in light of the Miami framework. The fact that Mercosur’s proposals did not reflect a scale back in their own ambitions for market access for goods and in agriculture was cited by the U.S. 
official as the primary reason negotiators were not able to strike an acceptable balance at the February meeting. In contrast, in public remarks, Brazil's then-ambassador complained that Brazil is being unfairly labeled as a spoiler in FTAA talks, claimed that even with the Miami compromise the FTAA could still be comprehensive, and expressed concern about the United States and its allies' stance on market access at the February meeting. The Brazilian Ambassador stressed that Brazil needs to ensure that its concerns in the areas of domestic support for agriculture and trade remedies are adequately dealt with and that it will obtain improved access to the U.S. market, particularly for agricultural goods, in order to consider the FTAA a balanced agreement. In effect, according to a senior U.S. official involved in the talks, both sides accused the other of walking away from the Miami compromise. Subsequent informal efforts to work out remaining differences continued until June 2004. While these formal and informal efforts resulted in some progress in defining the rights and obligations for the lower tier, collectively, our analysis suggests that they further reduced the scope of the FTAA's eventual substance in terms of market access and rules on key topics. That is, to the extent common ground was reached, it was often the result of movement in the direction of the proposal with the least ambition on a given issue. No further meetings on the FTAA took place in 2004, and a ministerial meeting slated for that year was never scheduled by Brazil as host. As a result, the scheduled deadline for concluding the FTAA negotiations in January 2005 was missed without agreement. Our analysis suggests that three main factors have inhibited progress on the FTAA. First and foremost, underlying differences between the United States and Brazil and their respective allies on the depth of rights and obligations on key issues continue. Second, negotiations in other forums were given priority over the FTAA, in part because the United States and Brazil deemed that progress there was more possible and could eventually enhance prospects for a mutually advantageous FTAA. Third, two mechanisms intended to facilitate compromise, the U.S.-Brazil co-chairmanship and the two-tier structure, have thus far failed to do so. The U.S. and Brazil's inability to accommodate each other's different negotiating priorities continues to be the basis for the ongoing impasse that halted FTAA negotiations for much of 2004. According to U.S. officials, serious and significant rule-making obligations on such topics as services, IPR, investment, and procurement are essential if the FTAA is to move the hemisphere towards meaningful regional integration. Specifically, the United States seeks greater enforcement of IPR and new commitments that go beyond existing WTO requirements in investment, government procurement, and other issues. The United States is a world leader in these sectors, yet has few multilateral and bilateral agreements with FTAA countries to protect its interests. For example, only 2 of the 34 nations participating in FTAA talks (the United States and Canada) are signatories to the WTO agreement that sets out predictable rules enabling foreign suppliers to compete on an equal footing with domestic suppliers for government contracts. However, Brazil maintains that there is domestic resistance to such reforms, and that agreeing to disciplines in these areas could be costly and limit its ability to influence its economy.
Brazil is a major world producer of commodities such as coffee, oilseeds, sugar, soy, and beef, and, along with Argentina, has been among the most vocal of Mercosur members in insisting that the FTAA involve significant new market access, especially for agricultural products. Domestic sensitivities in many countries regarding these products were always going to complicate the FTAA, and are no less challenging in the new Miami framework involving generally lower ambition. As highlighted below, in the most recent negotiations co-chaired by the United States and Brazil, the 34 governments remained far apart, and agreement has not yet been reached on the extent of rights and obligations on numerous issues. The key sticking points remained market access, agriculture, and IPR. Brazil and its Mercosur partners have argued for up-front commitments that all tariffs will be phased out in the FTAA. However, the United States is not prepared to commit to an outcome to fully liberalize tariffs on all products at this stage of the FTAA negotiations--before tariff negotiations have really begun and before the overall level of ambition of the common set is known. Nevertheless, Brazil says it wants all products to be on the table – agricultural and nonagricultural – and it does not want product exclusions. Previously agreed FTAA guidance states that tariffs on all products will be subject to negotiations. It also established 4 time periods for phasing out tariffs. Both before and after Miami, Brazil unsuccessfully sought language to the effect that the goal of market access negotiations is elimination of tariffs on the entire tariff universe. Brazil’s Ambassador explained that, even since the Miami compromise, Brazil’s goal remains to ensure that the FTAA benefits all of its key export products. However, he expressed concern that the United States and its allies want key Brazilian export market products to be excluded from FTAA tariff elimination. U.S. officials acknowledge that the United States left some Mercosur products off the table at the point at which FTAA negotiations stalled. However, they explain that all of the products excluded from U.S. tariff elimination were agricultural products and that the percentage of agricultural products excluded was not high. U.S. officials had told us that countries making fewer commitments should expect fewer benefits from the FTAA. Most recently, in February 2005, a U.S. official underlined that the degree of market access the United States will offer in the FTAA will depend on what commitments it secures from other FTAA nations. Since the FTAA common set involves fewer market access and rule-making commitments than the United States has received from its bilateral and subregional FTA partners, the FTAA will likely involve fewer U.S. market access benefits, the official said. The United States and Brazil have also been unable to resolve several agricultural issues, including the handling of agricultural domestic supports. As previously noted, the United States has argued that negotiations on domestic supports should be exclusively conducted in the WTO Doha Round because it is not possible to reduce domestic supports solely on a regional basis and without all major subsidizers present. Brazil and its Mercosur partners have called for the elimination of agricultural subsidies, including domestic supports. 
Although in November 2004 Brazil's foreign minister recognized that the only way to reach their goal of eliminating subsidies is through the WTO, Brazil and its Mercosur partners have still sought ways to address agricultural supports in the FTAA. For example, according to a tripartite organization official, Mercosur made a request at the February 2004 TNC to create a hemispheric mechanism "to neutralize the effect of all distorting measures and practices that affect trade of agricultural products within the region." A U.S. trade official confirmed that Mercosur is hoping to secure some concessions on domestic supports—such as compensation in terms of better market access—in the FTAA, but said that the United States has rejected any attempt to negotiate this issue in the FTAA. In fact, several U.S. officials expressed consternation that this issue had resurfaced after the Miami ministerial. Another outstanding issue is whether to provide for the possibility of a special agricultural safeguard—a concept the United States and numerous non-Mercosur nations have also endorsed. A USTR official said that this mechanism would allow countries to address sudden drops in prices for specified goods. A Brazilian official expressed concern that this would "impair real market access" and might be used for protectionist reasons. On export subsidies, the U.S. and Mercosur agree that export subsidies should be eliminated in the hemisphere, but no agreement has been reached on the definition of agricultural export subsidies or how to handle subsidized imports from countries outside the hemisphere. Brazil's unwillingness to commit to binding IPR enforcement obligations is a major source of disagreement between the United States and Brazil. In May 2004, the Brazilian co-chair publicly noted that Brazil does not believe trade sanctions in retaliation for failure to enforce IPR are consistent with the FTAA's goal of lowering barriers to trade. However, he noted that other FTAA countries do not believe voluntary consultations are sufficient for enforcement of IPR. As Foreign Minister Amorim has expressed Brazil's position, the problem is not with enforcement per se, but with the fact that technical assistance and financing are needed to improve Brazil's ability to comply. In a September 2004 speech, Deputy USTR Allgeier stated that the United States wants to focus on implementation and enforcement of countries' existing WTO TRIPS commitments, that the United States has serious, unresolved concerns about Brazil's IPR enforcement, and that the FTAA must ensure that IPR enforcement is being strengthened. In November 2004, USTR Robert Zoellick said that although the United States recognizes it cannot attain in the FTAA the high standards of IPR protection that have been achieved in bilateral FTAs, countries' refusal to commit to enforce IPR obligations in the FTAA was unacceptable to the United States. Reports from the latest (February 2005) meeting indicate IPR remains a key sticking point. Other important differences exist on such issues as services, investment, government procurement, and trade remedies. On services, for example, the extent of and approach to FTAA liberalization and rules are at issue. However, participants have made some progress in narrowing their differences on these issues, notably government procurement and investment.
In response to these and other substantive problems that slowed FTAA talks, participants turned to negotiations in other forums, such as the multilateral WTO talks and subregional and bilateral efforts, where progress looked more immediate. Coupled with the absence during most of 2004 of formal negotiations on the FTAA, this further diminished the momentum behind the regionwide effort. (See app. I.) In particular, the United States and Brazil have focused their energies on the WTO Doha Round and on regional negotiations, such as those among the United States and several Andean nations and between Mercosur and the European Union (EU). In part, this reflected their judgment that progress in these forums was more possible and would ultimately enable greater advances in the FTAA. Other trade experts, however, are not sure that the FTAs and other agreements have worked to advance the FTAA. In 2004, the United States continued to press an aggressive “competitive liberalization strategy,” which is to move its trade agenda on three fronts: multilaterally at the WTO, regionally at the FTAA, and bilaterally with a series of prospective FTA partners. The USTR has noted in its 2004 annual report that since passage of TPA, the United States has already negotiated FTAs with 12 countries including several in the Western Hemisphere— Chile, the Central American countries (CAFTA), and the Dominican Republic—and is in the process of negotiating with 12 more. Senior U.S. officials have stated that the U.S. pursuit of bilateral and multilateral FTAs would advance the FTAA and further its goal of expanded trade in the hemisphere, even if in a step-by-step fashion. For its part, Brazil’s foreign minister has indicated that the WTO talks are more important than the FTAA talks, since the WTO is the “only way to reach goal of eliminating subsidies and other trade distortions.” Brazilian officials also focused on an EU-Mercosur FTA that some believe could strengthen its hand in FTAA negotiations. The EU-Mercosur talks reportedly slowed in the fall of 2004 over many of the same issues that arose in the FTAA, but are expected to restart soon. There are mixed views about whether these bilateral and regional FTAs are having a positive impact on the FTAA. Some trade experts say that FTAs help the FTAA by facilitating free trade among countries, setting common rules, and providing a better understanding of the benefits of free trade. Moreover, these FTAs are achieving the kind of market access and updated trade rules the United States had hoped to secure in the FTAA prior to Miami. In part for this reason, several U.S. business community representatives we spoke with told us they have shifted their focus to other agreements. For example, a representative from the International Intellectual Property Alliance credited recent U.S. FTAs with Morocco, Singapore, and Australia, as setting new standards for IPR protection that are higher than the WTO, and expressed doubt that a 34-nation FTAA will include such high standards. Similarly, a trade group representative from the services community told us he believes that U.S. industries are likely to receive more market access from present and future FTA partners in the hemisphere than they would through the new two-tier FTAA structure. Trade group representatives from the U.S. agricultural community told us that they believe the sector has gained most of the market access it seeks through bilateral FTAs. Some of them now see the FTAA as more of a threat than an opportunity. 
This loss of interest has led other trade experts to argue that FTAs detract attention from the FTAA, create a confusing system of trade arrangements, and raise the bar—possibly beyond others' reach—for new trade rules on issues including services, government procurement, and IPR. On the multilateral front, lack of progress in global trade talks at the WTO also impeded progress in the FTAA negotiations in 2003 and the first half of 2004. As a result, officials told us that during a part of 2004 the U.S. and Brazilian focus shifted from the FTAA toward reaching agreement on a WTO framework. In fact, the United States and Brazil, among others, played leadership roles in intensive negotiations at the WTO and successfully reached agreement on a framework on August 1. The framework in agriculture—a guideline for the next phase of negotiations—represents progress. Among other things, it includes a commitment to eliminate all export subsidies on agriculture by a date certain and specifies that countries with higher levels of trade-distorting domestic supports will be subject to deeper cuts in these supports. However, it falls short of the "modalities" (numerical targets, timetables, formulas, and guidelines) that members had aimed to establish by March 2003 and that are required to actually make tariff and subsidy cuts. In fact, given their success in adopting a package and recent efforts to accelerate progress, WTO nations are now hoping that they will have modalities in place by their December 2005 ministerial, but recognize this as an ambitious goal. WTO negotiations are thus about 2 years behind their originally scheduled date for conclusion. A third factor hindering progress on the FTAA is that two mechanisms intended to facilitate U.S.-Brazil compromise—the new two-tier structure and the co-chairmanship—have thus far failed to do so. At Miami, the United States and Brazil billed the two-tier structure as a way to bridge their differences and enable both their visions of an FTAA to coexist. However, our analyses suggest that in practice, the new negotiating framework added new complications to the negotiations without resolving the U.S.-Brazil centered dispute over the FTAA's ambition. First, since Miami, FTAA negotiators have faced a conceptual problem because they abandoned the original vision in favor of a scaled-back FTAA, the substantive content of which was left largely undefined. Since details on the level of trade liberalization that was envisaged in the common set were not decided at Miami, FTAA participants have interpreted the goals and the nature of the new FTAA architecture differently. Second, interdependence between the two tiers has also complicated net benefit calculations. Member countries will have to trade off offensive and defensive interests in the two-tier framework. This is inherently more complicated to do until the content and obligations of each tier are defined. Third, the United States and Brazil have divergent strategies for instituting the two-tier structure. U.S. officials admit that the U.S. long-term goal is an FTAA modeled on the more ambitious upper tier. The United States' basic premise is that if a country is not willing to undertake higher obligations and new rules for issues of importance to the United States—services, investment, government procurement, and IPR—then it should not expect as much market access for its goods and services.
Brazilian officials, on the other hand, explain that Brazil is trying to achieve balance within the lower tier, including market access for goods and services, and some limited new rules for investment and government procurement. However, Brazil is otherwise generally not willing to accept an FTAA with rules that go beyond those in the WTO. In discussions with us, U.S. and Brazilian officials expressed continued belief that the two-tier structure represents the best way forward for FTAA negotiations. Certain officials from other countries and experts, however, are skeptical. Several officials said the two-tier structure is a symptom of continued U.S.-Brazil failure to agree on an FTAA that provides mutual benefits. They suggest that the two-tier structure needs to be rethought, given the difficulties experienced in instituting it and the potential it creates for moving aspects of issues essential for balance off the negotiating table. Now fearing the prospect that participating exclusively in the lower tier could result in permanent “second class” membership, an FTAA country official who supported the idea suggested to us that a single agreement applicable to all member nations with negotiated exemptions for sensitive products or capacity constraints might be preferable. In our view, the arrangement with the United States and Brazil as co-chairs of the negotiations has complicated the process of moving the FTAA negotiations forward. When negotiations were formally launched in 1998, selecting two of the largest economies in the hemisphere with vastly different interests to share the responsibility of leading the talks seemed logical to some experts, as success in the talks depended upon those two countries working together toward a common goal. Most experts and participants still believe such cooperation is a necessary, if not sufficient, condition for concluding an FTAA. U.S. and Brazilian officials believe that the co-chairmanship reflects the importance of the United States and Brazil in bringing the negotiations to a successful conclusion and keeping countries engaged at senior levels toward that end. However, some participants have questioned whether as co-chairs the United States and Brazil have in practice been able to successfully keep separate their roles of (1) negotiating in their countries’ interest, while (2) impartially leading and finding solutions to move the negotiations forward. As a result, one of the lead FTAA negotiators commented that it may have been preferable to have a neutral chair. Moreover, as co-chairs, the United States and Brazil have the power to set the pace of negotiations by setting schedules and convening meetings. As noted earlier, the co-chairs were unable to agree to hold a preparatory meeting with a cross-section of members ahead of the inconclusive February 2004 TNC. The co-chairs have not reconvened the 34 nation TNC since the February 2004 TNC, and no negotiating group meetings were held in 2004. While for most of 2004 the other member countries gave the United States and Brazil time and space to work out their differences, the co-chair talks came to a halt in June 2004. One lead negotiator suggested to us that since that time neither Brazil nor the United States is effectively leading the negotiations. 
Yet beginning in August 2004, after the WTO framework was agreed to, certain participating countries began coming forward, urging the co-chairs to update them on progress, including prospects for a relaunch and a schedule for re-engaging the entire membership. Until late February 2005, the co-chairs had yet to do so. In comments to us, an official from another country that has pressed for a comprehensive and ambitious FTAA urged the United States and Brazil, as co-chairs, to disavow self-serving stances and to adopt a more flexible approach, rather than using the FTAA to settle bilateral disputes and blocking, rather than advancing, hemispheric negotiations. On the other hand, Brazilian officials were not alone in commenting favorably on the U.S. co-chairs’ personal commitment to the FTAA’s success. Although many participants and experts were pessimistic when we spoke with them in the fall of 2004, they generally believe that integrating the hemisphere is still worth pursuing and remain hopeful about prospects for reviving the FTAA in 2005. Many FTAA experts and country officials we spoke with were pessimistic about the FTAA’s near-term prospects because the FTAA cannot advance until the U.S.-Brazil impasse is broken. Through mid-November 2004, neither the United States nor Brazil had decided to take the first move to break their 6-month stand-off. However, in late November, USTR Zoellick wrote to Brazil’s Foreign Minister Amorim proposing a fresh effort on the FTAA and called for the two sides to meet soon towards that end. Brazil responded positively. On the eve of issuing this report, new efforts began toward rekindling the FTAA negotiations. On January 30, 2005, Ambassador Zoellick and Brazilian Foreign Minister Amorim met to discuss the possibility of renewing FTAA talks. Following that meeting the co-chairs met in Washington, D.C., on February 23 and 24, 2005, and at the end of the meeting reported that some progress had been made in bridging their differences concerning the scope of the FTAA’s common set of obligations. Another meeting has been scheduled for late March to continue those discussions. If the co-chairs reach agreement, they plan to convene a TNC meeting in late April or early May of this year, with the goal of reaching consensus among the 34 participating countries on the instructions for the common set negotiations and on procedures for the plurilateral negotiations. A statement from the co-chairs said that they are hopeful that based on that agreement they would be able to resume FTAA negotiations in June. Nevertheless, it may be instructive to examine the reasons U.S. and Brazilian officials gave to us for their prior reticence to re-engage, based on our fall 2004 interviews--all three of them related to political will. First, several U.S. trade officials suggested the United States has little room to maneuver, especially to ensure that the final FTAA sufficiently meets the objectives of TPA. A U.S. official explained that the United States has already made considerable concessions to Brazil in agreeing to a two-tiered FTAA at Miami. The United States’ subsequent February 2004 proposals on the lower tier also reflected a scale-back from its earlier demands. The U.S. officials we spoke with are still hopeful that the FTAA will eventually deliver meaningful commercial benefits. However, they acknowledged that any benefits are likely to fall short of what it had hoped to secure prior to Miami—or what the U.S. 
business community has come to expect as a result of recent bilateral agreements. This diminished business support has weakened the pressure on U.S. negotiators to seek an accommodation with Brazil. Second, in discussions with us, U.S. and Brazilian officials both expressed a sense that they have made considerable effort to find common ground and showed some skepticism about their partner's commitment. For their part, U.S. officials point to a series of meetings initiated by the USTR, both before and after Miami, as emblematic of U.S. commitment to advance the talks, but say Brazil has seemed to want to hold the FTAA back. According to a U.S. official, the United States had been interested in a substantive FTAA and the administration remains committed to the FTAA because it will be good for the United States and for the region. However, discussions since Miami have helped bring differences in U.S.-Brazil conceptions out in the open, and suggest that Brazil has not reconciled itself to an FTAA that looks anything like what the United States would like to see. U.S. officials also believe they have shown willingness to compromise and express disappointment that Brazil and its Mercosur partners have been unwilling to reciprocate. For example, the USTR told reporters in mid-November 2004 that Mercosur needs to show additional flexibility and be more willing to "give" on issues of importance to the United States in order to "get" what it wants out of the FTAA. On the other hand, Brazilian officials expressed concern to us that Brazil's positions are being mischaracterized or misunderstood. For example, Brazil counters that the kind of opening of industrial and services markets it is prepared to offer would present considerable new opportunities to the United States and other FTAA nations. Brazil has also been willing to go beyond its WTO obligations in some areas, notably investment and government procurement, where the WTO presently has no comprehensive multilateral agreements. Thus, Brazilian officials say, efforts by U.S. officials to label Brazil as "unambitious" are both unfair and unproductive. Third, based on our conversations with U.S. and Brazilian officials, each country also appeared to feel it has a "strong hand" in the negotiations and could afford to wait. Brazil believes better access to its large and growing economy is valued by the United States and has shown its influence on the world stage by playing a central role in WTO negotiations and winning WTO disputes against the U.S. cotton and EU sugar agricultural subsidy programs. U.S. officials argue the United States has had considerable success with an aggressive "competitive liberalization strategy," stating that, taking into account FTAs that are in effect, completed, or under negotiation, U.S. bilateral and subregional free trade efforts involve two-thirds of the hemisphere's non-U.S. population and income. The United States also retains certain leverage associated with its trade laws and preference programs. For example, though not formally linked to its FTAA stance, Brazil's Generalized System of Preferences (GSP) benefits from the United States have recently been placed in jeopardy for alleged failure to adequately protect U.S. intellectual property rights. Some country officials and experts believe that conditions may be more ripe for restarting talks now that the long-standing deadlock in WTO talks has been broken and the U.S. electoral cycle is complete. (Even after the U.S.
elections, Brazil had indicated it was waiting for a new USTR to be named before seriously engaging in FTAA talks.) On the substance, the WTO framework adopted in July 2004 resulted in somewhat clearer commitments regarding further disciplining agricultural subsidies and other issues. Breaking the WTO impasse also could improve the FTAA negotiating atmosphere, given the U.S.-Brazil cooperation it required. Thus, to the extent that it provides reassurance about the direction and thrust of partners’ policies, the WTO progress builds confidence that could provide impetus for restarting FTAA talks. However, several experts we spoke with felt that the WTO framework, while welcome, is not concrete enough to forestall the ongoing insistence by some parties that agriculture subsidy and trade remedy reform accompany an FTAA. Indeed, in January 2005, Brazil’s Foreign Minister stressed that Brazil’s capacity to agree to new rules in the FTAA on IPR and investment depends on securing such reform. One Andean country’s lead negotiator echoed this sentiment, saying the FTAA will remain secondary in priority to other negotiations until the outcome of the WTO Doha Round is clear. Brazilian officials told us that the WTO framework sends a “positive message” for the FTAA, but stressed that what the WTO concretely produces on agriculture remains essential to FTAA progress. Several officials and experts said the lead-up to the November 2005 Summit of the Americas in Argentina could generate forward momentum for the FTAA, although others were less sanguine. Yet, even the optimists feel concluding an agreement will only be possible if FTAA ministers halt the downward spiral in the FTAA’s ambitions and renew their efforts to negotiate a meaningful agreement. Certain nations and U.S. business associations we met with stressed that they stand ready to support a two- tier FTAA, as long as it promises sufficiently large economic gains. Several officials also suggested that building forward momentum will not be a minor undertaking, given the considerable length of time FTAA negotiations have languished. As a result, certain FTAA country officials, Tripartite Committee, and trade experts see taking action by mid-2005 such as extending TPA as critical to finishing the FTAA. Other experts suggest FTAA countries will closely watch Congress’ stance in 2005 on whether to approve the CAFTA as a bellwether for support for broader hemispheric integration. Even so, a number of experts felt the deadline for WTO and FTAA talks would remain linked with final bargaining likely to be made in 2006-07, when a new U.S. Farm Bill may be under consideration (the present U.S. Farm Bill expires in late 2006). Despite concern over the short-term prospects, many experts and officials believe that the FTAA is an idea that is still worth pursuing and are hopeful for re-engagement later in 2005. First, experts argue that the ideals that originally motivated pursuit of an FTAA remain valid. These include the desire to deepen economic integration and improve living standards throughout the hemisphere; the shared goal of fostering political cooperation and strengthening democratic, market-oriented institutions; and the imperative to increase the region’s growth and competitiveness in an ever-more-globalized economy. In this regard, China’s emergence as a global trader has lent further importance to attaining the FTAA, some suggested. Second, officials from many of the nations we contacted continue to anticipate gains from concluding an FTAA. 
Senior U.S. officials have repeatedly and publicly expressed continued commitment to an eventual FTAA. In an October 2004 statement signaling an improved chance of resuming talks after the U.S. election, Brazil’s Foreign Minister stated, “Integration will occur, for better or worse. It will come about through contraband, drug traffic, and guerilla warfare. Or it will be through trade, technology, and investment. Better for it to be the second way.” Nevertheless, various other public remarks by Brazil’s Foreign Minister suggest that the FTAA’s priority is not paramount and that Brazil’s principal interest is in a negotiation with the United States that will yield improved access to the U.S. market. An official from another Mercosur member noted its interest in an FTAA is based on a desire to increase and diversify its exports, a theme echoed by an official from another regional grouping. An official from an existing U.S. FTA partner highlighted its desire to further integrate hemispheric markets and sees the FTAA as integral for promoting hemispheric development. An official from another U.S. FTA partner stressed its strong commitment to the FTAA because it would bring political and economic gains over the medium- and long-term. Officials in another nation pointed out that the FTAA is critical for improving access to Latin American markets, particularly in the Mercosur region. Many could not conceive of the FTAA being officially abandoned given these stakes, and the considerable time, effort, and political capital already invested. A Central American nation representative stressed that it would be foolhardy to abandon the FTAA because it symbolizes the region’s commitment to economic and political progress. Another country representative indicated that the FTAA is a forum in which hemispheric officials at all levels share a vision of where the region aspires to move— which he considers a worthwhile endeavor—even if realizing that vision is “a complex challenge.” A representative of a CARICOM nation expressed hope that the question is not “whether we will have an FTAA, but when.” However, a consistent premise for countries’ commitment to the FTAA is that the final agreement be mutually advantageous and flexibly respond to differing capacities. A Mercosur member, for example, noted that “time frames are important, but in the end, it is more important that countries realize the economic growth, job creation, and narrowing of income disparities that could be achieved by signing an agreement that truly reflects, in the best possible way, the interests of the FTAA’s diverse membership.” After making steady technical progress, FTAA talks slowed in mid-2003 and were essentially at a standstill for over a year. Some U.S., Brazilian, and other FTAA officials think the pause in FTAA talks is an inherent part of achieving an acceptable balance of rights and obligations among the 34 nations participating. However, a number of the participants and experts we spoke with now believe that greater political commitment and decisive involvement is necessary to break the impasse and restore vitality to the flagging negotiations. The missed January 2005 deadline for concluding the FTAA coincided with renewed U.S.-Brazil efforts to find common ground. After their February 23-24, 2005 meeting, the U.S.-Brazil co-chairs issued a joint statement expressing optimism about the progress they had made. A U.S. 
spokesperson expressed hope that a late March meeting would prove successful in closing gaps on remaining issues and enable the co-chairs to restart FTAA talks by reconvening all 34 FTAA nations in early May. Whether there is decisive action in 2005 will determine if the decade-long effort on the FTAA and long-sought vision of hemispheric economic integration will finally come to fruition. We provided draft copies of this report to the Office of the U.S. Trade Representative and the Departments of State, Commerce, and Agriculture on January 4, 2005, and received formal comments from USTR and the Department of Commerce. USTR disagreed with our report, stating that it was an inaccurate and poorly framed portrayal of progress and problems in the negotiations, overemphasized the role of the United States and Brazil in the current impasse, and did not give sufficient weight to U.S. efforts to make progress in the talks. We disagree with USTR's assessment. As detailed in our scope and methodology, we conducted more than 58 interviews, most of them with officials directly engaged in the FTAA negotiations, including with representatives from 17 of the 34 countries and each of the major regional groupings participating in the FTAA talks, tripartite officials and other experts, U.S. officials including USTR officials, and private sector representatives over the period leading up to and after the Miami FTAA ministerial. We also reviewed numerous U.S. and foreign government official documents and private sector submissions related to the negotiations. Moreover, we relied on the expertise developed over the course of our three prior reports and two testimonies on the FTAA issued in the past 4 years. The Chairmen of the Senate Finance Committee and the House Ways and Means Committee asked us to provide an independent perspective on the issues and challenges facing FTAA negotiators and the United States, in its capacity as co-chairman of the negotiations. Our objectives were to assess the progress that was made since our April 2003 report, the factors that have affected progress, and future prospects for the FTAA. We stand by our report's conclusion that FTAA negotiations have not progressed since mid-2003, in large part due to unresolved U.S.-Brazil disagreements, higher priorities, and negotiating structures that have, to date, tended to compound difficulties, rather than facilitate progress. As the USTR letter points out, the FTAA has been a centerpiece for U.S. policy towards Latin America for more than a decade, and as yet, no way has been found to move the negotiations toward a successful conclusion. We provide additional detail in appendix II on our response to USTR's comments, including those areas where we have made modifications to our report. As agreed with your offices, unless you publicly release its contents earlier, we plan no further distribution of this report until 30 days after the date of this report. At that time we will provide copies to interested congressional committees, the U.S. Trade Representative, the Secretary of State, the Secretary of Commerce, and the Secretary of Agriculture. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4347. Additional GAO contacts and staff acknowledgements are listed in appendix IV. 
To conduct our analysis of the progress made in FTAA negotiations since our last report (April 2003), the factors influencing the FTAA's progress, and the FTAA's future prospects, we reviewed public foreign government and official FTAA and executive branch documents. We also reviewed academic and economic literature related to the negotiations and participated in a number of discussions and panels on the FTAA sponsored by institutions such as the Inter-American Dialogue and the Woodrow Wilson Center. We conducted a total of 58 interviews both before and after the November 2003 Miami ministerial, including 21 interviews with U.S. officials from the Office of the United States Trade Representative and the Departments of State, Agriculture, and Commerce. We also interviewed foreign government officials from five FTAA participant countries and one group of countries participating in the FTAA. In addition, we sent a letter soliciting views from the Lead Negotiators of the 34 FTAA participant countries and received 15 oral and/or written responses. In total, we obtained information from 16 of the 34 nations participating in the FTAA talks and from each of the major groupings within the hemisphere. We also interviewed trade and U.S.-Latin American affairs experts at the Council of the Americas, the Inter-American Dialogue, the Center for Strategic and International Studies, and the Institute for International Economics, as well as officials from the National Association of Manufacturers, Coalition of Services Industries, United States Chamber of Commerce, and Caterpillar. We also reviewed written private sector input provided to USTR by numerous business associations and private sector advisory committee members. We held several discussions with each of the multilateral institutions that provide technical assistance to the FTAA negotiations: the Organization of American States, the Inter-American Development Bank, and the Economic Commission for Latin America and the Caribbean. In November 2003, we attended meetings associated with the FTAA trade ministerial, including the Americas Business Forum and the Americas Trade and Sustainable Development Forum. This report is also based on our past work on the FTAA negotiations in the Western Hemisphere, such as pre- and post-Miami briefings for requesters and previous public reports and testimonies (see related GAO products). We conducted our work from April through December 2004 in accordance with generally accepted government auditing standards. The following are GAO's comments on the U.S. Trade Representative's letter dated February 10, 2005. 1. Our report does not attempt to assign blame for the slowdown of the talks. We conducted extensive research, including in-depth interviews with numerous participants both before and after the Miami ministerial, to identify key developments and factors that were affecting the FTAA's progress. In general, those with whom we spoke were concerned about the lack of progress in the FTAA. However, the tenor of remarks was generally constructive, in recognition of the complexity of the task faced by the United States and Brazil as co-chairs seeking to finalize an FTAA by bridging substantive differences among the 34 diverse nations of the Western Hemisphere. In terms of our characterization of the role of the U.S. and Brazil, the evidence we collected clearly indicates that the U.S. and Brazil did play the key roles in the negotiating dynamics both as co-chairs and as proponents of different visions of the FTAA. 
Moreover, outstanding U.S.-Brazil disagreements over key issues were identified as the most important cause of the present impasse by the FTAA participants and trade experts from whom we obtained input. Regarding the co-chairmanship, see comment 11. 2. Our report describes the chronology of events that occurred since the Miami Ministerial and the level of activity in the various ongoing negotiations. The United States and Brazil are actively involved in negotiations at three levels: regionally in the FTAA, subregionally such as through bilateral FTAs, and globally at the WTO. In some cases, the same personnel are working on multiple negotiations. Officials from both countries indicated that, consistent with their respective "competitive liberalization" strategies, they were channeling their attention and efforts to those negotiations showing the most immediate promise, which, for most of 2004, were not on the FTAA. The report reflects this, and also notes that officials from both countries expressed the view that progress in other negotiations would eventually contribute to progress on the FTAA. GAO is not questioning these judgments. 3. We did not assign blame for the slowdown in the negotiations to any of the parties. The report's objectives were to describe progress in the negotiations and to identify factors affecting the FTAA's progress. It is undisputed that there are outstanding U.S.-Brazil disagreements over key issues. These disagreements were identified by those officials and experts we contacted as the most important cause of the present impasse. The report does not "choose sides" on the issues but rather explains the basic differences between the parties' positions. It then notes that an unwillingness or inability to accommodate each other's priorities is at the root of the present impasse. Our report presents these issues in the context of a post-Miami, two-tier FTAA that would likely involve less ambition than prior to Miami, but which remains undefined. 4. We have modified our report to elaborate upon U.S. initiatives to spur progress and on hemispheric efforts to improve dialogue with civil society and implement the Hemispheric Cooperation Program. However, our understanding is that these initiatives have only progressed since Miami with respect to countries with whom the United States is negotiating bilateral or subregional FTAs. 5. Our report already identifies the question of whether to change the FTAA's originally envisaged scope and depth as the central dilemma facing negotiators prior to Miami and includes all of the information USTR describes. It also includes the alternate perspective held by Brazil and its Mercosur partners, namely that, in their view, the United States also effectively called into question the FTAA's original terms of reference by refusing to discuss the topics of domestic supports for agriculture and trade remedies within the FTAA, due to their systemic nature and ongoing WTO negotiations. In our view, the presentation of both positions, in the context of the WTO Doha round's launch in November 2001 and its ensuing delays, yields a balanced and accurate report. 6. Our report notes that the U.S. offer that differentiated among nations was in keeping with the shared goal of providing smaller economies better treatment. The report also notes that some Mercosur members did not submit their market access offers on several rule-making topics on schedule. 
However, this section of the GAO report is intended to explain the slowdown in FTAA talks and the developments that led to the change in the FTAA's structure at the Miami ministerial. Brazil reacted publicly and negatively to the differentiated U.S. market access offer, and cited it as one reason for its proposed scale-back, and we describe this development. 7. GAO acknowledges the extensive efforts made by U.S. government and Miami officials to make the Miami ministerial successful, and, in response to USTR's comments, has added specific language to that effect, as well as references to the three meetings Ambassador Zoellick organized in an effort to provide direction and identify ways to move the talks forward. 8. The GAO report provides a detailed description of the Miami ministerial declaration as issued by ministers. In that section, the report makes a clear distinction between the exact words used by ministers and our own use of "lower" and "upper" tiers to describe the new, two-tier FTAA structure. We believe the terms lower and upper tiers are a concise and intuitive way of describing the notion of a baseline of commitments common to all 34 members and another set of supplementary, deeper commitments undertaken on a voluntary basis. Because of the need to refer repeatedly to these concepts throughout the rest of the report, we disagree with USTR's comment and have not modified our report. 9. GAO modified its report to include the detailed developments described by USTR, including its efforts to work with other countries, in connection with the inconclusive February TNC meeting. The GAO report now also notes that Brazil and its Mercosur partners presented a proposal at the February meeting and that the U.S. and its partners' proposal goes beyond Mercosur's in certain respects, whereas the Mercosur proposal goes beyond the U.S. coalition's proposal in others. We note that the two main "camps" at the February TNC are roughly similar to the two main "camps" that emerged in the pre-Miami debate over the FTAA's scope and depth of obligations. The report also notes that after the February TNC meeting, both sides complained that the other side's proposal denied them commercial benefits that they deemed were essential to attaining an acceptable balance of rights and obligations in the FTAA agreement. 10. We believe the report accurately characterizes the role of the co-chairmanship as involving management of the overall negotiating process, scheduling and chairing of senior-level meetings, and facilitating consensus. However, the report also notes that those we spoke with were concerned that thus far the U.S.-Brazil co-chairmanship has had limited success in these areas and in moving the negotiations forward generally. Moreover, we note that even U.S. and Brazilian officials told GAO that the co-chairmanship has complicated progress. For example, a senior U.S. official who is directly involved in the negotiations told us the United States and Brazil could not agree on the format and attendees for a U.S.-proposed meeting to coordinate positions ahead of the February 2004 TNC, and as a result, the meeting never occurred. A Brazilian official with intimate knowledge of the co-chairmanship told us that in practice, the co-chairmanship means the U.S. and Brazil must agree on each document before it can be distributed, slowing progress. GAO notes that USTR's agency comments indicate that the co-chairs still do not agree on the pace and direction for the FTAA. 
GAO believes, and its interviews suggest, that this lack of agreement has complicated the co-chairmanship's capacity to spur FTAA progress. 11. See comment 1. 12. Although they often acknowledged that the two-tier structure was agreed to by all 34 participants at Miami, those we spoke with generally expressed disappointment that the two-tiered structure has not, as hoped, propelled the process forward, nor provided members with a workable roadmap for resuming pursuit of an FTAA. In particular, the two-tiered structure has not resolved differences in vision over the FTAA's ambition, and some experts felt it has complicated the task of striking an acceptable balance of rights and obligations among FTAA nations. 13. See comment 2. 14. GAO disagrees. As USTR is aware, GAO extensively reviewed official FTAA and U.S. government documents and had several meetings with U.S. and Brazilian officials to discuss and analyze the outstanding issues. We highlight in the report those issues that emerged as the key sticking points as of when the talks broke down, according to our interviews with officials directly familiar with the talks. We acknowledge that there are more unresolved issues and have made minor wording changes in the report to make that more clear. 15. GAO disagrees. The report as submitted to USTR for comment states that Brazil's ambassador expressed concern that key Brazilian products would be excluded from FTAA tariff elimination—not that the U.S. plans to reduce its market access offer. 16. USTR mischaracterizes the treatment of this issue in the report. The report describes the competitive liberalization policy of the United States and the priority given by the United States to pursuit of the WTO Doha round and sub-regional initiatives. In addition, the report notes U.S. officials' belief that these forums have already yielded important progress and may ultimately be helpful to the FTAA. The report also states that those we spoke with felt the progress in the WTO was helpful to the FTAA and that further WTO progress was desirable. With respect to U.S. pursuit of sub-regional agreements such as bilateral FTAs, consistent with the evidence collected, the report notes that the United States believes that these are advancing U.S. trade goals in the hemisphere in a step-by-step fashion, but states that not all participants and observers are convinced that these are helpful to the FTAA and to hemispheric integration generally. The following is GAO's comment on the U.S. Department of Commerce's letter received January 24, 2005. 1. GAO updated the report to reflect this. In addition to those listed above, Jose Martinez-Fabre, Mark Keenan, Michelle Munn, Jonathan Rose, Jamie McDonald, Etana Finkler, and Ernie Jackson made key contributions to this report. World Trade Organization: Cancun Ministerial Fails to Move Global Trade Negotiations Forward; Next Steps Uncertain. GAO-04-250. Washington, D.C.: January 15, 2004. International Trade: Intensifying Free Trade Negotiating Agenda Calls for Better Allocation of Staff and Resources. GAO-04-233. Washington, D.C.: January 12, 2004. Free Trade Area of the Americas: United States Faces Challenges As Co-Chair of Final Negotiating Phase and Host of November 2003 Ministerial. GAO-03-700T. Washington, D.C.: May 13, 2003. Free Trade Area of the Americas: Negotiations Progress, but Successful Ministerial Hinges on Intensified U.S. Preparations. GAO-03-560. Washington, D.C.: April 11, 2003. 
World Trade Organization: Early Decisions on Key Issues Vital to Progress in Ongoing Negotiations. GAO-02-879. Washington, D.C.: September 4, 2002. Free Trade Area of the Americas: Negotiators Move Toward Agreement That Will Have Benefits, Costs to U.S. Economy. GAO-01-1027. Washington, D.C.: September 7, 2001. Free Trade Area of the Americas: April 2001 Meetings Set Stage for Hard Bargaining to Begin. GAO-01-706T. Washington, D.C.: May 8, 2001. Free Trade Area of the Americas: Negotiations at Key Juncture on Eve of April Meeting. GAO-01-552. Washington, D.C.: March 30, 2001. World Trade Organization: Progress in Agricultural Trade Negotiations May Be Slow. GAO/T-NSIAD-00-122. Washington, D.C.: March 7, 2000. World Trade Organization: Seattle Ministerial: Outcomes and Lessons Learned. GAO/T-NSIAD-00-86. Washington, D.C.: February 10, 2000. World Trade Organization: Seattle Ministerial: Outcomes and Lessons Learned. GAO/T-NSIAD-00-84. Washington, D.C.: February 8, 2000. Agricultural Trade: Changes Made to Market Access Program, but Questions Remain on Economic Impact. GAO/NSIAD-99-38. Washington, D.C.: April 5, 1999. | If completed, the Free Trade Area of the Americas (FTAA) agreement would encompass an area of 800 million people and about $13 trillion in production of goods and services, making it the most significant regional trade initiative presently being pursued by the United States. The 34 democratic nations of the Western Hemisphere formally launched negotiations towards a FTAA in 1998, and set a January 2005 deadline for concluding a FTAA agreement. GAO was asked to analyze (1) progress made in FTAA negotiations since GAO's last (April 2003) report (2) factors that have been influencing the FTAA's progress; and (3) future prospects for the FTAA. USTR disagreed with our report, stating it was a poorly framed portrayal of progress and problems in the negotiations, overemphasized the role of the United States and Brazil in the current impasse, and did not give sufficient weight to U.S. efforts to make progress. GAO made several changes in response, but disagreed with USTR's assessment. The Departments of State, Commerce, and Agriculture provided technical comments, which we incorporated. Since our April 2003 report, FTAA negotiations reached an impasse that remains unbroken. Prior to the November 2003 FTAA Ministerial in Miami, negotiators made technical advances, but differences over the scope and depth of obligations in the FTAA slowed substantive progress. Despite adopting a new structure at Miami, negotiations have been suspended since early 2004, and the scheduled conclusion of the FTAA in January 2005 expired without agreement. This spurred recent efforts to re-start the talks. Three factors have been impeding progress in the FTAA negotiations: (1) the United States and Brazil have made little progress in resolving basic differences on key negotiation issues, (2) member governments have shifted energy and engagement from the FTAA to bilateral and multilateral trade agreements, and (3) two mechanisms intended to facilitate progress--a new negotiating structure and the co-chairmanship by the U.S. and Brazil--have so far failed to do so. Although in the Fall of 2004 participants and experts were pessimistic about near-term prospects, many believe that integrating the hemisphere is still worth pursuing and hope that FTAA talks can be revived in 2005. 
Some believe that progress on agriculture at the World Trade Organization and the upcoming 2005 Summit of the Americas could spur movement on the FTAA. However, many still see finally concluding the FTAA as linked to further WTO progress and to the mid-2005 renewal of U.S. Trade Promotion Authority, which facilitates U.S. Congressional approval of trade agreements. Nevertheless, officials from many of the nations and regional groups we contacted indicate continued commitment to establishing a mutually beneficial FTAA. |
Through its Environmental Management (EM) program, DOE is responsible for environmental restoration, waste management, and facility transition and management at 15 major contaminated facilities and more than 100 small facilities in 34 states and territories. These facilities encompass a wide range of environmental problems, including more than 7,000 locations where radioactive or hazardous materials were released into the environment; almost 200 tanks that contain high-level radioactive waste from nuclear weapons production, some of which have leaked or could explode; and 7,000 production facilities that are now idled and in need of deactivation, decontamination, and decommissioning. For decades, DOD has operated industrial facilities that generated, stored, or disposed of hazardous wastes. The types of hazardous wastes and contaminants that require cleanup at the majority of DOD's installations are also found at most private industrial operations. The primary contaminants are petroleum-related products such as fuels, solvents, corrosives, and paint strippers and thinners. Contamination has usually resulted from improper disposal, leaks, or spills. Some unique military substances, such as nerve agents and unexploded ordnance, are also found at DOD's installations. In 1984, the Congress established the Defense Environmental Restoration Program (DERP) to evaluate and clean up contamination resulting from DOD's past activities. DERP's primary goal is to protect human health and the environment from risks posed by contaminated sites. Since 1984, DOD has identified approximately 20,000 potentially contaminated sites (10,000 of which it believes are contaminated) at over 1,700 installations, and approximately 3,200 potentially contaminated sites at about 2,200 formerly used DOD installations in the United States. In cleaning up their sites, DOD and DOE must comply with two major federal environmental laws—the Resource Conservation and Recovery Act of 1976, as amended (RCRA), and CERCLA—as well as with state environmental laws and regulations. RCRA regulates the management of facilities that treat, store, and dispose of hazardous wastes and the cleanup of hazardous wastes released from such facilities. CERCLA governs the cleanup of inactive waste sites—that is, sites where disposal is no longer occurring. The Environmental Protection Agency (EPA) is responsible for administering both acts, but EPA may authorize state agencies to implement all or part of its RCRA responsibility. To implement its responsibilities under these acts, DOE has entered into interagency compliance agreements with EPA and the states. These agreements identify activities—generally called milestones—and schedules for achieving compliance, many of which are legally binding and enforceable. Both departments are also involved in complying with other laws such as the Clean Air Act, the Clean Water Act, the Safe Drinking Water Act, the National Environmental Policy Act, and the Federal Facility Compliance Act. Cleaning up these departments' sites is an enormous task that, in the case of DOE, is likely to span multiple generations. Over the last several years, the total estimated cost of the DOE cleanup has risen from about $100 billion in 1988 to $230 billion, with a high-end estimate of $350 billion. DOD currently estimates its total costs, from its inception, at almost $39 billion. The huge cost of cleaning up the weapons complex has been a matter of growing concern, especially to the Subcommittee on Military Procurement. 
We have reported repeatedly on many issues that have and will affect the cost of the cleanup, including the need for a national, risk-based strategy to set realistic priorities; the need for DOE to more effectively address the complex technical problems that it faces in cleaning up its most vexing problems, such as the high-level tank wastes at Hanford; and the need for effective contractor management. At your request, we would like to address several issues of specific interest to the Subcommittee. These issues include how legislation can affect cleanup costs, ways to reduce cleanup costs, DOE’s privatization initiative, and how excess carryover balances could be used to fund DOE’s cleanup efforts. Our August 1994 report on the impact of incorporating land use planning decisions into cleanup decision-making stated that incorporating more realistic land use assumptions into the selection process for a cleanup remedy under CERCLA could result in significant cost savings—from $200 million to $600 million annually, according to DOE’s Assistant Secretary for EM. Our report noted that DOE and EPA had been assuming that all of DOE’s facilities would be cleaned up so that they could be used for unrestricted use. Consequently, the most stringent environmental requirements were imposed on every cleanup project. However, we found that because CERCLA does not specifically address using alternative land uses, such as industrial parks, EPA’s policy had been to assume residential use in its decisions—potentially the most costly cleanup requirement. Since our report was issued, DOE has begun to work with local stakeholder groups and develop land use plans for its sites. Additionally, in May 1995, EPA issued a directive indicating that cleanup decision-making should reflect “reasonably anticipated future land use” and that this could lead to more expedited, cost-effective cleanups. The practical effect of this directive is not clear. For example, CERCLA states that cleanup alternatives that permanently treat contaminants are preferred. Since some land uses may rely on institutional controls, such as deed restrictions and fencing, to prevent access to the contaminated area, it is not clear whether EPA will be able to consider these types of controls a permanent solution. As we noted in our report, if the Congress agrees that land use planning should be used in cleanup decisions, it could amend CERCLA to provide EPA with more specific direction. DOE’s facilities are subject to the cleanup actions and procedures specified by EPA under CERCLA as well as to RCRA-related requirements for corrective action established by EPA or a state regulatory agency. The need to coordinate the requirements of RCRA and CERCLA has created the potential for delays and increased costs. For example, our December 1994 report stated that officials at DOE’s Savannah River Site were preparing additional documents to meet CERCLA’s requirements, at a cost of about $33,000, for a facility that had been cleaned up and closed in 1990 under RCRA. DOE officials acknowledged that DOE would not be conducting any additional cleanup or disclosing any new information in preparing the required documents. Such problems could continue, since much cleanup work remains to be done, and additional DOE facilities have come under CERCLA regulation. DOE and EPA have recognized the potential impact of this duplication. 
DOE has developed an approach where it attempts to avoid duplication by specifying a lead regulator (either EPA or the state) for each cleanup project. Similarly, EPA is developing guidance on designating a lead regulator which it expects to issue in the summer of 1996. While this approach might solve the problem, it will depend on the cooperation of DOE and the EPA regions and states that oversee DOE’s facilities. Absent such cooperation, problems with duplication between RCRA and CERCLA could continue to affect the cost of the cleanup. In July 1995, we issued a report to the Subcommittee on Military Procurement examining DOE’s approach for estimating the savings it could achieve through the deactivation of surplus facilities. We found that deactivation—removing radioactive and hazardous materials from unused buildings—can save money. In estimating the net savings that DOE could realize for the 11 projects for which sufficient data were available for analysis, we found that the projects could yield a net savings of $458 million over their expected life. Despite the significant savings that some deactivation projects can generate, DOE did not have a consistent method for determining the relative savings among projects, and without a more consistent method, DOE could select the wrong priority for projects it intends to deactivate. We recommended that DOE develop a more reliable method for estimating savings and use this method to set priorities for deactivation projects. DOE agreed with our recommendations and said it would develop guidance on estimating savings and use the guidance to determine facility deactivation priorities. Currently, we are examining for you how DOE could use a process known as “removal actions” to speed the environmental restoration of its sites. A removal action shortens or eliminates some of the planning steps, such as the remedial investigation and feasibility study, normally used for full-scale remedial actions under CERCLA. Although removal actions are sometimes used to respond to emergencies or other urgent circumstances, they can also be used in more routine situations at federal facilities. Removal actions have been used for a variety of cleanups, including treating ground and surface water and excavating contaminated soil. While our work is not complete, significant potential exists to use this less-restrictive process at many DOE sites at a significant cost savings. As part of its initiatives to reduce the cost of the cleanup, DOE is now proposing to privatize portions of the cleanup, most notably, the vitrification of the high-level waste in the tanks at its Hanford facility. Rather than constructing and operating its own facilities to treat the tank waste, DOE is considering having a company or a consortium of companies finance, design, build, and operate pretreatment and treatment facilities and deliver the finished product—in this case, vitrified waste encased in stainless steel containers—to DOE for a fee. DOE expects this approach to save billions of dollars because the potential for innovation in the marketplace could lead to greater efficiencies and improved performance. A request for proposals to design the first phase of this effort was issued in February 1996, and DOE expects to award competing contracts in August 1996. It is important to recognize that for all practical purposes, DOE’s activities are already privatized. Specifically, DOE primarily relies on management and operating contractors to conduct its programs at its major sites. 
Under this concept, the government assumes most of the risk for the operations, while the contractor is paid on a cost-plus-award-fee basis. What sets DOE’s privatization initiative apart from its traditional approach is DOE’s attempt to shift responsibility for financing and much of the risk onto the private contractor. Although we have not evaluated DOE’s privatization initiative, we have conducted numerous reviews of DOE’s management of the cleanup and of the Hanford tank farms. You asked us to identify issues that the Congress should consider in evaluating DOE’s privatization proposal. While there are many issues to consider, we believe three are the most critical: Has DOE demonstrated that privatizing the cleanup of the tank farms will reduce the overall life cycle costs to the taxpayer? As our work has demonstrated, considerable uncertainty exists about the contents of the tanks and the effectiveness of many of the technologies needed to be successful. It is possible that the “risk premium” demanded by a private entity to cover these uncertainties could exceed the efficiency gains that might be realized by privatization. Has DOE adequately defined what liability the government should assume and what liability should be borne by the private firms? According to our past work, DOE has not used a consistent approach to indemnify its cleanup contractors, and some contractors have received more favorable treatment than others. Again, given the substantial risk involved, the issue of indemnification bears close scrutiny to ensure that the government does not assume so much of the risk that the effort becomes privatized in name only. Has DOE determined who will oversee the private firm for compliance with environmental, nuclear, and health and safety regulations? The facilities to treat Hanford’s high level waste will involve hazardous, radioactive materials potentially dangerous to workers and the public. This will require the coordination and cooperation of many agencies, including EPA, the Nuclear Regulatory Commission, the state of Washington, and the Defense Nuclear Facilities Safety Board. In addition to making the cleanup more cost effective, an additional way to provide funds for DOE’s cleanup is through the use of excess carryover balances of uncosted obligations and unobligated balances. Over the last several years, the Congress has reduced DOE’s request for new obligational authority and recommended that DOE use balances remaining from prior years’ obligational authority that are carried over into the new fiscal year. DOE’s EM program had about $1.8 billion in such carryover balances at the end of fiscal year 1996. While DOE needs some carryover balances to pay for program commitments made in prior years that have not been completed, the Department’s large and persistent carryover balances have raised concern in the Congress, and especially in the Subcommittee on Military Procurement, about whether DOE’s carryover balances exceed the minimum needed to support its programs. Over the last several years, we have consistently found that DOE had hundreds of millions of dollars in carryover balances that were not needed for their identified purpose, were not tied to specific needs, or were in excess of expected needs. For example, last year, we identified $46.2 million reserved for 15 environmental management projects at the Savannah River Site that were no longer needed because of cost underruns, reductions in the projects’ scope, or cancellation of projects. 
These persistent findings led us to review whether DOE had an effective approach for identifying carryover balances that exceed its program requirements and may be available to reduce its budget request and whether DOE's process could be improved. We found that in formulating a budget request, DOE officials do not use a standard, effective approach for identifying excess carryover balances that could be used to reduce DOE's budget request. Instead, DOE makes broad estimates of the potentially excess balances in its programs. For example, EM proposed the use of $300 million in carryover balances for its fiscal year 1996 budget. According to EM officials, that amount was not based on any detailed analysis, and only after it was proposed did the program identify where the available balances might be found. As a result, DOE cannot be sure it has reduced its balances to the minimum needed to operate its programs. Our forthcoming report will make recommendations on how DOE can better estimate the carryover balances it needs to operate its programs and make available additional resources to pay for its efforts. Addressing DOD's environmental problems also represents a significant undertaking. Cleanup and compliance program costs (including Base Realignment and Closure [BRAC] costs) make up 86 percent of DOD's total $5 billion fiscal year 1996 budget estimate for its overall environmental security program. Cleanup costs, excluding BRAC, total $1.6 billion for fiscal year 1996, and compliance costs, excluding BRAC, total $2.2 billion. Consequently, ensuring that these programs are well managed has been an ongoing concern of the National Security Committee and its subcommittees. In its 1994 annual report to the Congress, dated March 1995, DOD estimated that the cost of cleaning up all of its currently identified contaminated sites will total $38.9 billion. Such an immense undertaking and limited annual funding require that DOD address the most severely contaminated sites first. In April 1994, we reported that DOD had not effectively prioritized the cleanup of its contaminated sites and that some sites that were identified as high priority posed less of a risk to human health and the environment than sites that were not on the high-priority list. We also reported that DOD's cleanup had proceeded slowly and that relatively few hazardous waste sites had been cleaned up. Citing congressional concerns and our report, DOD began to implement a risk-based prioritization system. In May 1994, an inter-military service working group developed procedures to prioritize cleanups on the basis of relative risk. Historically, priorities for cleanup were established at the field level using a variety of methods and factors—often by DOD and regulatory personnel—as part of negotiated legal agreements that included study and cleanup milestones. However, the legal agreements did not always ensure that sites posing the greatest risk to human health and the environment were cleaned up first. In the summer of 1994, DOD issued guidance to implement the relative risk model to place sites in the DERP into high, medium, and low groups. Assignment to a relative risk group considered (1) site contamination (What chemical concentrations are there?), (2) paths that the materials could travel (Is the contamination moving or will it move?), and (3) potential contacts that the contaminants could have with people, animals, or plants (Are there humans or sensitive environments nearby that could be adversely affected?). 
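To make the three-factor grouping logic above concrete, the sketch below shows one way such an evaluation could be expressed in code. It is a hypothetical illustration only: the factor ratings, numeric scores, and cutoffs are assumptions chosen for clarity, not DOD's actual relative risk site evaluation rules.

```python
# Hypothetical sketch of a three-factor relative-risk grouping.
# The rating scales, scores, and cutoffs below are assumed for
# illustration; they are not DOD's actual evaluation criteria.

def relative_risk_group(contamination: str, pathway: str, receptors: str) -> str:
    """Assign a site to a high, medium, or low relative-risk group.

    contamination: "significant", "moderate", or "minimal"
    pathway:       "evident", "potential", or "confined"
    receptors:     "identified", "potential", or "limited"
    """
    scores = {
        "significant": 3, "evident": 3, "identified": 3,
        "moderate": 2, "potential": 2,
        "minimal": 1, "confined": 1, "limited": 1,
    }
    total = scores[contamination] + scores[pathway] + scores[receptors]
    if total >= 8:
        return "high"
    if total >= 5:
        return "medium"
    return "low"

# Example: a heavily contaminated site with a migrating plume near residents.
print(relative_risk_group("significant", "evident", "identified"))  # high
```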
DOD expected to complete relative risk evaluations by July 1995 for the estimated 10,000 sites requiring cleanup. However, as of February 1996, evaluations had been done for 7,450 of the sites. These evaluations were to be used as a primary tool for prioritizing cleanup efforts for the fiscal year 1996 budget cycle and making funding decisions. However, the lack of relative risk evaluations for the remaining sites impedes DOD’s ability to prioritize and sequence its cleanup work. In addition, more than one-half, or about 4,000 of the 7,450 sites have been categorized as high risk. DOD and the military services plan to spend 83 percent of their fiscal year 1996 cleanup funds on sites in the high relative risk category. As shown in figure 1, the remaining 17 percent of expenditures is for sites ranked medium, low, or not evaluated. Generally, no further risk distinctions are made among the high risk sites, except for a Navy and Marine Corps effort to prioritize sites in EPA Regions 9 and 10. Not identifying the worst sites among this large number of high risk sites could impede directing scarce environmental resources to those sites posing the greatest risk to human health and the environment. This portion of our testimony addresses our concerns about the current process that DOD uses to set environmental compliance priorities and to provide the funding necessary to meet these priorities. We will also discuss proposed changes in DOD’s compliance program that are designed to give DOD management and the Congress more useful information to help them manage and oversee the overall program. We and OSD have noted that DOD’s budgeting process does not provide DOD management or the Congress with the information needed to provide for proper oversight. A DOD initiative to provide the data needed to better manage the program has developed new definitions for EPA classes that DOD used to set priorities for compliance projects. However, the initiative could dilute the highest-priority category by increasing the number of highest-priority projects, and thus significantly reduce management oversight. DOD’s process for compliance requires the services and the Defense Logistics Agency (DLA) to determine environmental requirements and obtain funding for priority needs. DOD’s current policy uses an EPA five-category classification system that places the highest priority on those projects at facilities currently out of compliance (Class I) and lesser priority on those not compliance-driven or time sensitive. In November 1993, we reported that overall environmental compliance funding procedures varied widely among the services. We noted that many military services’ compliance-related appropriations requests did not provide detailed project information, impeding DOD’s and the Congress’ ability to measure costs and progress. Similarly, OSD’s Comptroller office stated in July 1994 that DOD’s budget reports provide only appropriation-level data that are not sufficient to manage its overall environmental program. The OSD established a working group to develop procedures to ensure that necessary data such as amounts budgeted and spent can be obtained and reported in detail. The military services’ internal audit groups have also identified problems with controls over compliance project justifications, fund allocations, and expenditures. DOD began an environmental quality initiative in 1995 to promote consistency in compliance definitions, categories, and requirements. 
DOD has identified goals, strategies, budget items, and measures of merit for three of its environmental quality pillars: pollution prevention, conservation, and compliance. DOD developed new definitions for four of the five EPA classes, but it has not provided specific guidance to the military services. We agree with DOD’s general approach, but have concerns that the class definitions in DOD’s plan (1) are a significant departure from DOD’s past definitions, (2) do not conform to EPA’s definitions, and (3) may expand the number of projects that qualify for funding under compliance Class I, without being able to distinguish among different types, as shown in the following examples: While EPA explicitly limits Class I to facilities currently out of compliance as documented by notices of violation or consent agreements, DOD’s new definition adds projects to address requirements where the facility may not be out of compliance for 2 or more years. Although specific procedures have not yet been finalized, DOD’s descriptions also indicate that items that EPA includes in Class III (such as inventories, surveys, studies, and assessments) could also be routinely funded as Class I projects. EPA states that designating a project as, for example, Class III, does not mean the project is necessarily less important than one in Classes I or II. Nonetheless, the inclusion of greater numbers of indistinguishable projects under a redefined Class I could reduce management oversight. In discussing this issue OSD officials said it was not their intent to dilute the compliance priority setting process. Rather, they stated that they wished to permit better recognition of must-fund items within each class. They said it may be too late to define the classes again this year, but that they will act to ensure that the priorities are not diluted in the process. Each military service performs environmental self-assessments as a means of helping it determine its environmental needs. For example, the Air Force has its Environmental Compliance and Assessment Management Program, and the Army has its Environmental Compliance Assessment System. The services have set up standards for these self-assessments, which generally require an internal assessment performed by the installation each year together with an external assessment, usually performed by the major command every 3 years. The findings from these assessments may identify regulatory requirements and currently or soon-to-be-out-of compliance conditions and are thus used to help classify projects selected to correct the situation. This helps installations to rank project lists. Other means that installations use to develop requirements include inspections by EPA and state or local regulators. For example, regulators in California now have what they consider to be a cooperative working relationship with many military installations. An effort commended by regulators in California was a partnership with DOD called the California Military Environmental Coordinating Committee. The Committee brings together California regulatory agencies, EPA, and the military to help solve mutual problems. The regulators believe that the Committee fosters cooperation, coordination, and communication between DOD and the regulator community. As requirements are developed at the installation level, they are also ranked. 
As noted previously, while installations prioritize projects according to EPA’s classification system, they also add additional rankings to differentiate projects within each classification. Some installations rank projects as high, medium, or low within each class, according to how critical they are to the installations’ environmental programs. As an example, all Army Class I-designated projects are, by definition, of high importance. However, Class II and III projects are further subdivided into high, medium, and low, and this distinction is used to further rank the projects for funding. Our initial discussions with the military services’ headquarters officials indicate that only the Marine Corps prioritizes individual compliance projects among installations so that a service-wide prioritized list of environmental projects is developed. According to a headquarters program manager, the Corps has been prioritizing at its 25 installations for about 5 years. The Marine Corps headquarters officials revise this list as needs change. Installations develop a ranked unconstrained list of environmental compliance projects and forward these detailed lists to their major command. Major commands review projects, scrub their funding requests, and decide which projects they will support. Major commands forward their approved list to headquarters for further review and approval. The review process varies by service, but generally the review is directed at the major command program level and, except for the Marine Corps and DLA, does not normally include a review of specific projects and priorities. However, the military services’ headquarters officials review some projects, like military construction, or they may sample individual projects as shown in the following examples: The Army Environmental Center reviews a sample of projects forwarded by the major commands to the service’s headquarters. The Center’s goal is to improve future project submittals. DLA reviews all project submittals. The Marine Corps is the only service that takes this process to completion by setting priorities at the major command and headquarters levels. DOD’s policy has placed the highest priority on projects for facilities currently out of compliance and subject to an enforcement action. The next highest priority facilities are those facilities that will be out of compliance soon. The services’ environmental headquarters officials told us that they fund, within budget limitations, all EPA Class I and EPA Class II projects that will be out of compliance soon. (As noted previously, such projects and others would be considered Class I under DOD’s plans for fiscal year 1998.) In addition, the services also fund recurring “must-fund” activities. These activities may include but not be limited to manpower, fees and permits, sampling and analysis, and hazardous waste disposal. Most environmental compliance funding is provided to the services through the Operation and Maintenance (O&M) appropriation. However, significant funds are also provided by the Military Construction appropriation, especially for the Navy and Air Force. The Defense Business Operations Fund (DBOF), a nonappropriated account, also provides significant funds for environmental compliance within the Navy and DLA. DLA funds over 98 percent of its compliance activity from DBOF. OSD and military service headquarters do not currently monitor expenditures for environmental compliance projects. 
As noted earlier, the services' major commands review proposed installation projects. Our visits to each headquarters office and selected commands and facilities showed little monitoring of specific expenditures except at the installation level. Funds from DOD and the services' O&M accounts, which provide the majority of compliance funding, can be authorized by major commands or installation officials to be used for other purposes—environmental or nonenvironmental. DOD and the services currently cannot provide overall environmental compliance budget execution data to show that the projects they funded were actually executed. DOD has established a joint working group to develop operating procedures to implement a new budget execution reporting procedure. The extent to which actual expenditures will be monitored under the new reporting procedures is not clear at this time. Some headquarters officials believe that installation commanders have adequate incentives to comply with environmental regulation, as they risk being fined and/or jailed for environmental violations discovered on their installations. The services' officials believe that indirect measures, such as the decreasing numbers of notices of violation and enforcement actions, can indicate that installation commanders are using their environmental funding for environmental projects. In a May 1995 report, the Army Audit Agency found that environmental managers (1) overestimated the number of must-fund environmental projects; (2) overestimated project costs; and (3) did not keep adequate documentation to support requirements. The Agency reviewed 196 projects classified as must-fund for fiscal year 1993 and found that 51 (27 percent) costing $22 million should not have been classified as must-fund. In a May 1995 report, the Air Force Audit Agency found that, for the nine installations visited, 95 percent of projects funded with fiscal year 1993 environmental compliance moneys were qualified projects. However, major commands and installations authorized some projects that did not qualify for environmental compliance funding. The Agency found 17 projects valued at $3.2 million that did not qualify for environmental compliance funding. In a January 1996 report, the Naval Audit Service found that Navy and Marine Corps activities based the justification for one of six environmental projects proposed for the 1997 Military Construction Program on outdated data. The project was nonetheless considered partially valid. The Service examined another 43 projects that were not justified as environmental. The Naval Audit Service had similar overall findings in previous reviews of the 1996 and 1995 Military Construction programs. Messrs. Chairmen, this concludes our prepared statement. We will be glad to respond to any questions you may have. 
GAO discussed the Department of Energy's (DOE) and Department of Defense's (DOD) efforts to control the cost of environmental cleanup of their nuclear weapons facilities. GAO noted that: (1) if DOE incorporated more realistic land use assumptions into the selection process for cleanups, that could result in significant cost savings of $200 million to $600 million annually; (2) deactivation of surplus facilities, a shortened environmental restoration process, and privatization of some cleanup processes could also result in DOE savings; (3) DOE excess carryover balances of uncosted obligations and unobligated balances could be used to fund cleanup efforts and reduce its future budget requests; (4) DOD has evaluated about 70 percent of its 10,000 cleanup sites, but it has not ranked over half of the sites based on relative risk or ranked sites based on geographical or organizational boundaries; and (5) DOD does not have sufficient data to manage its environmental compliance programs.
DOD's final NSPS regulations establish a new human resources management system within the department that is intended to ensure its ability to attract, retain, and reward a workforce that is able to meet its critical mission. Further, the human resources management system is to provide DOD with greater flexibility in the way employees are to be paid, developed, evaluated, afforded due process, and represented by employee representatives while reflecting the principles of merit and fairness embodied in the statutory merit systems principles. As with any major change management initiative, the final regulations have raised a number of concerns among employees, employee representatives, and other stakeholders because they do not contain many of the important details of how the system will be implemented. We have reported that individuals inevitably worry during any change management initiative because of uncertainty over new policies and procedures. A key practice to help address this worry is to involve employees and their representatives to obtain their ideas and gain their ownership for the initiative throughout the development process and related implementation effort. We continue to believe that many of the basic principles underlying DOD's final regulations are generally consistent with proven approaches to strategic human capital management. Today, I will provide our observations on the following elements of DOD's human resources management system as outlined in the final regulations—pay and performance management, staffing and employment, workforce shaping, adverse actions and appeals, and labor management relations. Earlier this year, we testified that DOD's proposed NSPS regulations reflected a growing understanding that the federal government needs to fundamentally rethink its current approach to pay and better link pay to individual and organizational performance. To this end, DOD's final regulations take another valuable step toward a modern performance management system that provides for elements of a more market-based and performance-oriented pay system. For instance, the final regulations provide for the creation of pay bands for most of DOD's civilian workforce that would replace the 15-grade General Schedule (GS) system now in place for most civil service employees. Specifically, DOD, after coordination with OPM, may define occupational career groups and levels of work within each career group that are tailored to the department's missions and components. The final regulations also give DOD considerable discretion, after coordination with OPM, to set and annually adjust the minimum and maximum rates of pay for each of those career groups or bands, based on national and local labor market factors and other conditions such as availability of funds. In addition, the regulations provide that DOD may, after coordination with OPM, set and annually adjust local market supplements for different career groups or for different bands within the same career group. We strongly support the need to expand pay reform in the federal government and believe that implementing more market-based and performance-oriented pay systems is both doable and desirable. The federal government's current pay system is heavily weighted toward rewarding length of service rather than individual performance and contributions, including requiring across-the-board annual pay increases, even to poor performers.
It also compensates employees living in various localities without adequately considering the local labor market rates applicable to the diverse types of occupations in the area. Regarding performance management, we identified several issues in earlier testimonies that DOD will need to continue to address as it moves forward with the implementation of the system. These include aligning individual performance to organizational goals, using competencies to provide a fuller assessment of employee performance, making meaningful distinctions in employee performance, and continuing to incorporate adequate safeguards to ensure fairness and guard against abuse. Consistent with leading practices, the DOD final regulations stipulate that the performance management system will, among other things, align individual performance expectations with the department's overall mission and strategic goals, organizational program and policy objectives, annual performance plans, and other measures of performance. DOD's performance management system can be a vital tool for aligning the organization with desired results and creating a "line of sight" showing how team, unit, and individual performance can contribute to overall organizational results. To this end, an explicit alignment of daily activities with broader results is one of the defining features of effective performance management systems in high-performing organizations. In our previous testimony on DOD's proposed NSPS regulations, we testified that the regulations did not detail how DOD was to achieve such an alignment. The final regulations were not modified to provide such details. These details do matter and are critical issues that will need to be addressed as DOD's efforts in implementing a new personnel system move forward. In the final regulations, performance expectations may take several different forms. These include, among others, goals or objectives that set general or specific performance targets at the individual, team, or organizational level; a particular work assignment, including characteristics such as quality, quantity, accuracy, or timeliness; core competencies that an employee is expected to demonstrate on the job; or the contributions that an employee is expected to make. In a previous testimony, we reported that DOD needed to define, in more detail than was provided in the proposed regulations, how performance expectations will be set. In addition, public comments on the proposed regulations expressed concerns about the variety of forms that performance expectations could take. In response to public comments on its proposed regulations and feedback obtained during the meet and confer process with employee representatives, DOD modified the proposed regulations, so that the final regulations state that the basic performance expectations should be provided to employees in writing. As DOD develops its implementing issuances, the experiences of leading organizations suggest that DOD should reconsider its position of merely allowing, rather than requiring, the use of core competencies as a central feature of its performance management system. Based on our review of others' efforts and our own experience at GAO, core competencies can help reinforce employee behaviors and actions that support the department's mission, goals, and values and can provide a consistent message to employees about how they are expected to achieve results.
By including competencies such as change management, achieving results, teamwork and collaboration, cultural sensitivity, and information sharing, DOD could create a shared responsibility for organizational success and help ensure accountability for the transformation process. High-performing organizations make meaningful distinctions between acceptable and outstanding performance of individuals and appropriately reward those who perform at the highest level. These organizations seek to create pay, incentive, and reward systems that clearly link employee knowledge, skills, and contributions to organizational results. As in the proposed regulations, DOD's final regulations state that DOD supervisors and managers are to be held accountable for making meaningful distinctions among employees based on performance and contribution, fostering and rewarding excellent performance, and addressing poor performance. Consistent with the proposed regulations, the final regulations provide for a multilevel rating system for evaluating employee performance. However, the final regulations do not specify exactly how many rating levels will be used. We urge DOD to consider using at least four summary rating levels to allow for greater performance-rating and pay differentiation. This approach is in the spirit of the new governmentwide performance-based pay system for the Senior Executive Service (SES), which requires at least four rating levels to provide a clear and direct link between SES performance and pay as well as to make meaningful distinctions based on relative performance. Cascading this approach to other levels of employees can help DOD recognize and reward employee contributions and achieve the highest levels of individual performance. As DOD develops its implementing issuances, it needs to continue building safeguards into its performance management system to ensure fairness and guard against abuse. A concern that employees often express about any pay for performance system is supervisors' ability and willingness to assess performance fairly. Using safeguards, such as having an independent body to conduct reasonableness reviews of performance management decisions, can help allay these concerns and build a fair, credible, and transparent system. In our previous testimonies, we noted that although DOD's proposed regulations provided for some safeguards, additional safeguards should be developed. However, the final regulations do not offer details on how DOD would, among other things, (1) promote consistency and provide general oversight of the performance management system to ensure it is administered in a fair, credible, and transparent manner; and (2) incorporate predecisional internal safeguards to achieve consistency and equity, and ensure nondiscrimination and nonpoliticization of the performance management process. As DOD moves forward, it will need to commit itself to define, in more detail than is currently provided, how it plans to review such matters as the establishment and implementation of the performance appraisal system—and, subsequently, performance rating decisions, pay determinations, and promotion actions—before these actions are finalized, to ensure they are merit based. The authorizing legislation allows DOD to implement additional hiring flexibilities that would permit it to (1) determine that there is a severe shortage of candidates or a critical hiring need and (2) use direct-hire procedures for these positions.
Under current law, OPM, rather than the agency, determines whether there is a severe shortage of candidates or a critical hiring need. Direct-hire authority allows an agency to appoint candidates to positions without adherence to certain competitive examining requirements (such as veterans' preference or numerically rating candidates based on experience, training, and education) when there is a severe shortage of qualified candidates or a critical hiring need. In our previous testimonies, we noted that while we strongly endorse providing agencies with additional tools and flexibilities to attract and retain needed talent, additional analysis may be needed to ensure that any new hiring authorities are consistent with a focus on merit principles, the protection of employee rights, and results. Hiring flexibilities alone will not enable federal agencies to acquire the personnel necessary to accomplish their missions. Agencies must first conduct gap analyses of the critical skills and competencies needed in their workforces now and in the future, or they may not be able to effectively design strategies to hire, develop, and retain the best possible workforces. Similar to the proposed regulations, the final NSPS regulations allow DOD to reduce, realign, and reorganize the department's workforce through revised reduction-in-force (RIF) procedures. For example, employees would be placed on a retention list in the following order: tenure group (i.e., a career employee, including an employee serving an initial probationary period and an employee serving on a term appointment), veterans' preference eligibility (disabled veterans will be given additional priority), level of performance, and length of service. In a change from the proposed regulations, employees serving in an initial probationary period have a lower retention standing than career employees (i.e., permanent employees will be listed first, followed by employees serving an initial probationary period, and then by employees on temporary appointments). In another change, the final regulations reflect the use of more than one year's performance ratings in placing employees on the retention list. Under current regulations, length of service is considered ahead of level of performance. I have previously testified, prior to the enactment of NSPS, in support of revised RIF procedures that would require much greater consideration of an employee's performance. DOD's approach to reducing, realigning, and reorganizing should be oriented toward strategically shaping the makeup of its workforce if it is to ensure the orderly transfer of institutional knowledge and achieve mission results. DOD's final regulations include some changes that would allow DOD to rightsize the workforce more carefully through greater precision in defining competitive areas and by reducing the disruption associated with RIF orders as their effects ripple through an organization. Under the current regulations, the minimum RIF competitive area is broadly defined as an organization under separate administration in a local commuting area. Under the final NSPS regulations, DOD would be able to establish a minimum RIF competitive area on a more targeted basis, using one or more of the following factors: geographical location, line of business, product line, organizational unit, and funding line. The final regulations also provide DOD with the flexibility to develop additional competitive groupings on the basis of career group, occupational series or specialty, and pay band.
Under the current GS system, DOD can establish competitive groups based only on employees (1) in the excepted and competitive service, (2) under different excepted service appointment authorities, (3) with different work schedules, (4) in the same pay schedule, or (5) in trainee status. The new reforms could help DOD approach rightsizing more carefully; however, as I have stated, agencies first need to identify the critical skills and competencies needed in their workforce if they are to effectively implement their new human capital flexibilities. Similar to DOD's proposed regulations, the final regulations are intended to streamline the employee adverse action process. While the final regulations contain some features meant to ensure that employees receive due process, such as advance written notice of a proposed adverse action, they do not require DOD managers to provide employees with performance improvement periods, as is required under existing law for other federal employees. It is too early to tell what effect, if any, these final regulations will have on DOD's operations and employees or on other entities involved in the adverse action process, such as the Merit Systems Protection Board (MSPB). Close monitoring of any unintended consequences, such as on the MSPB and its ability to manage adverse action cases from DOD and other federal agencies, is warranted. Similar to the proposed regulations, DOD's final regulations also modify the current federal system by providing the Secretary of Defense with the sole, exclusive, and unreviewable authority to identify specific offenses for which removal is mandatory. In our previous testimonies, we noted that DOD's proposed regulations only indicated that its employees would be made aware of the mandatory removal offenses. We also noted that the process for determining and communicating which types of offenses require mandatory removal should be explicit and transparent, and involve relevant congressional stakeholders, employees, and employee representatives. Moreover, we suggested that DOD exercise caution when identifying specific removable offenses and the associated punishment, and noted that careful drafting of each removable offense is critical to ensure that the provision does not have unintended consequences. In a change from the proposed regulations, DOD's final regulations explicitly provide for publishing a list of the mandatory removal offenses in the Federal Register. Similar to its proposed regulations, DOD's final regulations generally preserve the employee's basic right to appeal mandatory removal offenses and other adverse action decisions to an independent body—the MSPB—but retain the provision to permit an internal DOD review of the initial decisions issued by MSPB adjudicating officials. Under this internal review, DOD can modify or reverse an initial decision or remand the matter back to the adjudicating official for further consideration. Unlike other criteria for review of initial decisions, DOD can modify or reverse an initial MSPB adjudicating official's decision where the department determines that the decision has a direct and substantial adverse effect on the department's national security mission. In our previous testimonies on the proposed regulations, we expressed some concern about the department's internal review process and pointed out that the proposed regulations do not offer additional details on that process, such as how the review will be conducted and who will conduct it.
We noted that an internal agency review process this important should be addressed in the regulations rather than in an implementing directive to ensure adequate transparency and employee confidence in the process. However, the final regulations were not modified to include such details. Similar to DOD's proposed regulations, the final regulations shorten the notification period before an adverse action can become effective, provide an accelerated MSPB adjudication process, and continue to give the MSPB administrative judges (AJs) and arbitrators less latitude to modify DOD-imposed penalties than under current practice. Under the current system, MSPB reviews penalties during the course of a disciplinary action against an employee to ensure that the agency considered relevant prescribed factors and exercised management discretion within tolerable limits of reasonableness. MSPB may mitigate or modify a penalty if the agency did not consider prescribed factors. The proposed regulations precluded the MSPB from modifying a penalty imposed on an employee by DOD for an adverse action unless such a penalty was so disproportionate to the basis of the action as to be "wholly without justification." In a change, under the final regulations the MSPB AJs and arbitrators will be able to mitigate a penalty only if it is "totally unwarranted in light of the pertinent circumstances," while the full MSPB Board may mitigate penalties in accordance with the standard prescribed in the NSPS authorizing legislation. As stated by DOD in the supplementary information to the final regulations, the "totally unwarranted in light of all pertinent circumstances" standard is similar to that recognized by the federal courts and is intended to limit mitigation of penalties by providing deference to an agency's penalty determination. The final regulations continue to encourage the use of alternative dispute resolution (ADR) and provide that this approach be subject to collective bargaining to the extent permitted by the final labor relations regulations. To resolve disputes in a more efficient, timely, and less adversarial manner, federal agencies have been expanding their human capital programs to include ADR approaches, including the use of ombudsmen as an informal alternative to addressing conflicts. As we have reported, ADR helps lessen the time and the cost burdens associated with the federal redress system and has the advantage of employing techniques that focus on understanding the disputants' underlying interests rather than techniques that focus on the validity of their positions. For these and other reasons, we believe that it is important to continue to promote ADR throughout the process. The final regulations recognize the right of employees to organize and bargain collectively. Similar to the proposed regulations, the final regulations would reduce the scope of collective bargaining by removing the requirement for DOD management to bargain on matters considered to be management rights—such as the policies and procedures for deploying personnel, assigning work, and introducing new technologies. However, in a departure from the proposed regulations, the final regulations provide that the Secretary of Defense may authorize bargaining on these management rights if the Secretary in his or her sole, exclusive, and unreviewable discretion determines that bargaining would be necessary to advance the department's mission or promote organizational effectiveness.
Our previous work on individual agencies' human capital systems has not directly addressed the scope of specific issues that should or should not be subject to collective bargaining and negotiations. At a forum we co-hosted exploring the concept of a governmentwide framework for human capital reform, which I will discuss later, participants generally agreed that the ability to organize, bargain collectively, and participate in labor organizations is an important principle to be retained in any framework for reform. DOD's final regulations create an internal DOD labor relations board—the National Security Labor Relations Board—to deal with most departmentwide labor relations policies and disputes rather than submit them to the Federal Labor Relations Authority. DOD's proposed regulations did not provide for any employee representative input into the appointment of board members. However, DOD's final regulations require that for the appointment of two of the three board members, the Secretary of Defense must consider candidates submitted by employee representatives. The Secretary nonetheless retains the authority to both appoint and remove any member. With the issuance of the final regulations, DOD faces multiple challenges to the successful implementation of its new human resources management system. We highlighted multiple implementation challenges at prior hearings and in our July 2005 report on DOD's efforts to design the new system. For information about these challenges identified in our prior work, as well as related human capital issues that could potentially affect the implementation of NSPS, see the "Highlights" pages from previous GAO products on DOD civilian personnel issues in appendix I. We continue to believe that addressing these challenges is critical to the success of DOD's new human resources management system. These challenges include establishing an overall communications strategy, ensuring sustained and committed leadership, providing adequate resources for the implementation of the new system, involving employees in implementing the system, and evaluating the new system after it has been implemented. Another significant challenge for DOD is to ensure an effective and ongoing two-way communications strategy, given DOD's size, its geographically and culturally diverse audiences, and the different command structures across DOD organizations. While we have reported that developing a comprehensive communications strategy is a key practice of a change management initiative, we reported in July 2005 that DOD lacks such a strategy. We recommended that the Secretary of Defense take steps to ensure that its communications strategy effectively addresses employee concerns and their information needs, and facilitates two-way communication between employees, employee representatives, and management. In prior testimonies, we also suggested that this communications strategy must involve a number of key players, including the Secretary of Defense. DOD also is challenged to provide adequate resources to implement its new personnel system, especially in times of increased fiscal constraints. OPM reports that the increased costs of implementing alternative personnel systems should be acknowledged and budgeted for up front. Based on the data provided by selected OPM personnel demonstration projects, we found that direct costs associated with salaries and training were among the major cost drivers of implementing pay for performance systems.
Certain costs, such as those for initial training on the new system, are one-time in nature and should not be built into the base of DOD's budget. Other costs, such as employees' salaries, are recurring and thus should be built into the base of DOD's budget for future years. DOD estimates that the overall cost associated with implementing the new human resources management system—including developing and delivering training, modifying automated personnel information systems, and starting up and sustaining the National Security Labor Relations Board—will be approximately $158 million through fiscal year 2008. Since experience has shown that additional resources are necessary to ensure sufficient planning, implementation, training, and evaluation for human capital reform, funding for NSPS will warrant close scrutiny by Congress as DOD implements the new system. We plan to evaluate the costs associated with the design and implementation of NSPS and look forward to sharing our findings with Congress upon completion of our review. One challenge DOD faces is the need to elevate, integrate, and institutionalize leadership responsibility for large-scale organizational change initiatives, such as its new human resources management system, to ensure success. A chief management officer or similar position could effectively provide the sustained and committed leadership essential to successfully completing these multiyear business transformation initiatives. Especially for an endeavor as critical as DOD's new human resources management system, such a position could serve to elevate attention to overcome an organization's natural resistance to change, marshal the resources needed to implement change, and build and maintain organizationwide commitment to new ways of doing business; integrate this new system with various management responsibilities so that they are no longer "stove-piped" and fit into other organizational transformation efforts in a comprehensive, ongoing, and integrated manner; and institutionalize accountability for the system to sustain the implementation of this critical human capital initiative. DOD faces a significant challenge in involving its employees, employee representatives, and other stakeholders in implementing NSPS. Similar to the proposed regulations, DOD's final regulations, while providing for continuing collaboration with employee representatives, do not identify a process for the continuing involvement of employees in implementation of NSPS. According to DOD, almost two-thirds of its 700,000 civilian employees are represented by 41 different labor unions, including over 1,500 separate bargaining units. Consistent with DOD's proposed regulations, its final NSPS regulations, among other things, would permit the Secretary of Defense to determine (1) the number of employee representatives allowed to engage in the collaboration process, and (2) the extent to which employee representatives are given an opportunity to discuss their views with and submit written comments to DOD officials. In addition, DOD's final regulations indicate that nothing in the continuing collaboration process will affect the right of the Secretary of Defense to determine the content of implementing guidance and to make this guidance effective at any time. DOD's final regulations will give designated employee representatives an opportunity to be briefed and to comment on the design and results of the new system's implementation.
The active involvement of all stakeholders will be critical to the success of NSPS. Substantive and ongoing involvement by employees and their representatives, both directly and indirectly, is crucial to the success of new initiatives, including implementing a modified classification and pay for performance system. This involvement must be early, active, meaningful, and continuing if employees are to gain a sense of understanding and ownership of the changes that are being made. The 30-day public comment period on the proposed regulations ended March 16, 2005. During this period, according to DOD, it received more than 58,000 comments. The public comment period was followed by a period during which DOD and OPM officials met and conferred with employee representatives to resolve differences on any portions of the proposed regulations where agreement had not been reached. Earlier this year, during testimony, we stated that the meet and confer process had to be meaningful and was critically important because there were many details of the proposed regulations that had not been defined. According to DOD, a significant issue raised in the public comments and during the meet and confer process concerned the lack of specificity in the proposed regulations. However, as we noted earlier in this statement, DOD still has considerable work to do to define the details for implementing its system. These details do matter, and how they are defined can have a direct bearing on whether or not the ultimate new human resources management system is both reasoned and reasonable. Evaluating the effect of NSPS will be an ongoing challenge for DOD. This element is especially important because DOD's final regulations would give managers more authority and responsibility for managing the new human resources management system than they have under the existing system. High-performing organizations continually review and revise their human capital management systems based on data-driven lessons learned and changing needs in the work environment. Collecting and analyzing data on the costs, benefits, and effects of NSPS will be the fundamental building block for measuring the effectiveness of NSPS in support of the mission and goals of the department. DOD's final regulations indicate that DOD will evaluate the regulations and their implementation. In our July 2005 report on DOD's efforts to design NSPS, we recommended that DOD develop procedures for evaluating NSPS that contain results-oriented performance measures and reporting requirements. We also recommended that these evaluation procedures be broadly modeled on the evaluation requirements of the OPM demonstration projects. Under the demonstration project authority, agencies must evaluate and periodically report on results, implementation of the demonstration project, costs and benefits, effects on veterans and other equal employment opportunity groups, adherence to merit system principles, and the extent to which the lessons from the project can be applied governmentwide. A set of balanced measures addressing a range of results and customer, employee, and external partner issues may also prove beneficial. An evaluation such as this would facilitate congressional oversight; allow for any midcourse corrections; assist DOD in benchmarking its progress with other efforts; and provide for documenting best practices and sharing lessons learned with employees, stakeholders, other federal agencies, and the public.
In commenting on our recommendation, the department stated that it has begun developing an evaluation plan and will ensure that the plan contains results-oriented performance measures and reporting mechanisms. If the department follows through with this effort, we believe that it will be responsive to our recommendation. The federal government is quickly approaching the point where "standard governmentwide" human capital policies and processes are neither standard nor governmentwide, raising the issue of whether a governmentwide framework for human capital reform should be established. The human capital environment in the federal government is changing, illustrated by the fact that DOD's new human capital authority joins that given to several other federal departments and agencies—such as the Department of Homeland Security (DHS), GAO, the National Aeronautics and Space Administration, and the Federal Aviation Administration—to help them strategically manage their human resources management systems to achieve results. To help advance the discussion concerning how governmentwide human capital reform should proceed, we and the National Commission on the Public Service Implementation Initiative co-hosted a forum on whether there should be a governmentwide framework for human capital reform and, if so, what this framework should include. While there was widespread recognition among the forum participants that a one-size-fits-all approach to human capital management is not appropriate for the challenges and demands faced by government, there was equally broad agreement that there should be a governmentwide framework to guide human capital reform. Further, a governmentwide framework should balance the need for consistency across the federal government with the desire for flexibility so that individual agencies can tailor human capital systems to best meet their needs. Striking this balance would not be easy, but it is important for maintaining a governmentwide system that is responsive enough to adapt to agencies' diverse missions, cultures, and workforces. While there were divergent views among the forum participants, there was general agreement on a set of principles, criteria, and processes that could serve as a starting point for further discussion in developing a governmentwide framework for advancing human capital reform, as shown in figure 1. We believe that these principles, criteria, and processes provide an effective framework for Congress and other decision makers to use as they consider governmentwide civil service reform proposals. Moving forward with human capital reform, in the short term, Congress should consider selected and targeted actions to continue accelerating the momentum to make strategic human capital management the centerpiece of the government's overall transformation effort. One option may be to provide agencies with one-time, targeted investments that are not built into their bases for future year budget requests. For example, Congress established the Human Capital Performance Fund to reward agencies' highest performing and most valuable employees. However, the Administration's draft "Working for America Act" proposes to repeal the Human Capital Performance Fund. According to OPM, the provision was never implemented due to lack of sufficient funding. We believe that a central fund has merit and can help agencies build the infrastructure needed to implement a more market-based and performance-oriented pay system.
To be eligible, agencies would submit plans for approval by OPM that incorporate features such as a link between pay for performance and the agency's strategic plan, employee involvement, ongoing performance feedback, and effective safeguards to ensure fair management of the system. In the first year of implementation, up to 10 percent of the amount appropriated for the fund would be available to train employees who are involved in making meaningful distinctions in performance. These features are similar to those cited in the draft proposal as the basis for OPM's certification for agencies to implement their new pay and performance management systems. In addition, as agencies develop their pay for performance systems, they will need to consider the appropriate mix between pay awarded as base pay increases and pay awarded as one-time cash bonuses, while still maintaining fiscally sustainable compensation systems that reward performance. A key question to consider is how the government can make an increasing percentage of federal compensation dependent on achieving individual and organizational results by, for example, providing more compensation as one-time cash bonuses rather than as permanent salary increases. However, agencies' use of cash bonuses or other monetary incentives has an effect on employees' retirement calculations since such payments are not included in calculating retirement benefits. Congress should consider potential legislative changes to allow cash bonuses that would otherwise be included as base pay increases to be calculated toward retirement and thrift savings benefits by specifically factoring bonuses into the employee's base pay for purposes of making contributions to the thrift savings plan and calculating the employee's "high-three" for retirement benefits. Consistent with our observations earlier this year, DOD's final NSPS regulations take another valuable step toward a modern performance management system that provides for a more market-based and performance-oriented pay system. DOD's final NSPS regulations are intended to align individual performance and pay with the department's critical mission requirements; provide meaningful distinctions in performance; and give greater priority to employee performance in connection with workforce rightsizing and reductions-in-force. However, how it is done, when it is done, and the basis on which it is done will be critical to the overall success of the new system. That is why it is critically important that DOD define the details for implementing its system and that it do so in conjunction with applicable key stakeholders. It is equally important for DOD to ensure that it has the necessary infrastructure in place to implement the system. DOD's regulations are especially critical and need to be implemented properly because of their potential implications for related governmentwide reform. However, compensation, pay, critical hiring, and workforce restructuring reforms should be the first step in any governmentwide reforms. For further information, please contact Derek B. Stewart, Director, Defense Capabilities and Management, at (202) 512-5559 or stewartd@gao.gov. For further information on governmentwide human capital issues, please contact J. Christopher Mihm, Managing Director, Strategic Issues, at (202) 512-6806 or mihmj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.
Individuals making key contributions to this statement include Sandra F. Bell, Renee S. Brown, William J. Doherty, George M. Duncan, Barbara L. Joyce, Julia C. Matta, Susan W. Tieh, and John S. Townes. The Department of Defense's (DOD) new personnel system—the National Security Personnel System (NSPS)—will have far-reaching implications not just for DOD, but for civil service reform across the federal government. The National Defense Authorization Act for Fiscal Year 2004 gave DOD significant authorities to redesign the rules, regulations, and processes that govern the way that more than 700,000 defense civilian employees are hired, compensated, promoted, and disciplined. In addition, NSPS could serve as a model for governmentwide transformation in human capital management. However, if not properly designed and effectively implemented, it could severely impede progress toward a more performance- and results-based system for the federal government as a whole. DOD's current process to design its new personnel management system consists of four stages: (1) development of design options, (2) assessment of design options, (3) issuance of proposed regulations, and (4) a statutory public comment period, a meet and confer period with employee representatives, and a congressional notification period. DOD's initial design process was unrealistic and inappropriate. However, after a strategic reassessment, DOD adjusted its approach to reflect a more cautious and deliberative process that involved more stakeholders. This report (1) describes DOD's process to design its new personnel management system, (2) analyzes the extent to which DOD's process reflects key practices for successful transformations, and (3) identifies the most significant challenges DOD faces in implementing NSPS. DOD's NSPS design process generally reflects four of six selected key practices for successful organizational transformations. First, DOD and OPM have developed a process to design the new personnel system that is supported by top leadership in both organizations. Second, from the outset, a set of guiding principles and key performance parameters have guided the NSPS design process. Third, DOD has a dedicated team in place to design and implement NSPS and manage the transformation process. Fourth, DOD has established a timeline, albeit ambitious, and implementation goals. The design process, however, is lacking in two other practices. First, DOD developed and implemented a written communication strategy document, but the strategy is not comprehensive. It does not identify all key internal stakeholders and their concerns, and does not tailor key messages to specific stakeholder groups. Failure to adequately consider a wide variety of people and cultural issues can lead to unsuccessful transformations. Second, while the process has involved employees through town hall meetings and other mechanisms, it has not included employee representatives on the working groups that drafted the design options. It should be noted that 10 federal labor unions have filed suit alleging that DOD failed to abide by the statutory requirements to include employee representatives in the development of DOD's new labor relations system authorized as part of NSPS. A successful transformation must provide for meaningful involvement by employees and their representatives to gain their input into and understanding of the changes that will occur.
GAO is making recommendations to improve the comprehensiveness of the NSPS communication strategy and to evaluate the impact of NSPS. DOD did not concur with one recommendation and partially concurred with two others. www.gao.gov/cgi-bin/getrpt?GAO-05-730. To view the full product, including the scope and methodology, click on the link above. For more information, contact Derek B. Stewart at (202) 512-5559 or stewartd@gao.gov. DOD will face multiple implementation challenges. For example, in addition to the challenges of continuing to involve employees and other stakeholders and providing adequate resources to implement the system, DOD faces the challenges of ensuring an effective, ongoing two-way communication strategy and evaluating the new system. In recent testimony, GAO stated that DOD's communication strategy must include the active and visible involvement of a number of key players, including the Secretary of Defense, for successful implementation of the system. Moreover, DOD must ensure sustained and committed leadership after the system is fully implemented and the NSPS Senior Executive and the Program Executive Office transition out of existence. To provide sustained leadership attention to a range of business transformation initiatives, like NSPS, GAO recently recommended the creation of a chief management official at DOD. The Department of Defense's (DOD) new human resources management system—the National Security Personnel System (NSPS)—will have far-reaching implications for civil service reform across the federal government. The 2004 National Defense Authorization Act gave DOD significant flexibilities for managing more than 700,000 defense civilian employees. Given DOD's massive size, NSPS represents a huge undertaking for DOD. DOD's initial process to design NSPS was problematic; however, DOD adjusted its approach to a more deliberative process that involved more stakeholders. NSPS could, if designed and implemented properly, serve as a model for governmentwide transformation in human capital management. However, if not properly designed and implemented, it could severely impede progress toward a more performance- and results-based system for the federal government as a whole. Many of the principles underlying the proposed NSPS regulations are generally consistent with proven approaches to strategic human capital management. For instance, the proposed regulations provide for (1) elements of a flexible and contemporary human resources management system—such as pay bands and pay for performance; (2) DOD to rightsize its workforce when implementing reduction-in-force orders by giving greater priority to employee performance in its retention decisions; and (3) continuing collaboration with employee representatives. The 30-day public comment period on the proposed regulations ended March 16, 2005. DOD and OPM have notified the Congress that they are preparing to begin the meet and confer process with employee representatives who provided comments on the proposed regulations. The meet and confer process is critically important because there are many details of the proposed regulations that have not been defined, especially in the areas of pay and performance management, adverse actions and appeals, and labor-management relations. (It should be noted that 10 federal labor unions have filed suit alleging that DOD failed to abide by the statutory requirements to include employee representatives in the development of DOD's new labor relations system authorized as part of NSPS.)
GAO has several areas of concern: the proposed regulations do not (1) define the details of the implementation of the system, including such issues as adequate safeguards to help ensure fairness and guard against abuse; (2) require, as GAO believes they should, the use of core competencies to communicate to employees what is expected of them on the job; and (3) identify a process for the continuing involvement of employees in the planning, development, and implementation of NSPS. On February 14, 2005, DOD and the Office of Personnel Management (OPM) released for public comment the proposed NSPS regulations. This testimony provides GAO’s preliminary observations on selected provisions of the proposed regulations. Also, GAO believes that DOD (1) would benefit if it develops a comprehensive communications strategy that provides for ongoing, meaningful two-way communication that creates shared expectations among employees, employee representatives, and stakeholders and (2) should complete a plan for implementing NSPS to include an information technology plan and a training plan. Until such a plan is completed, the full extent of the resources needed to implement NSPS may not be well understood. www.gao.gov/cgi-bin/getrpt?GAO-05-559T. To view the full product, including the scope and methodology, click on the link above. For more information, contact Derek B. Stewart at (202) 512-5559 or stewartd@gao.gov. The Department of Defense’s (DOD) new human resources management system—the National Security Personnel System (NSPS)—will have far-reaching implications for civil service reform across the federal government. The 2004 National Defense Authorization Act gave DOD significant flexibilities for managing more than 700,000 defense civilian employees. Given DOD’s massive size, NSPS represents a huge undertaking for DOD. DOD’s initial process to design NSPS was problematic; however, DOD adjusted its approach to a more deliberative process that involved more stakeholders. NSPS could, if designed and implemented properly, serve as a model for governmentwide transformation in human capital management. However, if not properly designed and implemented, it could severely impede progress toward a more performance- and results-based system for the federal government as a whole. Many of the principles underlying the proposed NSPS regulations are generally consistent with proven approaches to strategic human capital management. For instance, the proposed regulations provide for (1) elements of a flexible and contemporary human resources management system—such as pay bands and pay for performance; (2) DOD to rightsize its workforce when implementing reduction-in-force orders by giving greater priority to employee performance in its retention decisions; and (3) continuing collaboration with employee representatives. The 30-day public comment period on the proposed regulations ended March 16, 2005. DOD and OPM have notified the Congress that they are preparing to begin the meet and confer process with employee representatives who provided comments on the proposed regulations. The meet and confer process is critically important because there are many details of the proposed regulations that have not been defined. (It should be noted that 10 federal labor unions have filed suit alleging that DOD failed to abide by the statutory requirements to include employee representatives in the development of DOD’s new labor relations system authorized as part of NSPS.) 
GAO has three primary areas of concern: the proposed regulations do not (1) define the details of the implementation of the system, including such issues as adequate safeguards to help ensure fairness and guard against abuse; (2) require, as GAO believes they should, the use of core competencies to communicate to employees what is expected of them on the job; and (3) identify a process for the continuing involvement of employees in the planning, development, and implementation of NSPS. On February 14, 2005, DOD and the Office of Personnel Management (OPM) released for public comment the proposed NSPS regulations. This testimony (1) provides GAO's preliminary observations on selected provisions of the proposed regulations, (2) discusses the challenges DOD faces in implementing the new system, and (3) suggests a governmentwide framework to advance human capital reform. Going forward, GAO believes that (1) the development of the position of Deputy Secretary of Defense for Management, who would act as DOD's Chief Management Officer, is essential to elevate, integrate, and institutionalize responsibility for the success of DOD's overall business transformation efforts, including its new human resources management system; (2) DOD would benefit if it develops a comprehensive communications strategy that provides for ongoing, meaningful two-way communication that creates shared expectations among employees, employee representatives, and stakeholders; and (3) DOD must ensure that it has the institutional infrastructure in place, including a modern performance management system and an independent, efficient, effective, and credible external appeals process, to make effective use of its new authorities before they are operationalized. www.gao.gov/cgi-bin/getrpt?GAO-05-517T. To view the full product, including the scope and methodology, click on the link above. For more information, contact Derek B. Stewart at (202) 512-5559 or stewartd@gao.gov. GAO strongly supports the concept of modernizing federal human capital policies, including providing reasonable flexibility. The federal government needs a framework to guide human capital reform. Such a framework would consist of a set of values, principles, processes, and safeguards that would provide consistency across the federal government but be adaptable to agencies' diverse missions, cultures, and workforces. The Department of Defense's (DOD) new human resources management system—the National Security Personnel System (NSPS)—will have far-reaching implications for the management of the department and for civil service reform across the federal government. The National Defense Authorization Act for Fiscal Year 2004 gave DOD significant authorities to redesign the rules, regulations, and processes that govern the way that more than 700,000 defense civilian employees are hired, compensated, promoted, and disciplined. In addition, NSPS could serve as a model for governmentwide transformation in human capital management. However, if not properly designed and effectively implemented, it could severely impede progress toward a more performance- and results-based system for the federal government as a whole. Given DOD's massive size and its geographically and culturally diverse workforce, NSPS represents a huge undertaking for DOD. DOD's initial process to design NSPS was problematic; however, after a strategic reassessment, DOD adjusted its approach to reflect a more cautious, deliberate process that involved more stakeholders, including OPM.
Many of the principles underlying the proposed NSPS regulations are generally consistent with proven approaches to strategic human capital management. For instance, the proposed regulations provide for (1) elements of a flexible and contemporary human resources management system—such as pay bands and pay for performance; (2) DOD to rightsize its workforce when implementing reduction-in-force orders by giving greater priority to employee performance in its retention decisions; and (3) continuing collaboration with employee representatives. (It should be noted that 10 federal labor unions have filed suit alleging that DOD failed to abide by the statutory requirements to include employee representatives in the development of DOD’s new labor relations system authorized as part of NSPS.) GAO has three primary areas of concern: the proposed regulations do not (1) define the details of the implementation of the system, including such issues as adequate safeguards to help ensure fairness and guard against abuse; (2) require, as GAO believes they should, the use of core competencies to communicate to employees what is expected of them on the job; and (3) identify a process for the continuing involvement of employees in the planning, development, and implementation of NSPS. On February 14, 2005, the Secretary of Defense and Acting Director of the Office of Personnel Management (OPM) released for public comment the proposed NSPS regulations. This testimony (1) provides GAO’s preliminary observations on selected provisions of the proposed regulations, (2) discusses the challenges DOD faces in implementing the new system, and (3) suggests a governmentwide framework to advance human capital reform. Going forward, GAO believes that (1) the development of the position of Deputy Secretary of Defense for Management, who would act as DOD’s Chief Management Officer, is essential to elevate, integrate, and institutionalize responsibility for the success of DOD’s overall business transformation efforts, including its new human resources management system; (2) DOD would benefit if it develops a comprehensive communications strategy that provides for ongoing, meaningful two-way communication that creates shared expectations among employees, employee representatives, and stakeholders; and (3) DOD must ensure that it has the institutional infrastructure in place to make effective use of its new authorities before they are operationalized. www.gao.gov/cgi-bin/getrpt?GAO-05-432T. To view the full product, including the scope and methodology, click on the link above. For more information, contact Derek B. Stewart at (202) 512-5559 or stewartd@gao.gov. GAO strongly supports the concept of modernizing federal human capital policies, including providing reasonable flexibility. There is general recognition that the federal government needs a framework to guide human capital reform. Such a framework would consist of a set of values, principles, processes, and safeguards that would provide consistency across the federal government but be adaptable to agencies’ diverse missions, cultures, and workforces. During its downsizing in the early 1990s, the Department of Defense (DOD) did not focus on strategically reshaping its civilian workforce. GAO was asked to address DOD’s efforts to strategically plan for its future civilian workforce at the Office of the Secretary of Defense (OSD), the military services’ headquarters, and the Defense Logistics Agency (DLA). 
Specifically, GAO determined: (1) the extent to which civilian strategic workforce plans have been developed and implemented to address future civilian workforce requirements, and (2) the major challenges affecting the development and implementation of these plans. OSD, the service headquarters, and DLA have recently taken steps to develop and implement civilian strategic workforce plans to address future civilian workforce needs, but these plans generally lack some key elements essential to successful workforce planning. As a result, OSD, the military services’ headquarters, and DLA—herein referred to as DOD and the components—do not have comprehensive strategic workforce plans to guide their human capital efforts. None of the plans included analyses of the gaps between critical skills and competencies (a set of behaviors that are critical to work accomplishment) currently needed by the workforce and those that will be needed in the future. Without including gap analyses, DOD and the components may not be able to effectively design strategies to hire, develop, and retain the best possible workforce. Furthermore, none of the plans contained results-oriented performance measures that could provide the data necessary to assess the outcomes of civilian human capital initiatives. GAO recommends that DOD and the components include certain key elements in their civilian strategic workforce plans to guide their human capital efforts. DOD concurred with one of our recommendations and partially concurred with two others because it believes that the department has undertaken analyses of critical skills gaps and is using strategies and personnel flexibilities to fill identified skills gaps. We cannot verify DOD’s statement because DOD was unable to provide the gap analyses. In addition, we found that the strategies being used by the department have not been derived from analyses of gaps between the current and future critical skills and competencies needed by the workforce. The major challenge that DOD and most of the components face in their efforts to develop and implement strategic workforce plans is their need for information on current competencies and those that will likely be needed in the future. This problem results from DOD’s and the components’ not having developed tools to collect, store, and manage data on workforce competencies. Without this information, it is not clear whether they are designing and funding workforce strategies that will effectively shape their civilian workforces with the appropriate competencies needed to accomplish future DOD missions. Senior department and component officials all acknowledged this shortfall and told us that they are taking steps to address this challenge. Though these are steps in the right direction, the lack of information on current competencies and future needs is a continuing problem that several organizations, including GAO, have previously identified. www.gao.gov/cgi-bin/getrpt?GAO-04-753. To view the full product, including the scope and methodology, click on the link above. For more information, contact Derek Stewart at (202) 512-5559 or stewartd@gao.gov. People are at the heart of an organization’s ability to perform its mission. Yet a key challenge for the Department of Defense (DOD), as for many federal agencies, is to strategically manage its human capital.
DOD’s proposed National Security Personnel System would provide for wide-ranging changes in DOD’s civilian personnel pay and performance management and other human capital areas. Given the massive size of DOD, the proposal has important precedent-setting implications for federal human capital management. GAO strongly supports the need for government transformation and the concept of modernizing federal human capital policies both within DOD and for the federal government at large. The federal personnel system is clearly broken in critical respects—designed for a time and workforce of an earlier era and not able to meet the needs and challenges of today’s rapidly changing and knowledge-based environment. The human capital authorities being considered for DOD have far-reaching implications for the way DOD is managed as well as significant precedent-setting implications for the rest of the federal government. GAO is pleased that, as the Congress has reviewed DOD’s legislative proposal, it has added a number of important safeguards, including many along the lines GAO has been suggesting, that will help DOD maximize its chances of success in addressing its human capital challenges and minimize the risk of failure. This testimony provides GAO’s observations on DOD human capital reform proposals and the need for governmentwide reform. More generally, GAO believes that agency-specific human capital reforms should be enacted to the extent that the problems being addressed and the solutions offered are specific to a particular agency (e.g., military personnel reforms for DOD). Several of the proposed DOD reforms meet this test. In GAO’s view, the relevant sections of the House’s version of the National Defense Authorization Act for Fiscal Year 2004 and the proposal that is being considered as part of this hearing contain a number of important improvements over the initial DOD legislative proposal. www.gao.gov/cgi-bin/getrpt?GAO-03-851T. To view the full testimony, click on the link above. For more information, contact Derek Stewart at (202) 512-5559 or stewartd@gao.gov. Moving forward, GAO believes it would be preferable to employ a governmentwide approach to address human capital issues and the need for certain flexibilities that have broad-based application and serious potential implications for the civil service system, in general, and the Office of Personnel Management, in particular. GAO believes that several of the reforms that DOD is proposing fall into this category (e.g., broad banding, pay for performance, re-employment and pension offset waivers). In these situations, GAO believes it would be both prudent and preferable for the Congress to provide such authorities governmentwide and ensure that appropriate performance management systems and safeguards are in place before the new authorities are implemented by the respective agency. Importantly, employing this approach is not intended to delay action on DOD’s or any other individual agency’s efforts, but rather to accelerate needed human capital reform throughout the federal government in a manner that ensures reasonable consistency on key principles within the overall civilian workforce. This approach also would help to maintain a level playing field among federal agencies in competing for talent and would help avoid further fragmentation within the civil service. Many of the basic principles underlying DOD’s civilian human capital proposal have merit and deserve serious consideration.
The federal personnel system is clearly broken in critical respects—designed for a time and workforce of an earlier era and not able to meet the needs and challenges of our current rapidly changing and knowledge-based environment. DOD’s proposal recognizes that, as GAO has stated and the experiences of leading public sector organizations here and abroad have found, strategic human capital management must be the centerpiece of any serious government transformation effort. More generally, from a conceptual standpoint, GAO strongly supports the need to expand broad banding and pay for performance-based systems in the federal government. However, moving too quickly or prematurely at DOD or elsewhere can significantly raise the risk of doing it wrong. This could also serve to severely set back the legitimate need to move to a more performance- and results-based system for the federal government as a whole. Thus, while it is imperative that we take steps to better link employee pay and other personnel decisions to performance across the federal government, how it is done, when it is done, and the basis on which it is done, can make all the difference in whether or not we are successful. One key need is to modernize performance management systems in executive agencies so that they are capable of supporting more performance-based pay and other personnel decisions. Unfortunately, based on GAO’s past work, most existing federal performance appraisal systems, including a vast majority of DOD’s systems, are not currently designed to support a meaningful performance-based pay system. The critical questions to consider are: should DOD and/or other agencies be granted broad-based exemptions from existing law, and if so, on what basis? Do DOD and other agencies have the institutional infrastructure in place to make effective use of any new authorities? This institutional infrastructure includes, at a minimum, a human capital planning process that integrates the agency’s human capital policies, strategies, and programs with its program goals and mission, and desired outcomes; the capabilities to effectively develop and implement a new human capital system; and, importantly, a set of adequate safeguards, including reasonable transparency and appropriate accountability mechanisms to ensure the fair, effective, and credible implementation of a new system. www.gao.gov/cgi-bin/getrpt?GAO-03-741T. To view the full testimony, click on the link above. For more information, contact Derek Stewart at (202) 512-5559 or stewartd@gao.gov. In GAO’s view, as an alternative to DOD’s proposed approach, Congress should consider providing governmentwide broad banding and pay for performance authorities that DOD and other federal agencies can use provided they can demonstrate that they have a performance management system in place that meets certain statutory standards, that can be certified to by a qualified and independent party, such as OPM, within prescribed timeframes. Congress should also consider establishing a governmentwide fund whereby agencies, based on a sound business case, could apply for funding to modernize their performance management systems and ensure that those systems have adequate safeguards to prevent abuse. This approach would serve as a positive step to promote high-performing organizations throughout the federal government while avoiding further human capital policy fragmentation.
Many of the basic principles underlying DOD’s civilian human capital proposals have merit and deserve serious consideration. The federal personnel system is clearly broken in critical respects—designed for a time and workforce of an earlier era and not able to meet the needs and challenges of our current rapidly changing and knowledge-based environment. DOD’s proposal recognizes that, as GAO has stated and the experiences of leading public sector organizations here and abroad have found, strategic human capital management must be the centerpiece of any serious government transformation effort. More generally, from a conceptual standpoint, GAO strongly supports the need to expand broad banding and pay for performance-based systems in the federal government. However, moving too quickly or prematurely at DOD or elsewhere can significantly raise the risk of doing it wrong. This could also serve to severely set back the legitimate need to move to a more performance- and results-based system for the federal government as a whole. Thus, while it is imperative that we take steps to better link employee pay and other personnel decisions to performance across the federal government, how it is done, when it is done, and the basis on which it is done, can make all the difference in whether or not we are successful. In our view, one key need is to modernize performance management systems in executive agencies so that they are capable of supporting more performance-based pay and other personnel decisions. Unfortunately, based on GAO’s past work, most existing federal performance appraisal systems, including a vast majority of DOD’s systems, are not currently designed to support a meaningful performance-based pay system. The critical questions to consider are whether DOD and/or other agencies should be granted broad-based exemptions from existing law, and if so, on what basis, and whether they have the institutional infrastructure in place to make effective use of the new authorities. This institutional infrastructure includes, at a minimum, a human capital planning process that integrates the agency’s human capital policies, strategies, and programs with its program goals and mission, and desired outcomes; the capabilities to effectively develop and implement a new human capital system; and, importantly, a set of adequate safeguards, including reasonable transparency and appropriate accountability mechanisms to ensure the fair, effective, and credible implementation of a new system. www.gao.gov/cgi-bin/getrpt?GAO-03-717T. To view the full report, including the scope and methodology, click on the link above. For more information, contact Derek Stewart at (202) 512-5559 or stewartd@gao.gov. In our view, Congress should consider providing governmentwide broad banding and pay for performance authorities that DOD and other federal agencies can use provided they can demonstrate that they have a performance management system in place that meets certain statutory standards, which can be certified to by a qualified and independent party, such as OPM, within prescribed timeframes. Congress should also consider establishing a governmentwide fund whereby agencies, based on a sound business case, could apply for funding to modernize their performance management systems and ensure that those systems have adequate safeguards to prevent abuse.
This approach would serve as a positive step to promote high-performing organizations throughout the federal government while avoiding fragmentation within the executive branch in the critical human capital area. People are at the heart of an organization’s ability to perform its mission. Yet, a key challenge for the Department of Defense (DOD), as for many federal agencies, is to strategically manage its human capital. With about 700,000 civilian employees on its payroll, DOD is the second largest federal employer of civilians in the nation. Although downsized 38 percent between fiscal years 1989 and 2002, this workforce has taken on greater roles as a result of DOD’s restructuring and transformation. DOD’s proposed National Security Personnel System (NSPS) would provide for wide-ranging changes in DOD’s civilian personnel pay and performance management, collective bargaining, rightsizing, and other human capital areas. The NSPS would enable DOD to develop and implement a consistent DOD-wide civilian personnel system. Given the massive size of DOD, the proposal has important precedent-setting implications for federal human capital management and OPM. DOD’s lack of attention to force shaping during its downsizing in the early 1990s has resulted in a workforce that is not balanced by age or experience and that puts at risk the orderly transfer of institutional knowledge. Human capital challenges are severe in certain areas. For example, DOD has downsized its acquisition workforce by almost half. More than 50 percent of the workforce will be eligible to retire by 2005. In addition, DOD faces major succession planning challenges at various levels within the department. Also, since 1987, the industrial workforce, such as depot maintenance, has been reduced by about 56 percent, with many of the remaining employees nearing retirement, calling into question the longer-term viability of the workforce. DOD is one of the agencies that has begun to address human capital challenges through strategic human capital planning. For example, in April 2002, DOD published a departmentwide strategic plan for civilians. Although a positive step toward fostering a more strategic approach to human capital management, the plan is not fully aligned with the overall mission of the department, nor is it results oriented. In addition, it was not integrated with military and contractor personnel planning. We strongly support the concept of modernizing federal human capital policies within DOD and the federal government at large. Providing reasonable flexibility to management in this critical area is appropriate provided adequate safeguards are in place to prevent abuse. We believe that Congress should consider both governmentwide and selected agency-specific changes, including for DOD, to address the pressing human capital issues confronting the federal government. In this regard, many of the basic principles underlying DOD’s civilian human capital proposals have merit and deserve serious consideration. At the same time, many are not unique to DOD and deserve broader consideration. This testimony provides GAO’s preliminary observations on aspects of DOD’s proposal to make changes to its civilian personnel system and discusses the implications of such changes for governmentwide human capital reform. Past reports have contained GAO’s views on what remains to be done to bring about lasting solutions for DOD to strategically manage its human capital. DOD has not always concurred with our recommendations.
www.gao.gov/cgi-bin/getrpt?GAO-03-493T. To view the full testimony, including the scope and methodology, click on the link above. For more information, contact Derek B. Stewart at (202) 512-5140 or stewartd@gao.gov. Agency-specific human capital reforms should be enacted to the extent that the problems being addressed and the solutions offered are specific to a particular agency (e.g., military personnel reforms for DOD). Several of the proposed DOD reforms meet this test. At the same time, we believe that Congress should consider incorporating additional safeguards in connection with several of DOD’s proposed reforms. In our view, it would be preferable to employ a governmentwide approach to address certain flexibilities that have broad-based application and serious potential implications for the civil service system, in general, and the Office of Personnel Management (OPM), in particular. We believe that several of the reforms that DOD is proposing fall into this category (e.g., broad-banding, pay for performance, re-employment and pension offset waivers). In these situations, it may be prudent and preferable for the Congress to provide such authorities on a governmentwide basis and in a manner that assures that appropriate performance management systems and safeguards are in place before the new authorities are implemented by the respective agency. However, in all cases, whether from a governmentwide authority or agency-specific legislation, such additional authorities should, in our view, be implemented (or operationalized) only when an agency has the institutional infrastructure in place to make effective use of the new authorities. Based on our experience, while the DOD leadership has the intent and the ability to implement the needed infrastructure, it is not consistently in place within the vast majority of DOD at the present time. The Department of Defense’s (DOD) civilian employees play key roles in such areas as defense policy, intelligence, finance, acquisitions, and weapon systems maintenance. Although downsized 38 percent between fiscal years 1989 and 2002, this workforce has taken on greater roles as a result of DOD’s restructuring and transformation. Responding to congressional concerns about the quality and quantity of, and the strategic planning for, the civilian workforce, GAO determined the following for DOD, the military services, and selected defense agencies: (1) the extent of top-level leadership involvement in civilian strategic planning; (2) whether elements in civilian strategic plans are aligned to the overall mission, focused on results, and based on current and future civilian workforce data; and (3) whether civilian and military personnel strategic plans or sourcing initiatives were integrated. Generally, civilian personnel issues appear to be an emerging priority among top leaders in DOD and the defense components. Although DOD began downsizing its civilian workforce more than a decade ago, it did not take action to strategically address challenges affecting the civilian workforce until it issued its civilian human capital strategic plan in April 2002. Top-level leaders in the Air Force, the Marine Corps, the Defense Contract Management Agency, and the Defense Finance and Accounting Service have initiated planning efforts and are working in partnership with their civilian human capital professionals to develop and implement civilian strategic plans; such leadership, however, was increasing in the Army and not as evident in the Navy.
Also, DOD has not provided guidance on how to integrate the components’ plans with the department-level plan. High-level leadership is critical to directing reforms and obtaining resources for successful implementation. The human capital strategic plans GAO reviewed for the most part lacked key elements found in fully developed plans. Most of the civilian human capital goals, objectives, and initiatives were not explicitly aligned with the overarching missions of the organizations. Consequently, DOD and the components cannot be sure that strategic goals are properly focused on mission achievement. Also, none of the plans contained results-oriented performance measures to assess the impact of their civilian human capital initiatives (i.e., programs, policies, and processes). Thus, DOD and the components cannot gauge the extent to which their human capital initiatives contribute to achieving their organizations’ mission. Finally, the plans did not contain data on the skills and competencies needed to successfully accomplish future missions; therefore, DOD and the components risk not being able to put the right people in the right place at the right time, which can result in diminished accomplishment of the overall defense mission. Moreover, the civilian strategic plans did not address how the civilian workforce will be integrated with its military counterparts or sourcing initiatives. DOD’s three human capital strategic plans--two military and one civilian--were prepared separately and were not integrated to form a seamless and comprehensive strategy, and they did not address how DOD plans to link its human capital initiatives with its sourcing plans, such as efforts to outsource non-core responsibilities. The components’ civilian plans acknowledge a need to integrate planning for civilian and military personnel—taking into consideration contractors—but have not yet done so. Without an integrated strategy, DOD may not effectively and efficiently allocate its scarce resources for optimal readiness. Between 1987 and 2002, the Department of Defense (DOD) downsized the civilian workforce in 27 key industrial facilities by about 56 percent. Many of the remaining 72,000 workers are nearing retirement. In recent years GAO has identified shortcomings in DOD’s strategic planning and was asked to determine (1) whether DOD has implemented our prior recommendation to develop and implement a depot maintenance strategic plan, (2) the extent to which the services have developed and implemented comprehensive strategic workforce plans, and (3) what challenges adversely affect DOD’s workforce planning. DOD has not implemented our October 2001 recommendation to develop and implement a DOD depot strategic plan that would delineate workloads to be accomplished in each of the services’ depots. The DOD depot system has been a key part of the department’s plan to support military systems in the past, but the increased use of the private sector to perform this work has decreased the role of these activities. While title 10 of the U.S. Code requires DOD to retain core capability and also requires that at least 50 percent of depot maintenance funds be spent for public-sector performance, questions remain about the future role of DOD depots. Absent a DOD depot strategic plan, the services have, in varying degrees, laid out a framework for strategic depot planning, but this planning is not comprehensive. Questions also remain about the future of arsenals and ammunition plants.
GAO reviewed workforce planning efforts for 22 maintenance depots, 3 arsenals, and 2 ammunition plants, which employed about 72,000 civilian workers in fiscal year 2002. GAO recommends that DOD complete revisions to core policy, promulgate a schedule for completing core computations, and complete depot strategic planning; develop a plan for arsenals and ammunition plants; develop strategic workforce plans; and coordinate the implementation of initiatives to address various workforce challenges. DOD concurred with 7 of our 9 recommendations but did not concur with 2 because it believes the proposed National Security Personnel System, which was submitted to Congress as a part of the DOD transformation legislation, will take care of these problems. We believe it is premature to assume this system will (1) be approved by Congress as proposed and (2) resolve these issues. The services have not developed and implemented strategic workforce plans to position the civilian workforce in DOD industrial activities to meet future requirements. While workforce planning is done for each of the industrial activities, generally it is short-term rather than strategic. Further, workforce planning is lacking in other areas that OPM guidance and high-performing organizations identify as key to successful workforce planning. Service workforce planning efforts (1) usually do not assess the competencies; (2) do not develop comprehensive retention plans; and (3) sometimes do not develop performance measures and evaluate workforce plans. Several challenges adversely affect DOD’s workforce planning for the viability of its civilian depot workforce. First, given the aging depot workforce and the retirement eligibility of over 40 percent of the workforce over the next 5 to 7 years, the services may have difficulty maintaining the depots’ viability. Second, the services are having difficulty implementing multiskilling—an industry and government best practice for improving the flexibility and productivity of the workforce—even though this technique could help depot planners do more with fewer employees. Finally, increased training funding and innovation in the training program will be essential for revitalizing the aging depot workforce. Staffing Levels, Age, and Retirement Eligibility of Civilian Personnel in Industrial Facilities (percent eligible to retire by 2009). www.gao.gov/cgi-bin/getrpt?GAO-03-472. To view the full report, including the scope and methodology, click on the link above. For more information, contact Derek Stewart at (202) 512-5559 or stewartd@gao.gov. People are critical to any agency transformation because they define an agency’s culture, develop its knowledge base, promote innovation, and are its most important asset. Thus, strategic human capital management at the Department of Defense (DOD) can help it marshal, manage, and maintain the people and skills needed to meet its critical mission. In November 2003, Congress provided DOD with significant flexibility to design a modern human resources management system.
On November 1, 2005, DOD and the Office of Personnel Management (OPM) jointly released the final regulations on DOD's new human resources management system, known as the National Security Personnel System (NSPS). Several months ago, with the release of the proposed regulations, GAO observed that some parts of the human resources management system raised questions for DOD, OPM, and Congress to consider in the areas of pay and performance management, adverse actions and appeals, and labor management relations. GAO also identified multiple implementation challenges for DOD once the final regulations for the new system were issued. This testimony provides GAO's overall observations on selected provisions of the final regulations. GAO believes that DOD's final NSPS regulations contain many of the basic principles that are consistent with proven approaches to strategic human capital management. For instance, the final regulations provide for (1) a flexible, contemporary, market-based and performance-oriented compensation system--such as pay bands and pay for performance; (2) giving greater priority to employee performance in its retention decisions in connection with workforce rightsizing and reductions-in-force; and (3) involvement of employee representatives throughout the implementation process, such as having opportunities to participate in developing the implementing issuances. However, future actions will determine whether such labor relations efforts will be meaningful and credible. Despite these positive aspects of the regulations, GAO has several areas of concern. First, DOD has considerable work ahead to define the important details for implementing its system--such as how employee performance expectations will be aligned with the department's overall mission and goals and other measures of performance, and how DOD would promote consistency and provide general oversight of the performance management system to ensure it is administered in a fair, credible, transparent manner. These and other critically important details must be defined in conjunction with applicable stakeholders. Second, the regulations merely allow, rather than require, the use of core competencies that can help to provide consistency and clearly communicate to employees what is expected of them. Third, although the regulations do provide for continuing collaboration with employee representatives, they do not identify a process for the continuing involvement of individual employees in the implementation of NSPS. Going forward, GAO believes that (1) DOD would benefit from developing a comprehensive communications strategy, (2) DOD must ensure that it has the necessary institutional infrastructure in place to make effective use of its new authorities, (3) a chief management officer or similar position is essential to effectively provide sustained and committed leadership to the department's overall business transformation effort, including NSPS, and (4) DOD should develop procedures and methods to initiate implementation efforts relating to NSPS. While GAO strongly supports human capital reform in the federal government, how it is done, when it is done, and the basis on which it is done can make all the difference in whether such efforts are successful. DOD's regulations are especially critical and need to be implemented properly because of their potential implications for related governmentwide reform. 
In this regard, in our view, classification, compensation, critical hiring, and workforce restructuring reforms should be pursued on a governmentwide basis before and separate from any broad-based labor-management or due process reforms.